aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---
1908.11315 | 2970031230 | Due to its hereditary nature, genomic data is linked not only to its owner but to close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to outlast the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (, 2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute-force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible-looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences, and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequence than state-of-the-art genomic sequence inference methods, obtaining up to a 15% improvement in accuracy. | Long-term security. As the sensitivity of genomic data does not degrade over time, access to an individual's genome poses a threat to her descendants, even years after her death. To the best of our knowledge, GenoGuard @cite_27 is the only attempt to provide long-term security.
GenoGuard, reviewed in , relies on Honey Encryption @cite_35, aiming to provide confidentiality in the presence of brute-force attacks; it only serves as a storage mechanism, i.e., it does not support selective retrieval or testing on encrypted data (as such, it is not "composable" with other techniques supporting privacy-preserving testing or data sharing). In this paper, we provide a security analysis of GenoGuard. In parallel to our work, @cite_15 recently propose attacks against probability model transforming encoders, and also evaluate them on GenoGuard. Using machine learning, they train a classifier to distinguish between the real and the decoy sequences, and exclude all decoy data for approximately 48%. | {
"cite_N": [
"@cite_35",
"@cite_27",
"@cite_15"
],
"mid": [
"2952306472",
"1714926069",
"2532520288",
"2786498526"
],
"abstract": [
"Genomic datasets are often associated with sensitive phenotypes. Therefore, the leak of membership information is a major privacy risk. Genomic beacons aim to provide a secure, easy to implement, and standardized interface for data sharing by only allowing yes/no queries on the presence of specific alleles in the dataset. Previously deemed secure against re-identification attacks, beacons were shown to be vulnerable despite their stringent policy. Recent studies have demonstrated that it is possible to determine whether the victim is in the dataset, by repeatedly querying the beacon for his/her single nucleotide polymorphisms (SNPs). In this work, we propose a novel re-identification attack and show that the privacy risk is more serious than previously thought. Using the proposed attack, even if the victim systematically hides informative SNPs (i.e., SNPs with very low minor allele frequency, MAF), it is possible to infer the alleles at positions of interest as well as the beacon query results with very high confidence. Our method is based on the fact that alleles at different loci are not necessarily independent. We use linkage disequilibrium and a high-order Markov chain-based algorithm for the inference. We show that in a simulated beacon with 65 individuals from the CEU population, we can infer membership of individuals with 95% confidence with only 5 queries, even when SNPs with MAF less than 0.05 are hidden. This means we need less than 0.5% of the number of queries that existing works require to determine beacon membership under the same conditions. We further show that countermeasures such as hiding certain parts of the genome or setting a query budget for the user would fail to protect the privacy of the participants under our adversary model.",
"Secure storage of genomic data is of great and increasing importance. The scientific community's improving ability to interpret individuals' genetic materials and the growing size of genetic database populations have been aggravating the potential consequences of data breaches. The prevalent use of passwords to generate encryption keys thus poses an especially serious problem when applied to genetic data. Weak passwords can jeopardize genetic data in the short term, but given the multi-decade lifespan of genetic data, even the use of strong passwords with conventional encryption can lead to compromise. We present a tool, called GenoGuard, for providing strong protection for genomic data both today and in the long term. GenoGuard incorporates a new theoretical framework for encryption called honey encryption (HE): it can provide information-theoretic confidentiality guarantees for encrypted data. Previously proposed HE schemes, however, can be applied to messages from, unfortunately, a very restricted set of probability distributions. Therefore, GenoGuard addresses the open problem of applying HE techniques to the highly non-uniform probability distributions that characterize sequences of genetic data. In GenoGuard, a potential adversary can attempt exhaustively to guess keys or passwords and decrypt via a brute-force attack. We prove that decryption under any key will yield a plausible genome sequence, and that GenoGuard offers an information-theoretic security guarantee against message-recovery attacks. We also explore attacks that use side information. Finally, we present an efficient and parallelized software implementation of GenoGuard.",
"The continuous decrease in cost of molecular profiling tests is revolutionizing medical research and practice, but it also raises new privacy concerns. One of the first attacks against privacy of biological data, proposed by in 2008, showed that, by knowing parts of the genome of a given individual and summary statistics of a genome-based study, it is possible to detect if this individual participated in the study. Since then, a lot of work has been carried out to further study the theoretical limits and to counter the genome-based membership inference attack. However, genomic data are by no means the only or the most influential biological data threatening personal privacy. For instance, whereas the genome informs us about the risk of developing some diseases in the future, epigenetic biomarkers, such as microRNAs, are directly and deterministically affected by our health condition including most common severe diseases. In this paper, we show that the membership inference attack also threatens the privacy of individuals contributing their microRNA expressions to scientific studies. Our results on real and public microRNA expression data demonstrate that disease-specific datasets are especially prone to membership detection, offering a true-positive rate of up to 77% at a false-negative rate of less than 1%. We present two attacks: one relying on the L_1 distance and the other based on the likelihood-ratio test. We show that the likelihood-ratio test provides the highest adversarial success and we derive a theoretical limit on this success. In order to mitigate the membership inference, we propose and evaluate both a differentially private mechanism and a hiding mechanism. We also consider two types of adversarial prior knowledge for the differentially private mechanism and show that, for relatively large datasets, this mechanism can protect the privacy of participants in miRNA-based studies against strong adversaries without degrading the data utility too much.
Based on our findings and given the current number of miRNAs, we recommend to only release summary statistics of datasets containing at least a couple of hundred individuals.",
"In this paper, we address the incremental classifier learning problem, which suffers from catastrophic forgetting. The main reason for catastrophic forgetting is that the past data are not available during learning. Typical approaches keep some exemplars for the past classes and use distillation regularization to retain the classification capability on the past classes and balance the past and new classes. However, there are four main problems with these approaches. First, the loss function is not efficient for classification. Second, there is an imbalance problem between the past and new classes. Third, the size of pre-decided exemplars is usually limited and they might not be distinguishable from unseen new classes. Fourth, the exemplars may not be allowed to be kept for a long time due to privacy regulations. To address these problems, we propose (a) a new loss function to combine the cross-entropy loss and distillation loss, (b) a simple way to estimate and remove the imbalance between the old and new classes, and (c) using Generative Adversarial Networks (GANs) to generate historical data and select representative exemplars during generation. We believe that the data generated by GANs pose far fewer privacy issues than real images because GANs do not directly copy any real image patches. We evaluate the proposed method on CIFAR-100, Flower-102, and MS-Celeb-1M-Base datasets and extensive experiments demonstrate the effectiveness of our method."
]
} |
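The side-information attack described in the row above (ruling out decoys that contradict known parts of the target sequence) can be illustrated with a toy simulation. This is a minimal sketch under illustrative assumptions: genotypes are drawn uniformly and independently here, whereas real genomic data (and GenoGuard's decoys) follow a highly correlated model, so the real attack requires the statistical reasoning developed in the paper.

```python
import random

random.seed(0)
SEQ_LEN = 100    # number of SNP positions
N_DECOYS = 1000  # decoys produced by decrypting under wrong passwords

# Toy model: each position holds a genotype in {0, 1, 2}, drawn uniformly.
real = [random.randint(0, 2) for _ in range(SEQ_LEN)]
decoys = [[random.randint(0, 2) for _ in range(SEQ_LEN)] for _ in range(N_DECOYS)]

# Side information: the adversary knows the target's genotype at a few positions.
known = random.sample(range(SEQ_LEN), k=10)

def consistent(seq):
    """A candidate survives only if it matches the side info everywhere."""
    return all(seq[p] == real[p] for p in known)

# Honey encryption guarantees every decryption looks plausible, but plausible
# decoys are still eliminated by checking consistency with the side information.
survivors = [d for d in decoys if consistent(d)] + [real]
print(f"{len(survivors)} of {N_DECOYS + 1} candidates remain")
```

Even 10 known positions shrink the candidate set to essentially the target sequence alone, mirroring the paper's finding that a small fraction of side information suffices against a low-entropy password.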
1908.10896 | 2970353712 | One of the biggest hurdles for customers when purchasing fashion online is the difficulty of finding products with the right fit. In order to provide a better online shopping experience, platforms need to find ways to recommend the right product sizes and the best fitting products to their customers. These recommendation systems, however, require customer feedback in order to estimate the most suitable sizing options. Such feedback is rare and often only available as natural text. In this paper, we examine the extraction of product fit feedback from customer reviews using natural language processing techniques. In particular, we compare traditional methods with more recent transfer learning techniques for text classification, and analyze their results. Our evaluation shows that the transfer learning approach ULMFiT is not only comparatively fast to train, but also achieves the highest accuracy on this task. The integration of the extracted information with actual size recommendation systems is left for future work. | Product fit recommendation has only been researched very recently. The main challenge is to estimate the true size of a product and the best fitting size for a customer, and match them accordingly. This has been handled in a number of different ways. In @cite_18, the true sizes of customers and products are estimated using a latent factor model, and recommendations are made with a similarity-based approach. In @cite_11, an extension using a Bayesian model has been proposed. A hierarchical Bayesian approach can be found in @cite_9. In @cite_6, the size recommendation problem is tackled by learning embeddings for customers and products. The embeddings are combined in a joint space, where metric learning and prototyping are applied in order to derive good representations for the different size classes. The authors of @cite_6 also published two datasets with their paper, which we utilize in our experiments. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_6",
"@cite_11"
],
"mid": [
"2749155890",
"2788493241",
"2894397262",
"2893160345"
],
"abstract": [
"We propose a novel latent factor model for recommending product size fits (Small, Fit, Large) to customers. Latent factors for customers and products in our model correspond to their physical true size, and are learnt from past product purchase and returns data. The outcome for a (customer, product) pair is predicted based on the difference between customer and product true sizes, and efficient algorithms are proposed for computing customer and product true size values that minimize two loss function variants. In experiments with Amazon shoe datasets, we show that our latent factor models incorporating personas and leveraging return codes show a 17-21% AUC improvement compared to baselines. In an online A/B test, our algorithms show an improvement of 0.49% in the percentage of Fit transactions over control.",
"Lack of calibrated product sizing in popular categories such as apparel and shoes leads to customers purchasing incorrect sizes, which in turn results in high return rates due to fit issues. We address the problem of product size recommendations based on customer purchase and return data. We propose a novel approach based on Bayesian logit and probit regression models with ordinal categories (Small, Fit, Large) to model size fits as a function of the difference between latent sizes of customers and products. We propose posterior computation based on mean-field variational inference, leveraging the Polya-Gamma augmentation for the logit prior, which results in simple updates, enabling our technique to efficiently handle large datasets. Our experiments with real-life shoe datasets show that our model outperforms the state of the art in 5 of 6 datasets and leads to an improvement of 17-26% in AUC over baselines when predicting size fit outcomes.",
"We introduce a hierarchical Bayesian approach to tackle the challenging problem of size recommendation in e-commerce fashion. Our approach jointly models a size purchased by a customer, and its possible return event: 1. no return, 2. returned too small, 3. returned too big. Those events are drawn following a multinomial distribution parameterized on the joint probability of each event, built following a hierarchy combining priors. Such a model allows us to incorporate extended domain expertise and article characteristics as prior knowledge, which in turn makes it possible for the underlying parameters to emerge thanks to sufficient data. Experiments are presented on real (anonymized) data from millions of customers along with a detailed discussion on the efficiency of such an approach within a large scale production system.",
"Product size recommendation and fit prediction are critical in order to improve customers' shopping experiences and to reduce product return rates. Modeling customers' fit feedback is challenging due to its subtle semantics, arising from the subjective evaluation of products, and imbalanced label distribution. In this paper, we propose a new predictive framework to tackle the product fit problem, which captures the semantics behind customers' fit feedback, and employs a metric learning technique to resolve label imbalance issues. We also contribute two public datasets collected from online clothing retailers."
]
} |
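The latent "true size" idea behind @cite_18 and its Bayesian refinements can be sketched in a few lines. This is a hypothetical toy version, not the papers' actual models: customer and product sizes are scalars, fit labels come from thresholding their difference, and a plain squared loss with made-up per-label targets stands in for the ordinal losses used in the cited work.

```python
import random

random.seed(1)

# Each customer c and product p has a scalar latent "true size";
# the fit outcome depends only on the difference s_c - s_p.
n_cust, n_prod = 20, 15
true_c = [random.gauss(0, 1) for _ in range(n_cust)]
true_p = [random.gauss(0, 1) for _ in range(n_prod)]

def label(diff):
    if diff > 0.5:
        return "Small"   # customer larger than product -> product runs small
    if diff < -0.5:
        return "Large"
    return "Fit"

data = [(c, p, label(true_c[c] - true_p[p]))
        for c in range(n_cust) for p in range(n_prod)]

# Recover latent sizes with SGD on a squared loss that pushes each pair's
# size difference toward a per-label target (a crude stand-in for the
# ordinal regression losses used in the literature).
target = {"Small": 1.0, "Fit": 0.0, "Large": -1.0}
s_c = [0.0] * n_cust
s_p = [0.0] * n_prod
lr = 0.05
for _ in range(200):
    for c, p, y in data:
        err = (s_c[c] - s_p[p]) - target[y]
        s_c[c] -= lr * err
        s_p[p] += lr * err

accuracy = sum(label(s_c[c] - s_p[p]) == y for c, p, y in data) / len(data)
print(f"fit-label accuracy on training pairs: {accuracy:.2f}")
```

Note the gauge freedom: adding a constant to every customer and product size leaves all differences unchanged, which is why only the differences (and hence the fit labels) are identifiable.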
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | To extend this notion of distance between @math and @math in , Kantorovich considered a relaxed version in @cite_4 @cite_20 . When an optimal transport map exists, the following second Wasserstein distance recovers . Further, @math is well-defined, even when an optimal transport map might not exist. In particular, it is defined as where @math denotes the set of all joint probability distributions (or equivalently, couplings) whose first and second marginals are @math and @math , respectively. Any coupling @math achieving the infimum is called the . eq:kantor_relax is also referred to as the primal formulation for Wasserstein- @math distance. Kantorovich also provided a dual formulation for eq:kantor_relax , well-known as the Kantorovich duality theorem [Theorem 1.3] villani2003topics , given by where @math denotes the constrained space of functions, defined as @math . | {
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2917201408",
"2035707149",
"2970661343",
"2528005577"
],
"abstract": [
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"We consider the optimal mass transportation problem in @math with measurably parameterized marginals under conditions ensuring the existence of a unique optimal transport map. We prove a joint measurability result for this map, with respect to the space variable and to the parameter. The proof needs to establish the measurability of some set-valued mappings, related to the support of the optimal transference plans, which we use to perform a suitable discrete approximation procedure. A motivation is the construction of a strong coupling between orthogonal martingale measures. By this we mean that, given a martingale measure, we construct in the same probability space a second one with a specified covariance measure process. This is done by pushing forward the first martingale measure through a predictable version of the optimal transport map between the covariance measures. This coupling allows us to obtain quantitative estimates in terms of the Wasserstein distance between those covariance measures.",
"Sliced Wasserstein metrics between probability measures solve the optimal transport (OT) problem on univariate projections, and average such maps across projections. The recent interest in the SW distance shows that much can be gained by looking at optimal maps between measures in smaller subspaces, as opposed to the curse-of-dimensionality price one has to pay in higher dimensions. Any transport estimated in a subspace remains, however, an object that can only be used in that subspace. We propose in this work two methods to extrapolate, from a transport map that is optimal on a subspace, one that is nearly optimal in the entire space. We prove that the best optimal transport plan that takes such \"subspace detours\" is a generalization of the Knothe-Rosenblatt transport. We show that these plans can be explicitly formulated when comparing Gaussian measures (between which the Wasserstein distance is usually referred to as the Bures or Fréchet distance). Building from there, we provide an algorithm to select optimal subspaces given pairs of Gaussian measures, and study scenarios in which that mediating subspace can be selected using prior information. We consider applications to NLP and evaluation of image quality (FID scores).",
"This paper defines a new transport metric over the space of nonnegative measures. This metric interpolates between the quadratic Wasserstein and the Fisher–Rao metrics and generalizes optimal transport to measures with different masses. It is defined as a generalization of the dynamical formulation of optimal transport of Benamou and Brenier, by introducing a source term in the continuity equation. The influence of this source term is measured using the Fisher–Rao metric and is averaged with the transportation term. This gives rise to a convex variational problem defining the new metric. Our first contribution is a proof of the existence of geodesics (i.e., solutions to this variational problem). We then show that (generalized) optimal transport and Hellinger metrics are obtained as limiting cases of our metric. Our last theoretical contribution is a proof that geodesics between mixtures of sufficiently close Dirac measures are made of translating mixtures of Dirac masses. Lastly, we propose a numerical scheme making use of first-order proximal splitting methods and we show an application of this new distance to image interpolation."
]
} |
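The displayed equations appear to have been stripped from the related-work text above; for reference, the standard primal and dual formulations it describes read as follows (a reconstruction, with P and Q standing in for the two redacted distributions):

```latex
% Primal (Kantorovich) formulation of the Wasserstein-2 distance:
W_2^2(P, Q) = \inf_{\pi \in \Pi(P, Q)} \int \|x - y\|^2 \, \mathrm{d}\pi(x, y)

% Kantorovich duality (Theorem 1.3 of villani2003topics):
W_2^2(P, Q) = \sup_{(f, g) \in \Phi_c} \left( \int f \, \mathrm{d}P + \int g \, \mathrm{d}Q \right)

% with the constrained function space
\Phi_c = \{ (f, g) : f(x) + g(y) \le \|x - y\|^2 \ \text{for all } x, y \}
```

Here Π(P, Q) is the set of couplings whose marginals are P and Q, matching the text's description of the primal infimum.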
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | As there is no easy way to ensure the feasibility of the constraints along the gradient updates, a common approach is to translate the optimization into a tractable form, while sacrificing the original goal of finding the optimal transport @cite_25. Concretely, an entropic or a quadratic regularizer is added to . This makes the dual an unconstrained problem, which can be numerically solved using the Sinkhorn algorithm @cite_25 or stochastic gradient methods @cite_15 @cite_8. The optimal transport can then be obtained from @math and @math, using the first-order optimality conditions of the Fenchel-Rockafellar duality theorem. | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_8"
],
"mid": [
"2765207424",
"2962970351",
"2917201408",
"134812657"
],
"abstract": [
"Entropic regularization is quickly emerging as a new standard in optimal transport (OT). It enables to cast the OT computation as a differentiable and unconstrained convex optimization problem, which can be efficiently solved using the Sinkhorn algorithm. However, entropy keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic in applications where the transportation plan itself is of interest. In this paper, we explore regularizing the primal and dual OT formulations with a strongly convex term, which corresponds to relaxing the dual and primal constraints with smooth approximations. We show how to incorporate squared @math -norm and group lasso regularizations within that framework, leading to sparse and group-sparse transportation plans. On the theoretical side, we bound the approximation error introduced by regularizing the primal and dual formulations. Our results suggest that, for the regularized primal, the approximation error can often be smaller with squared @math -norm than with entropic regularization. We showcase our proposed framework on the task of color transfer.",
"Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale problems routinely encountered in machine learning applications. These methods are able to manipulate arbitrary distributions (either discrete or continuous) by simply requiring to be able to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) entropic regularization of the primal OT problem results in a smooth dual optimization problem which can be addressed with algorithms that have provably faster convergence. We instantiate these ideas in three different computational setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat the current state-of-the-art finite-dimensional OT solver (Sinkhorn's algorithm); (ii) when comparing a discrete distribution to a continuous density, a semi-discrete reformulation of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, and is more efficient than discretizing beforehand the two densities. We back up these claims on a set of discrete, semi-discrete and continuous benchmark problems.",
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all local curvature information contained in the examples, which leads to striking improvements in both theory and practice -- sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method."
]
} |
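The entropic-regularization route described above can be made concrete with a tiny Sinkhorn iteration on two discrete distributions (a minimal sketch; the point values, weights, and regularization strength are arbitrary illustrative choices):

```python
import math

# Two small discrete distributions on the line, with squared-distance cost.
xs, a = [0.0, 1.0, 2.0], [1/3, 1/3, 1/3]   # source points and weights
ys, b = [0.5, 1.5], [1/2, 1/2]             # target points and weights
eps = 0.1                                  # entropic regularization strength

# Gibbs kernel K_ij = exp(-c(x_i, y_j) / eps).
K = [[math.exp(-((x - y) ** 2) / eps) for y in ys] for x in xs]

# Sinkhorn iterations: alternately rescale rows and columns so the
# plan diag(u) K diag(v) matches the prescribed marginals a and b.
u, v = [1.0] * len(xs), [1.0] * len(ys)
for _ in range(500):
    u = [a[i] / sum(K[i][j] * v[j] for j in range(len(ys))) for i in range(len(xs))]
    v = [b[j] / sum(K[i][j] * u[i] for i in range(len(xs))) for j in range(len(ys))]

plan = [[u[i] * K[i][j] * v[j] for j in range(len(ys))] for i in range(len(xs))]
row_sums = [sum(row) for row in plan]
col_sums = [sum(plan[i][j] for i in range(len(xs))) for j in range(len(ys))]
print("row marginals:", row_sums)
print("col marginals:", col_sums)
```

As the text notes, this solves a regularized surrogate: the resulting plan is strictly positive everywhere, unlike the (possibly sparse) unregularized optimal coupling, and approximate dual potentials can be read off as eps*log(u) and eps*log(v) up to constants.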
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | In this paper, we take a different approach and aim to solve the dual problem without introducing a regularization. This idea is also considered classically in @cite_12 and more recently in @cite_21 and @cite_11. The classical approach relies on the exact knowledge of the density, which is not available in practice. The approach in @cite_21 relies on the discrete Brenier theory, which is computationally expensive and not scalable. The most related work to ours is @cite_11, with which we provide a formal comparison in . | {
"cite_N": [
"@cite_21",
"@cite_12",
"@cite_11"
],
"mid": [
"2073656615",
"2094712512",
"2052956617",
"2069278600"
],
"abstract": [
"In the present paper we analyze a class of tensor-structured preconditioners for the multidimensional second-order elliptic operators in ℝ^d, d ≥ 2. For equations in a bounded domain, the construction is based on the rank-R tensor-product approximation of the elliptic resolvent ℬ_R ≈ (ℒ − λI)^(−1), where ℒ is the sum of univariate elliptic operators. We prove the explicit estimate on the tensor rank R that ensures the spectral equivalence. For equations in an unbounded domain, one can utilize the tensor-structured approximation of Green's kernel for the shifted Laplacian in ℝ^d, which is well developed in the case of nonoscillatory potentials. For the oscillating kernels e^(−iκ‖x‖)/‖x‖, x ∈ ℝ^d, κ ∈ ℝ₊, we give constructive proof of the rank-O(κ) separable approximation. This leads to the tensor representation for the discretized 3D Helmholtz kernel on an n×n×n grid that requires only O(κ |log ε|² n) reals for storage. Such representations can be applied to both the 3D volume and boundary calculations with sublinear cost O(n²), even in the case κ = O(n).",
"Motivated by the study on the uniqueness problem of the coupled model, in this paper we revisit the 2d incompressible Navier–Stokes equations in bounded domains. In fact, we establish some new smoothing estimates for the Leray solution based on the spectral analysis of the Stokes operator. To understand these estimates well, on one hand, we establish some new Brezis–Wainger type inequalities in general domains and in any dimension and disclose the connection between them. On the other hand, we show that these new estimates can be applied to prove the existence and uniqueness of the weak solutions for two coupled models: the Boussinesq system with partial viscosity (no dissipation for the temperature) and a Fluid–Particle system, in two dimensions and in bounded domains.",
"Given topological spaces X1, ..., Xn with product space X, probability measures μi on Xi together with a real function h on X define a marginal problem as well as a dual problem. Using an extended version of Choquet's theorem on capacities, an analogue of the classical duality theorem of linear programming is established, imposing only weak conditions on the topology of the spaces Xi and the measurability resp. boundedness of the function h. Applications concern, among others, measures with given support, stochastic order and general marginal problems.",
"We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n sqrt log n ), improving on the previous best of O(n^ 1.5 ) due to Lag arias, Lenstra, and Schnorr hkzbabai . We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt n log n ) due to Aharonov and Regev AharonovR04 . To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio LiuLM06 gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1+eps)gamma approximate CVP to gamma approximate CVP where the target is at distance at most 1+1 eps times the minimum distance (the length of the shortest non-zero vector) which relies on the lattice sparsification techniques of Dadush and Kun DadushK13 . As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice MR04 . We show that this is never smaller than the LLM decoding radius, and that it can be up to an wide tilde Omega (sqrt n ) factor larger."
]
} |
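The record above describes representing the Kantorovich potential as an input-convex neural network whose gradient is the transport map. A minimal NumPy sketch of that parameterization follows; this is an illustrative toy, not the paper's implementation, and the layer sizes, random initialization, and finite-difference gradient are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_icnn(dim, hidden=16, depth=2):
    """Random parameters of a small input-convex network (ICNN).
    The weights acting on the hidden state are kept non-negative,
    which is what guarantees convexity of f in its input x."""
    params, in_z = [], 0
    for _ in range(depth):
        Wz = np.abs(rng.normal(size=(hidden, in_z))) if in_z else None
        Wx = rng.normal(size=(hidden, dim))
        b = rng.normal(size=hidden)
        params.append((Wz, Wx, b))
        in_z = hidden
    w_out = np.abs(rng.normal(size=hidden))  # non-negative read-out
    return params, w_out

def icnn(x, params, w_out):
    """f(x): non-negative combinations of convex pieces, passed through
    the convex non-decreasing ReLU, stay convex, so f is convex in x."""
    z = None
    for Wz, Wx, b in params:
        a = Wx @ x + b + (Wz @ z if z is not None else 0.0)
        z = np.maximum(a, 0.0)
    return float(w_out @ z)

def transport_map(x, params, w_out, eps=1e-5):
    """Candidate map T(x) = grad f(x), here via central differences."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (icnn(x + e, params, w_out) - icnn(x - e, params, w_out)) / (2 * eps)
    return g
```

Training such a network so that the gradient pushes one sample distribution onto the other is the hard part, and is what the minimax formulation addresses; the snippet only illustrates why this parameterization yields a convex potential and hence a monotone (possibly discontinuous) candidate map.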
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | The idea of solving the semi-dual optimization problem is classically considered in @cite_12 , where the authors derive a formula for the functional derivative of the objective function with respect to @math and propose to solve the optimization problem with the gradient descent method. Their approach is based on the discretization of the space and knowledge of the explicit form of the probability density functions, that is not applicable to real-world high dimensional problems. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2033121805",
"2198227614",
"2030161963",
"1189174436"
],
"abstract": [
"We propose and analyze two dual methods based on inexact gradient information and averaging that generate approximate primal solutions for smooth convex problems. The complicating constraints are moved into the cost using the Lagrange multipliers. The dual problem is solved by inexact first-order methods based on approximate gradients for which we prove sublinear rate of convergence. In particular, we provide a complete rate analysis and estimates on the primal feasibility violation and primal and dual suboptimality of the generated approximate primal and dual solutions. Moreover, we solve approximately the inner problems with a linearly convergent parallel coordinate descent algorithm. Our analysis relies on the Lipschitz property of the dual function and inexact dual gradients. Further, we combine these methods with dual decomposition and constraint tightening and apply this framework to linear model predictive control obtaining a suboptimal and feasible control scheme.",
"We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function plus a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated to the convex part of the objective function, and an Armijo-like rule to determine the stepsize along this direction ensuring the sufficient decrease of the objective function. In this frame, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the iterates sequence are stationary, while for convex objective functions we prove the convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function...",
"In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate (O ( 1 k ) )), and an accelerated multistep version with convergence rate (O ( 1 k^2 ) ), where (k ) is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We present also the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.",
"We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With @math random measurements of a positive semidefinite @math matrix of rank @math and condition number @math , our method is guaranteed to converge linearly to the global optimum."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | More recently, the authors in @cite_29 @cite_21 propose to learn the function @math in a semi-discrete setting, where one of the marginals is assumed to be a discrete distribution supported on a set of @math points @math, and the other marginal is assumed to have a continuous density with compact convex support @math. They show that the problem of learning the function @math is similar to the variational formulation of the Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. Moreover, they show that, in the semi-discrete setting, the optimal @math is of the form @math and simplify the problem of learning @math to the problem of learning @math real numbers @math. However, the objective function involves computing a polygonal partition of @math into @math convex cells, induced by the function @math, which is computationally challenging. Moreover, the learned optimal transport map @math transports the probability distribution from each convex cell to a single point @math, which results in generalization issues.
Additionally, the proposed approach is semi-discrete, and as a result, does not scale with the number of samples. | {
"cite_N": [
"@cite_29",
"@cite_21"
],
"mid": [
"2517669009",
"2738326163",
"2025702035",
"2081956522"
],
"abstract": [
"In stochastic convex optimization the goal is to minimize a convex function @math over a convex set @math where @math is some unknown distribution and each @math in the support of @math is convex over @math . The optimization is commonly based on i.i.d. samples @math from @math . A standard approach to such problems is empirical risk minimization (ERM) that optimizes @math . Here we consider the question of how many samples are necessary for ERM to succeed and the closely related question of uniform convergence of @math to @math over @math . We demonstrate that in the standard @math setting of Lipschitz-bounded functions over a @math of bounded radius, ERM requires sample size that scales linearly with the dimension @math . This nearly matches standard upper bounds and improves on @math dependence proved for @math setting by Shalev- (2009). In stark contrast, these problems can be solved using dimension-independent number of samples for @math setting and @math dependence for @math setting using other approaches. We further show that our lower bound applies even if the functions in the support of @math are smooth and efficiently computable and even if an @math regularization term is added. Finally, we demonstrate that for a more general class of bounded-range (but not Lipschitz-bounded) stochastic convex programs an infinite gap appears already in dimension 2.",
"We investigate a family of regression problems in a semi-supervised setting. The task is to assign real-valued labels to a set of @math sample points, provided a small training subset of @math labeled points. A goal of semi-supervised learning is to take advantage of the (geometric) structure provided by the large number of unlabeled data when assigning labels. We consider random geometric graphs, with connection radius @math , to represent the geometry of the data set. Functionals which model the task reward the regularity of the estimator function and impose or reward the agreement with the training data. Here we consider the discrete @math -Laplacian regularization. We investigate asymptotic behavior when the number of unlabeled points increases, while the number of training points remains fixed. We uncover a delicate interplay between the regularizing nature of the functionals considered and the nonlocality inherent to the graph constructions. We rigorously obtain almost optimal ranges on the scaling of @math for the asymptotic consistency to hold. We prove that the minimizers of the discrete functionals in random setting converge uniformly to the desired continuum limit. Furthermore we discover that for the standard model used there is a restrictive upper bound on how quickly @math must converge to zero as @math . We introduce a new model which is as simple as the original model, but overcomes this restriction.",
"We introduce a notion of quasi regularity for points with respect to the inclusion @math , where @math is a nonlinear Frechet differentiable function from @math to @math . When @math is the set of minimum points of a convex real-valued function @math on @math and @math satisfies the @math -average Lipschitz condition of Wang, we use the majorizing function technique to establish the semilocal linear quadratic convergence of sequences generated by the Gauss-Newton method (with quasi-regular initial points) for the convex composite function @math . Results are new even when the initial point is regular and @math is Lipschitz.",
"An algorithm for solving the problem: minimize @math (a convex function) subject to @math , @math , each @math a concave function, is presented. Specifically, the function [ P [ x,t,r_k ] f( x ) + r_k^ - 1 [ g_i ( x ) - t_i ] ^2 ] is minimized over all x, nonnegative t, for a strictly decreasing null sequence @math . This extends the work of T. Pietrzykowski [5]. It is proved that for every @math , there exists a finite point @math which minimizes P, and which solves the convex programming problem as @math . This algorithm is similar to the Sequential Unconstrained Minimization Technique (SUMT) [1] in that it solves the (Wolfe) dual programming problem [6]. It differs from SUMT in that (1) it approaches the optimum from the region of infeasibility (i.e., it is a relaxation technique), (2) it does not require a nonempty interior to the nonlinearly constrained region, (3) no separate feasibilit..."
]
} |
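The semi-discrete construction described above — a max-affine potential whose induced convex cells must each carry a prescribed target mass — can be sketched with a Monte Carlo subgradient iteration. This is an illustrative toy, not the cited authors' algorithm; the target points, weights, sample size, and step schedule are all my assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete target: four points with prescribed (non-uniform) weights.
Y = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
nu = np.array([0.4, 0.3, 0.2, 0.1])
m = len(Y)

# Continuous source, approximated by samples (uniform on the square).
X = rng.uniform(-1.0, 1.0, size=(4000, 2))

def cells(h):
    """Index of the cell of each sample: argmax_i <x, y_i> - h_i."""
    return np.argmax(X @ Y.T - h, axis=1)

# Subgradient iteration on the (convex) semi-discrete dual:
# raising h_i shrinks cell i, so each cell mass is pushed toward nu_i.
h = np.zeros(m)
best_dev, h_best = np.inf, h.copy()
for t in range(2000):
    mass = np.bincount(cells(h), minlength=m) / len(X)
    dev = np.max(np.abs(mass - nu))
    if dev < best_dev:
        best_dev, h_best = dev, h.copy()
    h += 0.5 / np.sqrt(t + 1.0) * (mass - nu)

# The learned map sends every sample in cell i to the single point Y[i].
T = Y[cells(h_best)]
```

The last line makes the generalization issue noted above concrete: the map collapses each convex cell to a single target point, so it cannot interpolate between targets for unseen samples.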
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | Statistical analysis of learning the optimal transport map through the semi-dual optimization problem is studied in @cite_17 @cite_23, where the authors establish a minimax convergence rate with respect to the number of samples for certain classes of regular probability distributions. They also propose a procedure that achieves the optimal convergence rate, which involves representing the function @math in the span of wavelet basis functions up to a certain order while also requiring the function @math to be convex. However, they do not provide a computational algorithm to implement the procedure.
"cite_N": [
"@cite_23",
"@cite_17"
],
"mid": [
"2917201408",
"2946680009",
"2555632090",
"2962970351"
],
"abstract": [
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"Brenier's theorem is a cornerstone of optimal transport that guarantees the existence of an optimal transport map @math between two probability distributions @math and @math over @math under certain regularity conditions. The main goal of this work is to establish the minimax rates estimation rates for such a transport map from data sampled from @math and @math under additional smoothness assumptions on @math . To achieve this goal, we develop an estimator based on the minimization of an empirical version of the semi-dual optimal transport problem, restricted to truncated wavelet expansions. This estimator is shown to achieve near minimax optimality using new stability arguments for the semi-dual and a complementary minimax lower bound. These are the first minimax estimation rates for transport maps in general dimension.",
"We are interested in the computation of the transport map of an Optimal Transport problem. Most of the computational approaches of Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling @math but do not address the problem of learning the underlying transport map @math linked to the original Monge problem. Consequently, it lowers the potential usage of such methods in contexts where out-of-samples computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows to smooth the result of the Optimal Transport and generalize it to out-of-samples examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing.",
"Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale problems routinely encountered in machine learning applications. These methods are able to manipulate arbitrary distributions (either discrete or continuous) by simply requiring to be able to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) entropic regularization of the primal OT problem results in a smooth dual optimization optimization which can be addressed with algorithms that have a provably faster convergence. We instantiate these ideas in three different computational setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat the current state of the art finite dimensional OT solver (Sinkhorn's algorithm) ; (ii) when comparing a discrete distribution to a continuous density, a re-formulation (semi-discrete) of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization ; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, and is more efficient than discretizing beforehand the two densities. We backup these claims on a set of discrete, semi-discrete and continuous benchmark problems."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | The approach proposed in this paper builds upon the recent work @cite_11, where the idea of solving the semi-dual optimization problem by representing the function @math with an ICNN first appeared. The procedure proposed in @cite_11 involves solving a convex optimization problem to compute the convex conjugate @math for each sample in the batch, at each optimization iteration, which is difficult to scale to large datasets. In this paper, however, we propose a minimax formulation that learns the convex conjugate function in a scalable fashion.
"cite_N": [
"@cite_11"
],
"mid": [
"2953167455",
"2157959686",
"2055400003",
"2075547019"
],
"abstract": [
"A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piece-wise linear nature of the dual problem. The second part of the paper applies the previous result to acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery and machine learning and classification.",
"In this paper we develop a new affine-invariant primal–dual subgradient method for nonsmooth convex optimization problems. This scheme is based on a self-concordant barrier for the basic feasible set. It is suitable for finding approximate solutions with certain relative accuracy. We discuss some applications of this technique including fractional covering problem, maximal concurrent flow problem, semidefinite relaxations and nonlinear online optimization. For all these problems, the rate of convergence of our method does not depend on the problem’s data.",
"In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y. E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543-547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127-152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach has @math iteration complexity for finding an @math -optimal solution to both primal and dual problems. We then discuss the application of this approach to sparse covariance selection that is approximately solved as an @math -norm penalized maximum likelihood estimation problem, and also propose a variant of this approach which has substantially outperformed the latter one in our computational experiments. We finally compare the performance of these approaches with other first-order methods, namely, Nesterov's @math smooth approximation scheme and block-coordinate descent method studied in [A. d'Aspremont, O. Banerjee, and L. El Ghaoui, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 56-66; J. Friedman, T. Hastie, and R. Tibshirani, Biostatistics, 9 (2008), pp. 432-441] for sparse covariance selection on a set of randomly generated instances. It shows that our smooth optimization approach substantially outperforms the first method above, and moreover, its variant substantially outperforms both methods above.",
"As surrogate functions of 0-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery. However, different from convex optimization, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on [0, ∞). Thus their gradients are decreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector as the gradient of the concave penalty function, the WSVT problem has a closed form solution. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms."
]
} |
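The per-sample convex conjugate mentioned above, @math in the record's notation, is the supremum of a concave inner problem, f*(y) = sup_x <x, y> - f(x). A small sketch on a quadratic potential, where the conjugate has a closed form, shows what one such inner solve looks like; the matrix, step size, and iteration count are arbitrary choices of mine, not anything from the cited work:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # symmetric positive definite

def f(x):
    return 0.5 * x @ A @ x      # toy convex potential with known conjugate

def conjugate(y, steps=200, lr=0.1):
    """f*(y) = sup_x <x, y> - f(x).  The inner problem is concave in x,
    so plain gradient ascent converges; this is one per-sample solve."""
    x = np.zeros_like(y, dtype=float)
    for _ in range(steps):
        x += lr * (y - A @ x)   # gradient of <x, y> - f(x) w.r.t. x
    return x @ y - f(x)
```

For this f the conjugate is known exactly, f*(y) = ½ yᵀA⁻¹y, which makes the inner solve checkable. The cost highlighted by the record — repeating such a solve for every sample at every outer iteration — is what motivates learning the conjugate with a second network in a minimax formulation.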
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as gradient of a convex function naturally models a discontinuous transport mapping. | There are also alternative approaches to approximating the optimal transport map that are not based on solving the semi-dual optimization problem. In @cite_24, the authors propose to approximate the optimal transport map through an adversarial computational procedure, by considering the dual optimization problem and replacing the constraint with a quadratic penalty term. However, in contrast to other regularization-based approaches such as @cite_8, they consider a GAN architecture and propose to take the generator, after training is finished, as the optimal transport map. They also provide a theoretical justification for their proposal; however, the justification holds only in an ideal setting where the generator has infinite capacity, the discriminator is optimal at each update step, and the cost is equal to the exact Wasserstein distance. These ideal conditions are far from true in a practical setting.
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2917201408",
"2950516984",
"2766665711",
"2555632090"
],
"abstract": [
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"Computing optimal transport maps between high-dimensional and continuous distributions is a challenging problem in optimal transport (OT). Generative adversarial networks (GANs) are powerful generative models which have been successfully applied to learn maps across high-dimensional domains. However, little is known about the nature of the map learned with a GAN objective. To address this problem, we propose a generative adversarial model in which the discriminator's objective is the @math -Wasserstein metric. We show that during training, our generator follows the @math -geodesic between the initial and the target distributions. As a consequence, it reproduces an optimal map at the end of training. We validate our approach empirically in both low-dimensional and high-dimensional continuous settings, and show that it outperforms prior methods on image data.",
"In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solve Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This leads to a geometric interpretation to generative models, and leads to a novel framework for generative models. By using the optimal transportation view of GAN model, we show that the discriminator computes the Kantorovich potential, the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential can give the optimal transportation map by a close-form formula. Therefore, it is sufficient to solely optimize the discriminator. This shows the adversarial competition can be avoided, and the computational architecture can be simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low dimensional space.",
"We are interested in the computation of the transport map of an Optimal Transport problem. Most of the computational approaches of Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling @math but do not address the problem of learning the underlying transport map @math linked to the original Monge problem. Consequently, it lowers the potential usage of such methods in contexts where out-of-samples computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows to smooth the result of the Optimal Transport and generalize it to out-of-samples examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | Another approach, proposed in @cite_22 , is based on a generative learning framework to approximate the optimal coupling, instead of the optimal transport map. The approach involves a low-dimensional latent random variable, two generators that take the latent variable as input and map it to a high-dimensional space where the real data resides, and two discriminators that respectively take as inputs the real data and the output of the generator. Although the proposed approach is attractive when an optimal transport map does not exist, it is computationally expensive because it involves learning four deep neural networks, and it suffers from the unused capacity issues of the WGAN architecture @cite_33 . | {
"cite_N": [
"@cite_22",
"@cite_33"
],
"mid": [
"2950516984",
"2917201408",
"2766665711",
"2555632090"
],
"abstract": [
"Computing optimal transport maps between high-dimensional and continuous distributions is a challenging problem in optimal transport (OT). Generative adversarial networks (GANs) are powerful generative models which have been successfully applied to learn maps across high-dimensional domains. However, little is known about the nature of the map learned with a GAN objective. To address this problem, we propose a generative adversarial model in which the discriminator's objective is the @math -Wasserstein metric. We show that during training, our generator follows the @math -geodesic between the initial and the target distributions. As a consequence, it reproduces an optimal map at the end of training. We validate our approach empirically in both low-dimensional and high-dimensional continuous settings, and show that it outperforms prior methods on image data.",
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solve the Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This leads to a geometric interpretation of generative models, and leads to a novel framework for generative models. By using the optimal transportation view of the GAN model, we show that the discriminator computes the Kantorovich potential, the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential can give the optimal transportation map by a closed-form formula. Therefore, it is sufficient to solely optimize the discriminator. This shows the adversarial competition can be avoided, and the computational architecture can be simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low dimensional space.",
"We are interested in the computation of the transport map of an Optimal Transport problem. Most of the computational approaches of Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling @math but do not address the problem of learning the underlying transport map @math linked to the original Monge problem. Consequently, it lowers the potential usage of such methods in contexts where out-of-samples computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows to smooth the result of the Optimal Transport and generalize it to out-of-samples examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing."
]
} |
1908.10962 | 2971222260 | In this paper, we present a novel and principled approach to learn the optimal transport between two distributions, from samples. Guided by the optimal transport theory, we learn the optimal Kantorovich potential which induces the optimal transport map. This involves learning two convex functions, by solving a novel minimax optimization. Building upon recent advances in the field of input convex neural networks, we propose a new framework where the gradient of one convex function represents the optimal transport mapping. Numerical experiments confirm that we learn the optimal transport mapping. This approach ensures that the transport mapping we find is optimal independent of how we initialize the neural networks. Further, target distributions from a discontinuous support can be easily captured, as the gradient of a convex function naturally models a discontinuous transport mapping. | Finally, a procedure was recently proposed to approximate a transport map that is optimal only on a subspace projection instead of the entire space @cite_1 . This approach is inspired by the sliced Wasserstein distance method to approximate the Wasserstein distance @cite_16 @cite_27 . However, selection of the subspace to project on is a non-trivial task, and optimally selecting the projection is an optimization over the Grassmann manifold, which is computationally challenging. | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_1"
],
"mid": [
"2970661343",
"2917201408",
"2035707149",
"2528005577"
],
"abstract": [
"Sliced Wasserstein metrics between probability measures solve the optimal transport (OT) problem on univariate projections, and average such maps across projections. The recent interest in the SW distance shows that much can be gained by looking at optimal maps between measures in smaller subspaces, as opposed to the curse-of-dimensionality price one has to pay in higher dimensions. Any transport estimated in a subspace remains, however, an object that can only be used in that subspace. We propose in this work two methods to extrapolate, from a transport map that is optimal on a subspace, one that is nearly optimal in the entire space. We prove that the best optimal transport plan that takes such \"subspace detours\" is a generalization of the Knothe-Rosenblatt transport. We show that these plans can be explicitly formulated when comparing Gaussian measures (between which the Wasserstein distance is usually referred to as the Bures or Fréchet distance). Building from there, we provide an algorithm to select optimal subspaces given pairs of Gaussian measures, and study scenarios in which that mediating subspace can be selected using prior information. We consider applications to NLP and evaluation of image quality (FID scores).",
"We provide a framework to approximate the 2-Wasserstein distance and the optimal transport map, amenable to efficient training as well as statistical and geometric analysis. With the quadratic cost and considering the Kantorovich dual form of the optimal transportation problem, the Brenier theorem states that the optimal potential function is convex and the optimal transport map is the gradient of the optimal potential function. Using this geometric structure, we restrict the optimization problem to different parametrized classes of convex functions and pay special attention to the class of input-convex neural networks. We analyze the statistical generalization and the discriminative power of the resulting approximate metric, and we prove a restricted moment-matching property for the approximate optimal map. Finally, we discuss a numerical algorithm to solve the restricted optimization problem and provide numerical experiments to illustrate and compare the proposed approach with the established regularization-based approaches. We further discuss practical implications of our proposal in a modular and interpretable design for GANs which connects the generator training with discriminator computations to allow for learning an overall composite generator.",
"We consider the optimal mass transportation problem in @math with measurably parameterized marginals under conditions ensuring the existence of a unique optimal transport map. We prove a joint measurability result for this map, with respect to the space variable and to the parameter. The proof needs to establish the measurability of some set-valued mappings, related to the support of the optimal transference plans, which we use to perform a suitable discrete approximation procedure. A motivation is the construction of a strong coupling between orthogonal martingale measures. By this we mean that, given a martingale measure, we construct in the same probability space a second one with a specified covariance measure process. This is done by pushing forward the first martingale measure through a predictable version of the optimal transport map between the covariance measures. This coupling allows us to obtain quantitative estimates in terms of the Wasserstein distance between those covariance measures.",
"This paper defines a new transport metric over the space of nonnegative measures. This metric interpolates between the quadratic Wasserstein and the Fisher–Rao metrics and generalizes optimal transport to measures with different masses. It is defined as a generalization of the dynamical formulation of optimal transport of Benamou and Brenier, by introducing a source term in the continuity equation. The influence of this source term is measured using the Fisher–Rao metric and is averaged with the transportation term. This gives rise to a convex variational problem defining the new metric. Our first contribution is a proof of the existence of geodesics (i.e., solutions to this variational problem). We then show that (generalized) optimal transport and Hellinger metrics are obtained as limiting cases of our metric. Our last theoretical contribution is a proof that geodesics between mixtures of sufficiently close Dirac measures are made of translating mixtures of Dirac masses. Lastly, we propose a numerical scheme making use of first-order proximal splitting methods and we show an application of this new distance to image interpolation."
]
} |
1908.10422 | 2965761667 | Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aim to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only – without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency – which revealed that our proposed dialogue rewards strongly correlate with human judgements. | This article contributes to the literature of neural-based chatbots as follows. First, our methodology for training value-based DRL agents uses only unlabelled dialogue data. Previous work requires manual extensions to the dialogue data @cite_13 or expensive and time consuming ratings for training a reward function @cite_1 . Second, our proposed reward function strongly correlates with human judgements.
Previous work has only shown moderate positive correlations between target dialogue rewards and predicted ones @cite_1 , or relies on high-level annotations requiring external and language-dependent resources typically induced from labelled data @cite_0 . Third, while previous work on DRL chatbots trains a single agent @cite_1 @cite_13 , our study---confirmed by automatic and human evaluations---shows that an ensemble-based approach performs better than a counterpart single agent. The remainder of this article elaborates on these contributions. | {
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_13"
],
"mid": [
"2410983263",
"2796224816",
"2627074894",
"2513380446"
],
"abstract": [
"Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.",
"In 2015, Google's Deepmind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature, and has significant drawbacks. One of DRL's imperfections is its lack of \"exploration\" during the training process, especially when working with high-dimensional problems. In this paper, we propose a mixed strategy approach that mimics behaviors of human when interacting with environment, and create a \"thinking\" agent that allows for more efficient exploration in the DRL training process. The simulation results based on the Breakout game show that our scheme achieves a higher probability of obtaining a maximum score than does the baseline DRL algorithm, i.e., the asynchronous advantage actor-critic method. The proposed scheme therefore can be applied effectively to solving a complicated task in a real-world application.",
"We propose an online, end-to-end, neural generative conversational model for open-domain dialogue. It is trained using a unique combination of offline two-phase supervised learning and online human-in-the-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on hamming-diverse beam search for response generation and one-character user-feedback at each step. Experiments show that our model inherently promotes the generation of semantically relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.",
"This paper proposes KB-InfoBot -- a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. Such goal-oriented dialogue agents typically need to interact with an external database to access real-world knowledge. Previous systems achieved this by issuing a symbolic query to the KB to retrieve entries based on their attributes. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced \"soft\" posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users. We also present a fully neural end-to-end agent, trained entirely from user feedback, and discuss its application towards personalized dialogue agents. The source code is available at this https URL"
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progress has been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinder the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | Most existing face anti-spoofing datasets only contain the RGB modality, including the two widely used PAD datasets Replay-Attack @cite_31 and CASIA-FASD @cite_50 . Even the recently released SiW @cite_26 dataset, collected with high-resolution image quality, only contains RGB data. With the widespread application of face recognition in mobile phones, there are also some RGB datasets recorded by replaying face videos with smartphones, such as MSU-MFSD @cite_47 , Replay-Mobile @cite_21 and OULU-NPU @cite_46 . | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_46",
"@cite_21",
"@cite_50",
"@cite_47"
],
"mid": [
"2341318667",
"2418633638",
"2728977829",
"2551249768"
],
"abstract": [
"Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.",
"With the wide deployment of the face recognition systems in applications from deduplication to mobile device unlocking, security against the face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays, and 3D masks of a face. We address the problem of face spoof detection against the print (photo) and replay (photo or video) attacks based on the analysis of image distortion (e.g., surface reflection, moiré pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is smartphone unlock, given that the growing number of smartphones have the face unlock and mobile payment capabilities. We build an unconstrained smartphone spoof attack database (MSU USSA) containing more than 1000 subjects. Both the print and replay attacks are captured using the front and rear cameras of a Nexus 5 smartphone. We analyze the image distortion of the print and replay attacks using different: 1) intensity channels (R, G, B, and grayscale); 2) image regions (entire image, detected face, and facial component between nose and chin); and 3) feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on the public-domain Idiap Replay-Attack, CASIA FASD, and MSU-MFSD databases, and the MSU USSA database show that the proposed approach is effective in face spoof detection for both the cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios.",
"The vulnerabilities of face-based biometric systems to presentation attacks have been finally recognized but yet we lack generalized software-based face presentation attack detection (PAD) methods performing robustly in practical mobile authentication scenarios. This is mainly due to the fact that the existing public face PAD datasets are beginning to cover a variety of attack scenarios and acquisition conditions but their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In this present work, we introduce a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison on the generalization capabilities between new and existing approaches. The baseline results using color texture analysis based face PAD method demonstrate the challenging nature of the database.",
"The vulnerabilities of face biometric authentication systems to spoofing attacks have received a significant attention during the recent years. Some of the proposed countermeasures have achieved impressive results when evaluated on intratests, i.e., the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another database. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the replay-attack database, and MSU mobile face spoof database, showed excellent and stable performance across all the three datasets. Most importantly, in interdatabase tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progress has been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects ( @math ) and modalities ( @math ), which hinder the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | As attack techniques are constantly upgraded, some new types of presentation attacks have emerged, e.g., 3D @cite_0 and silicone masks @cite_38 . These attacks are more realistic than traditional 2D attacks. Therefore, the drawbacks of visible cameras are revealed when facing these realistic face masks. Fortunately, some new sensors have been introduced to provide more possibilities for face PAD methods, such as depth cameras, multi-spectral cameras and infrared light cameras. Kim et al. @cite_4 introduce a new dataset to distinguish between the facial skin and mask materials by exploiting their reflectance.
Kose et al. @cite_43 propose a 2D+3D face mask attack dataset to study the effects of mask attacks. However, the associated data has not been made public. 3DMAD @cite_0 is the first publicly available 3D mask dataset, which is recorded using a Microsoft Kinect sensor and consists of Depth and RGB modalities. Another multi-modal face PAD dataset is Msspoof @cite_6 , containing visible and near-infrared images of real accesses and printed spoofing attacks with @math objects. | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_6",
"@cite_0",
"@cite_43"
],
"mid": [
"2125320497",
"2887396754",
"2011016023",
"2728977829"
],
"abstract": [
"The problem of detecting face spoofing attacks (presentation attacks) has recently gained a well-deserved popularity. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. In this paper, we inspect the spoofing potential of subject-specific 3D facial masks for 2D face recognition. Additionally, we analyze Local Binary Patterns based countermeasures using both color and depth data, obtained by Kinect. For this purpose, we introduce the 3D Mask Attack Database (3DMAD), the first publicly available 3D spoofing database, recorded with a low-cost depth camera. Extensive experiments on 3DMAD show that easily attainable facial masks can pose a serious threat to 2D face recognition systems and LBP is a powerful weapon to eliminate it.",
"We investigate the vulnerability of convolutional neural network (CNN) based face-recognition (FR) systems to presentation attacks (PA) performed using custom-made silicone masks. Previous works have studied the vulnerability of CNN-FR systems to 2D PAs such as print-attacks, or digital-video replay attacks, and to rigid 3D masks. This is the first study to consider PAs performed using custom-made flexible silicone masks. Before embarking on research on detecting a new variety of PA, it is important to estimate the seriousness of the threat posed by the type of PA. In this work we demonstrate that PAs using custom silicone masks do pose a serious threat to state-of-the-art FR systems. Using a new dataset based on six custom silicone masks, we show that the vulnerability of each FR system in this study is at least 10 times higher than its false match rate. We also propose a simple but effective presentation attack detection method, based on a low-cost thermal camera.",
"There are several types of spoofing attacks to face recognition systems such as photograph, video or mask attacks. Recent studies show that face recognition systems are vulnerable to these attacks. In this paper, a countermeasure technique is proposed to protect face recognition systems against mask attacks. To the best of our knowledge, this is the first time a countermeasure is proposed to detect mask attacks. The reason for this delay is mainly due to the unavailability of public mask attacks databases. In this study, a 2D+3D face mask attacks database is used which is prepared for a research project in which the authors are all involved. The performance of the countermeasure is evaluated on both the texture images and the depth maps, separately. The results show that the proposed countermeasure gives satisfactory results using both the texture images and the depth maps. The performance of the countermeasure is observed to be slightly better when the technique is applied on texture images instead of depth maps, which proves that face texture provides more information than 3D face shape characteristics using the proposed approach.",
"The vulnerabilities of face-based biometric systems to presentation attacks have been finally recognized but yet we lack generalized software-based face presentation attack detection (PAD) methods performing robustly in practical mobile authentication scenarios. This is mainly due to the fact that the existing public face PAD datasets are beginning to cover a variety of attack scenarios and acquisition conditions but their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations. In this present work, we introduce a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices and presentation attack instruments (PAI). This publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison on the generalization capabilities between new and existing approaches. The baseline results using color texture analysis based face PAD method demonstrate the challenging nature of the database."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progresses have been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have limited number of subjects ( @math ) and modalities ( @math ), which hinder the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training validation testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | Face anti-spoofing has been studied for decades. Some previous works @cite_34 @cite_51 @cite_1 @cite_53 attempt to detect the evidence of liveness (e.g., eye-blinking). Other works are based on contextual @cite_7 @cite_9 and motion @cite_22 @cite_44 @cite_57 information. To improve the robustness to illumination variation, some algorithms adopt HSV and YCbCr color spaces @cite_11 @cite_39, as well as the Fourier spectrum @cite_56. All of these methods use handcrafted features, such as LBP @cite_15 @cite_13 @cite_23 @cite_24, HoG @cite_23 @cite_24 @cite_19, and GLCM @cite_19.
They achieve relatively satisfactory performance on small public face anti-spoofing datasets. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_7",
"@cite_53",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_44",
"@cite_57",
"@cite_56",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_51",
"@cite_11"
],
"mid": [
"2510926985",
"2145426126",
"2522438482",
"1998567523"
],
"abstract": [
"A multi-cues integration framework is proposed using a hierarchical neural network. Bottleneck representations are effective in multi-cues feature fusion. Shearlet is utilized to perform face image quality assessment. Motion-based face liveness features are automatically learned using autoencoders. Many trait-specific countermeasures to face spoofing attacks have been developed for security of face authentication. However, there is no superior face anti-spoofing technique to deal with every kind of spoofing attack in varying scenarios. In order to improve the generalization ability of face anti-spoofing approaches, an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed, which can fuse image quality cues and motion cues for liveness detection. Shearlet is utilized to develop an image quality-based liveness feature. Dense optical flow is utilized to extract motion-based liveness features. A bottleneck feature fusion strategy can integrate different liveness features effectively. The proposed approach was evaluated on three public face anti-spoofing databases. A half total error rate (HTER) of 0% and an equal error rate (EER) of 0% were achieved on both REPLAY-ATTACK database and 3D-MAD database. An EER of 5.83% was achieved on CASIA-FASD database.",
"This paper presents a face liveness detection system against spoofing with photographs, videos, and 3D models of a valid user in a face recognition system. Anti-spoofing clues inside and outside a face are both exploited in our system. The inside-face clues of spontaneous eyeblinks are employed for anti-spoofing of photographs and 3D models. The outside-face clues of scene context are used for anti-spoofing of video replays. The system does not need user collaborations, i.e. it runs in a non-intrusive manner. In our system, the eyeblink detection is formulated as an inference problem of an undirected conditional graphical framework which models contextual dependencies in blink image sequences. The scene context clue is found by comparing the difference of regions of interest between the reference scene image and the input one, which is based on the similarity computed by local binary pattern descriptors on a series of fiducial points extracted in scale space. Extensive experiments are carried out to show the effectiveness of our system.",
"With the wide applications of user authentication based on face recognition, face spoof attacks against face recognition systems are drawing increasing attentions. While emerging approaches of face antispoofing have been reported in recent years, most of them limit to the non-realistic intra-database testing scenarios instead of the cross-database testing scenarios. We propose a robust representation integrating deep texture features and face movement cue like eye-blink as countermeasures for presentation attacks like photos and replays. We learn deep texture features from both aligned facial images and whole frames, and use a frame difference based approach for eye-blink detection. A face video clip is classified as live if it is categorized as live using both cues. Cross-database testing on public-domain face databases shows that the proposed approach significantly outperforms the state-of-the-art.",
"Liveness detection is an indispensable guarantee for reliable face recognition, which has recently received enormous attention. In this paper we propose three scenic clues, which are non-rigid motion, face-background consistency and imaging banding effect, to conduct accurate and efficient face liveness detection. Non-rigid motion clue indicates the facial motions that a genuine face can exhibit such as blinking, and a low rank matrix decomposition based image alignment approach is designed to extract this non-rigid motion. Face-background consistency clue believes that the motion of face and background has high consistency for fake facial photos while low consistency for genuine faces, and this consistency can serve as an efficient liveness clue which is explored by GMM based motion detection method. Image banding effect reflects the imaging quality defects introduced in the fake face reproduction, which can be detected by wavelet decomposition. By fusing these three clues, we thoroughly explore sufficient clues for liveness detection. The proposed face liveness detection method achieves 100% accuracy on Idiap print-attack database and the best performance on self-collected face anti-spoofing database."
]
} |
1908.10654 | 2970246034 | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progresses have been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have limited number of subjects ( @math ) and modalities ( @math ), which hinder the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of @math subjects with @math videos and each sample has @math modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training validation testing subsets and a measurement tool, developing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at this https URL | CNN-based methods @cite_36 @cite_40 @cite_20 @cite_5 @cite_52 @cite_3 have been presented recently in the face PAD community. They treat face PAD as a binary classification problem and achieve remarkable improvements in intra-testing. Liu et al. @cite_26 design a network architecture to leverage two types of auxiliary information (depth map and rPPG signal) as supervision. Amin et al. @cite_3 introduce a new perspective on face anti-spoofing by inversely decomposing a spoof face into the live face and the spoof noise pattern.
However, they exhibit poor generalization ability in cross-testing due to over-fitting to the training data. This problem remains open, although some works @cite_40 @cite_20 adopt transfer learning to train a CNN model from ImageNet @cite_18. These works show the need for a larger PAD dataset. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_36",
"@cite_52",
"@cite_3",
"@cite_40",
"@cite_5",
"@cite_20"
],
"mid": [
"2963656031",
"2417750831",
"2792156255",
"2951191545"
],
"abstract": [
"Face anti-spoofing is crucial to prevent face recognition systems from a security breach. Previous deep learning approaches formulate face anti-spoofing as a binary classification problem. Many of them struggle to grasp adequate spoofing cues and generalize poorly. In this paper, we argue the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues. A CNN-RNN model is learned to estimate the face depth with pixel-wise supervision, and to estimate rPPG signals with sequence-wise supervision. The estimated depth and rPPG are fused to distinguish live vs. spoof faces. Further, we introduce a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations. Experiments show that our model achieves the state-of-the-art results on both intra- and cross-database testing.",
"This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth l_1-losses of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.",
"Abstract Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age, gender); whereas, unlabelled data are available for other attributes (such as attraction, hairline). To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods.",
"This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth l1-losses [14] of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | One way to visualize evidence of a class using deep learning is to perform backpropagation of the outputs of a trained classifier @cite_1. In @cite_5, for example, a model is trained to predict the presence of 14 diseases in chest x-rays, and class activation maps @cite_2 are used to show which regions of the x-rays have a larger influence on the classifier's decision.
However, as shown in @cite_7, these methods suffer from low resolution or from highlighting limited regions of the original images. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_7",
"@cite_2"
],
"mid": [
"2806321514",
"2084220915",
"1570613334",
"2774301822"
],
"abstract": [
"Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural networks (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large scale nonmedical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under curve (AUC) of 0.93 for Right Pleural Effusion detection, 0.89 for Enlarged heart detection and 0.79 for classification between healthy and abnormal chest x-ray, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large scale non-medical image databases may be sufficient for general medical image recognition tasks.",
"In this work, we examine the strength of deep learning approaches for pathology detection in chest radiographs. Convolutional neural networks (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of CNN learned from a non-medical dataset to identify different types of pathologies in chest x-rays. We tested our algorithm on a 433 image dataset. The best performance was achieved using CNN and GIST features. We obtained an area under curve (AUC) of 0.87–0.94 for the different pathologies. The results demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. This is a first-of-its-kind experiment that shows that Deep learning with ImageNet, a large scale non-medical image database may be a good substitute to domain specific representations, which are yet to be available, for general medical image recognition tasks.",
"Medical datasets are often highly imbalanced, over representing common medical problems, and sparsely representing rare problems. We propose simulation of pathology in images to overcome the above limitations. Using chest X-rays as a model medical image, we implement a generative adversarial network (GAN) to create artificial images based upon a modest sized labeled dataset. We employ a combination of real and artificial images to train a deep convolutional neural network (DCNN) to detect pathology across five classes of disease. We furthermore demonstrate that augmenting the original imbalanced dataset with GAN generated images improves performance of chest pathology classification using the proposed DCNN in comparison to the same DCNN trained with the original dataset alone. This improved performance is largely attributed to balancing of the dataset using GAN generated images, where image classes that are lacking in example images are preferentially augmented."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | In @cite_7, researchers visualize what brain MRIs of patients with mild cognitive impairment would look like if they developed Alzheimer's disease, generating disease effect maps. To solve problems with other visualization methods, they propose an adversarial setup. A generator is trained to modify an input image so that it fools a discriminator. The modifications the generator outputs are used as a visualization of evidence of one class. This setup inspires our method.
However, instead of classification labels, we use regression values and a novel loss function. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2963635991",
"2782815868",
"2795995837",
"2964077693"
],
"abstract": [
"Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects.",
"Computer-aided early diagnosis of Alzheimer's Disease (AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the subject of extensive research in recent years. Some recent studies have shown promising results in the AD and MCI determination using structural and functional Magnetic Resonance Imaging (sMRI, fMRI), Positron Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) modalities. Furthermore, fusion of imaging modalities in a supervised machine learning framework has shown promising direction of research. In this paper we first review major trends in automatic classification methods such as feature extraction based methods as well as deep learning approaches in medical image analysis applied to the field of Alzheimer's Disease diagnostics. Then we propose our own algorithm for Alzheimer's Disease diagnostics based on a convolutional neural network and sMRI and DTI modalities fusion on hippocampal ROI using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (this http URL). Comparison with a single modality approach shows promising results. We also propose our own method of data augmentation for balancing classes of different size and analyze the impact of the ROI size on the classification results as well.",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%.",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%."
]
} |
1908.10468 | 2971069003 | Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at this https URL. | There have been other works on generating visual attribution for regression. In @cite_4, the authors start by training a GAN on a large dataset of frontal x-rays, and then train an encoder that maps from an x-ray to its latent space vector. Finally, they train a small model for regression that receives the latent vector of the images from a smaller dataset and outputs a value which is used for diagnosing congestive heart failure.
To interpret their model, they backpropagate through the small regression model, taking steps in the latent space to reach the threshold of diagnosis, and generate the image associated with the new diagnosis. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2774301822",
"2963105487",
"2018417093",
"2000828905"
],
"abstract": [
"Medical datasets are often highly imbalanced, over representing common medical problems, and sparsely representing rare problems. We propose simulation of pathology in images to overcome the above limitations. Using chest Xrays as a model medical image, we implement a generative adversarial network (GAN) to create artificial images based upon a modest sized labeled dataset. We employ a combination of real and artificial images to train a deep convolutional neural network (DCNN) to detect pathology across five classes of disease. We furthermore demonstrate that augmenting the original imbalanced dataset with GAN generated images improves performance of chest pathology classification using the proposed DCNN in comparison to the same DCNN trained with the original dataset alone. This improved performance is largely attributed to balancing of the dataset using GAN generated images, where image classes that are lacking in example images are preferentially augmented.",
"Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an “inverse model,” a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion , to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website ( https: github.com ToniCreswell InvertingGAN ).",
"A novel framework for image filtering based on regression is presented. Regression is a supervised technique from pattern recognition theory in which a mapping from a number of input variables (features) to a continuous output variable is learned from a set of examples from which both input and output are known. We apply regression on a pixel level. A new, substantially different, image is estimated from an input image by computing a number of filtered input images (feature images) and mapping these to the desired output for every pixel in the image. The essential difference between conventional image filters and the proposed regression filter is that the latter filter is learned from training data. The total scheme consists of preprocessing, feature computation, feature extraction by a novel dimensionality reduction scheme designed specifically for regression, regression by k-nearest neighbor averaging, and (optionally) iterative application of the algorithm. The framework is applied to estimate the bone and soft-tissue components from standard frontal chest radiographs. As training material, radiographs with known soft-tissue and bone components, obtained by dual energy imaging, are used. The results show that good correlation with the true soft-tissue images can be obtained and that the scheme can be applied to images from a different source with good results. We show that bone structures are effectively enhanced and suppressed and that in most soft-tissue images local contrast of ribs decreases more than contrast between pulmonary nodules and their surrounding, making them relatively more pronounced.",
"Abstract We present a machine learning approach called shape regression machine (SRM) for efficient segmentation of an anatomic structure that exhibits a deformable shape in a medical image, e.g., left ventricle endocardial wall in an echocardiogram. The SRM achieves efficient segmentation via statistical learning of the interrelations among shape, appearance, and anatomy , which are exemplified by an annotated database. The SRM is a two-stage approach. In the first stage that estimates a rigid shape to solve an automatic initialization problem, it derives a regression solution to object detection that needs just one scan in principle and a sparse set of scans in practice, avoiding the exhaustive scanning required by the state-of-the-art classification-based detection approach while yielding comparable detection accuracy. In the second stage that estimates the nonrigid shape, it again learns a nonlinear regressor to directly associate nonrigid shape with image appearance. The underpinning of both stages is a novel image-based boosting ridge regression (IBRR) method that enables multivariate, nonlinear modeling and accommodates fast evaluation. We demonstrate the efficiency and effectiveness of the SRM using experiments on segmenting the left ventricle endocardium from a B-mode echocardiogram of apical four chamber view. The proposed algorithm is able to automatically detect and accurately segment the LV endocardial border in about 120 ms ."
]
} |
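The interpretation procedure of @cite_4 backpropagates through the small regression model and takes steps in the GAN's latent space until the diagnosis threshold is reached. A toy sketch of that traversal, using a linear regressor f(z) = w·z + b so the gradient is analytic (the function name, step rule, and linear form are illustrative assumptions; in the cited work the regressor is a learned network and each intermediate z would be decoded back to an image by the generator):

```python
import numpy as np

def latent_steps_to_threshold(z, w, b, target, lr=0.1, max_steps=200):
    """Walk latent code z uphill on a linear regressor f(z) = w.z + b.

    Stops once the prediction reaches `target` (e.g. a diagnosis
    threshold). Returns the final latent code and its predicted score;
    a GAN generator would decode each intermediate z into an image.
    """
    z = np.asarray(z, dtype=float).copy()
    for _ in range(max_steps):
        pred = float(w @ z + b)
        if pred >= target:
            break                 # threshold reached
        z = z + lr * w            # gradient of w.z + b w.r.t. z is w
    return z, float(w @ z + b)
```

The symmetric case (lowering the score below a threshold) follows by stepping against the gradient instead.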
1908.10398 | 2912215636 | Abstract The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games—and use the game of ‘Noughts and Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play. | There is a similarly limited amount of previous work on humanoid robots playing games against human opponents. 
Notable exceptions include @cite_18, where the DB humanoid robot learns to play air hockey using a Nearest Neighbour classifier; @cite_9, where the Nico humanoid torso robot plays the game of rock-paper-scissors using a 'Wizard of Oz' setting; @cite_39, where the Sky humanoid robot plays catch and juggling using inverse kinematics and induced parameters with least squares linear regression; @cite_28, where the Nao robot plays a quiz game, an arm imitation game, and a dance game using tabular reinforcement learning; @cite_33, where the Genie humanoid robot plays the poker game using a 'Wizard of Oz' setting; and @cite_7, where the NAO robot plays Checkers using a MinMax search tree. Most of these robots only exhibit non-verbal abilities and are either teleoperated or based on heuristic methods, which suggests that verbal abilities in autonomous trainable robots playing games are underdeveloped. Apart from @cite_4 @cite_24, we are not aware of any other previous work in humanoid robots playing social games against human opponents and trained with deep learning methods. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_39",
"@cite_24"
],
"mid": [
"2292128556",
"2141098361",
"1963832262",
"2091713555"
],
"abstract": [
"We show that setting a reasonable frame skip can be critical to the performance of agents learning to play Atari 2600 games. In all of the six games in our experiments, frame skip is a strong determinant of success. For two of these games, setting a large frame skip leads to state-of-the-art performance. The rate at which an agent interacts with its environment may be critical to its success. In the Arcade Learning Environment (ALE) ( 2013) games run at sixty frames per second, and agents can submit an action at every frame. Frame skip is the number of frames an action is repeated before a new action is selected. Existing reinforcement learning (RL) approaches use static frame skip: HNEAT ( 2013) uses a frame skip of 0; DQN ( 2013) uses a frame skip of 2-3; SARSA and planning approaches ( 2013) use a frame skip of 4. When action selection is computationally intensive, setting a higher frame skip can significantly decrease the time it takes to simulate an episode, at the cost of missing opportunities that only exist at a finer resolution. A large frame skip can also prevent degenerate super-human-reflex strategies, such as those described by for Bowling, Kung Fu Master, Video Pinball and Beam Rider. We show that in addition to these advantages agents that act with high frame skip can actually learn faster with respect to the number of training episodes than those that skip no frames. We present results for six of the seven games covered by : three (Beam Rider, Breakout and Pong) for which DQN was able to achieve near- or superhuman performance, and three (Q*Bert, Space Invaders and Seaquest) for which all RL approaches are far from human performance. These latter games were understood to be difficult because they require ‘strategy that extends over long time scales.’ In our experiments, setting a large frame skip was critical to achieving state-of-the-art performance in two of these games: Space Invaders and Q*Bert. 
More generally, the frame skip parameter was a strong determinant of performance in all six games. Our learning framework is a variant of Enforced Subpopulations (ESP) (Gomez and Miikkulainen 1997), a neuroevolution approach that has been successfully imple",
"Recent advances in the field of humanoid robotics increase the complexity of the tasks that such robots can perform. This makes it increasingly difficult and inconvenient to program these tasks manually. Furthermore, humanoid robots, in contrast to industrial robots, should in the distant future behave within a social environment. Therefore, it must be possible to extend the robot's abilities in an easy and natural way. To address these requirements, this work investigates the topic of imitation learning of motor skills. The focus lies on providing a humanoid robot with the ability to learn new bi-manual tasks through the observation of object trajectories. For this, an imitation learning framework is presented, which allows the robot to learn the important elements of an observed movement task by application of probabilistic encoding with Gaussian Mixture Models. The learned information is used to initialize an attractor-based movement generation algorithm that optimizes the reproduced movement towards the fulfillment of additional criteria, such as collision avoidance. Experiments performed with the humanoid robot ASIMO show that the proposed system is suitable for transferring information from a human demonstrator to the robot. These results provide a good starting point for more complex and interactive learning tasks.",
"We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation Reachable Space Map. Interestingly, the robot can use this map to: (i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation, the map is built incrementally during the execution of goal-directed actions, learning is autonomous and online. We implement our strategy on the 48-DOFs humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole-body (i.e. head, eyes, arm, waist, legs).",
"Learning to perform household tasks is a key step towards developing cognitive service robots. This requires that robots are capable of discovering how to use human-designed products. In this paper, we propose an active learning approach for acquiring object affordances and manipulation skills in a bottom-up manner. We address affordance learning in continuous state and action spaces without manual discretization of states or exploratory motor primitives. During exploration in the action space, the robot learns a forward model to predict action effects. It simultaneously updates the active exploration policy through reinforcement learning, whereby the prediction error serves as the intrinsic reward. By using the learned forward model, motor skills are obtained to achieve goal states of an object. We demonstrate through real-world experiments that a humanoid robot NAO is able to autonomously learn how to manipulate two types of garbage cans with lids that need to be opened and closed by different motor skills."
]
} |
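The Checkers robot in @cite_7 relies on a MinMax search tree; for the small game of Noughts and Crosses studied in this article, full minimax is tractable and can be sketched directly (the board encoding and the +1/-1/0 scoring are illustrative choices, not taken from the cited system):

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Exhaustive minimax for Noughts and Crosses.

    `board` is a list of 9 cells in {'X', 'O', ' '}. Returns (score, move)
    from X's perspective: +1 if X can force a win, -1 if O can, 0 for a draw.
    """
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0, None            # board full: draw
    best = None
    for m in moves:
        board[m] = player                               # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                                  # undo (backtrack)
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best
```

Calling `minimax(board, 'X')` yields an optimal move for X; the 5x5 variant mentioned in this article would need pruning or a learned policy, which is exactly where the DQN-based approaches below come in.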
1908.10398 | 2912215636 | Abstract The deep supervised and reinforcement learning paradigms (among others) have the potential to endow interactive multimodal social robots with the ability of acquiring skills autonomously. But it is still not very clear yet how they can be best deployed in real world applications. As a step in this direction, we propose a deep learning-based approach for efficiently training a humanoid robot to play multimodal games—and use the game of ‘Noughts and Crosses’ with two variants as a case study. Its minimum requirements for learning to perceive and interact are based on a few hundred example images, a few example multimodal dialogues and physical demonstrations of robot manipulation, and automatic simulations. In addition, we propose novel algorithms for robust visual game tracking and for competitive policy learning with high winning rates, which substantially outperform DQN-based baselines. While an automatic evaluation shows evidence that the proposed approach can be easily extended to new games with competitive robot behaviours, a human evaluation with 130 humans playing with the Pepper robot confirms that highly accurate visual perception is required for successful game play. | In the remainder of the article we describe a deep learning-based approach for efficiently training a robot with the ability of behaving with reasonable performance in a near real world deployment. In particular, we measure the effectiveness of neural-based game move interpretation and the effectiveness of Deep Q-Networks (DQN) @cite_16 for interactive social robots. Field trial results show that the proposed approach can induce reasonable and competitive behaviours, especially when they are not affected by unseen noisy conditions. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2771590356",
"2195446438",
"2726005894",
"2791863300"
],
"abstract": [
"Deep reinforcement learning for interactive multimodal robots is attractive for endowing machines with trainable skill acquisition. But this form of learning still represents several challenges. The challenge that we focus in this paper is effective policy learning. To address that, in this paper we compare the Deep Q-Networks (DQN) method against a variant that aims for stronger decisions than the original method by avoiding decisions with the lowest negative rewards. We evaluated our baseline and proposed algorithms in agents playing the game of Noughts and Crosses with two grid sizes (3×3 and 5×5). Experimental results show evidence that our proposed method can lead to more effective policies than the baseline DQN method, which can be used for training interactive social robots.",
"A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at the human and superhuman levels. Its creators at the Google DeepMind's team called the approach: Deep Q-Network (DQN). We present an extension of DQN by \"soft\" and \"hard\" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show level of performance superior to that of DQN. Moreover, built-in attention mechanisms allow a direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.",
"This research describes a study into the ability of a state of the art reinforcement learning algorithm to learn to perform multiple tasks. We demonstrate that the limitation of learning to performing two tasks can be mitigated with a competitive training method. We show that this approach results in improved generalization of the system when performing unforeseen tasks. The learning agent assessed is an altered version of the DeepMind deep Q–learner network (DQN), which has been demonstrated to outperform human players for a number of Atari 2600 games. The key findings of this paper is that there were significant degradations in performance when learning more than one game, and how this varies depends on both similarity and the comparative complexity of the two games.",
"Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning an effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g. hyper-redundant and groups of robot). To alleviate this problem, we present Q-CP a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse-of-dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance."
]
} |
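The Deep Q-Networks evaluated in this record approximate the classic Q-learning backup with a neural network. The tabular form of that backup, which also underlies the reinforcement-learned robot behaviours surveyed above, is compact enough to sketch (the dictionary-based table and the default alpha/gamma values are illustrative):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup:

        Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

    `Q` is a dict mapping (state, action) -> value; unseen pairs default
    to 0. DQN replaces this table with a neural network trained to
    regress the same temporal-difference target.
    """
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions), default=0.0)
    td_target = r + gamma * best_next
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (td_target - old)
    return Q[(s, a)]
```

The variant described in the cited abstract additionally filters out actions associated with the lowest negative rewards before this backup is applied.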
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | Top-down methods @cite_19 @cite_26 @cite_28 @cite_0 @cite_21 @cite_16 @cite_7 @cite_30 detect a single person's keypoints within a person bounding box. The person bounding boxes are usually generated by an object detector @cite_3 @cite_18 @cite_6. Mask R-CNN @cite_0 directly adds a keypoint detection branch on Faster R-CNN @cite_3 and reuses features after ROIPooling. G-RMI @cite_28 and subsequent methods further break top-down methods into two steps and use separate models for person detection and pose estimation. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16"
],
"mid": [
"1864464506",
"2951191545",
"2578797046",
"2949359214"
],
"abstract": [
"Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009.",
"This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face pro- posal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by prun- ing and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state- of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of prede- fined anchor boxes in the region proposals network (RPN) by exploit- ing a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth l1 -losses [14] of both the facial key-points and the face bounding boxes. In ex- periments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.",
"We propose a method for multi-person detection and 2-D pose estimation that achieves state-of-art results on the challenging COCO keypoints task. It is a simple, yet powerful, top-down approach consisting of two stages. In the first stage, we predict the location and scale of boxes which are likely to contain people, for this we use the Faster RCNN detector. In the second stage, we estimate the keypoints of the person potentially contained in each proposed bounding box. For each keypoint type we predict dense heatmaps and offsets using a fully convolutional ResNet. To combine these outputs we introduce a novel aggregation procedure to obtain highly localized keypoint predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression (NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based confidence score estimation, instead of box-level scoring. Trained on COCO data alone, our final system achieves average precision of 0.649 on the COCO test-dev set and the 0.643 test-standard sets, outperforming the winner of the 2016 COCO keypoints challenge and other recent state-of-art. Further, by using additional in-house labeled data we obtain an even higher average precision of 0.685 on the test-dev set and 0.673 on the test-standard set, more than 5 absolute improvement compared to the previous best performing method on the same dataset.",
"We propose a method for multi-person detection and 2-D pose estimation that achieves state-of-art results on the challenging COCO keypoints task. It is a simple, yet powerful, top-down approach consisting of two stages. In the first stage, we predict the location and scale of boxes which are likely to contain people; for this we use the Faster RCNN detector. In the second stage, we estimate the keypoints of the person potentially contained in each proposed bounding box. For each keypoint type we predict dense heatmaps and offsets using a fully convolutional ResNet. To combine these outputs we introduce a novel aggregation procedure to obtain highly localized keypoint predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression (NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based confidence score estimation, instead of box-level scoring. Trained on COCO data alone, our final system achieves average precision of 0.649 on the COCO test-dev set and the 0.643 test-standard sets, outperforming the winner of the 2016 COCO keypoints challenge and other recent state-of-art. Further, by using additional in-house labeled data we obtain an even higher average precision of 0.685 on the test-dev set and 0.673 on the test-standard set, more than 5 absolute improvement compared to the previous best performing method on the same dataset."
]
} |
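Both stages of the top-down pipelines above end in per-keypoint heatmaps that must be decoded into coordinates. The simplest decoder takes the argmax of each heatmap; this deliberately minimal sketch omits the offset aggregation and sub-pixel refinement that G-RMI and later methods use:

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Decode (K, H, W) keypoint heatmaps into a (K, 3) array of (x, y, score).

    Each row holds the column/row index of one heatmap's peak plus its
    value, used as the keypoint confidence. Real systems refine this
    with learned offsets or sub-pixel interpolation.
    """
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    idx = flat.argmax(axis=1)                 # flat index of each peak
    scores = flat[np.arange(K), idx]
    ys, xs = np.divmod(idx, W)                # unravel flat index to (row, col)
    return np.stack([xs, ys, scores], axis=1)
```

This decoding step is also why heatmap resolution matters: the quantization error of the argmax grows as the feature maps shrink, which motivates the higher-resolution maps discussed in this paper.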
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | Bottom-up methods @cite_11 @cite_27 @cite_8 @cite_17 @cite_12 detect identity-free body joints for all the persons in an image and then group them into individuals. OpenPose @cite_17 uses a two-branch multi-stage network with one branch for heatmap prediction and one branch for grouping. OpenPose uses a grouping method named part affinity fields, which learns a 2D vector field linking two keypoints. Grouping is done by calculating the line integral between two keypoints and grouping the pair with the largest integral. Newell et al. @cite_12 use a stacked hourglass network @cite_30 for both heatmap prediction and grouping. Grouping is done by a method named associative embedding, which assigns each keypoint a "tag" (a vector representation) and groups keypoints based on the @math distance between tag vectors. PersonLab @cite_1 uses a dilated ResNet @cite_5 and groups keypoints by directly learning a 2D offset field for each pair of keypoints. | {
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_8",
"@cite_1",
"@cite_27",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"1864464506",
"2559085405",
"2951856387",
"2962773068"
],
"abstract": [
"Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"We present a box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. The proposed PersonLab model tackles both semantic-level reasoning and object-part associations using part-based modeling. Our model employs a convolutional network which learns to detect individual keypoints and predict their relative displacements, allowing us to group keypoints into person pose instances. Further, we propose a part-induced geometric embedding descriptor which allows us to associate semantic person pixels with their corresponding person instance, delivering instance-level person segmentations. Our system is based on a fully-convolutional architecture and allows for efficient inference, with runtime essentially independent of the number of people present in the scene. Trained on COCO data alone, our system achieves COCO test-dev keypoint average precision of 0.665 using single-scale inference and 0.687 using multi-scale inference, significantly outperforming all previous bottom-up pose estimation systems. We are also the first bottom-up method to report competitive results for the person class in the COCO instance segmentation task, achieving a person category average precision of 0.417."
]
} |
1908.10357 | 2971237743 | In this paper, we are interested in bottom-up multi-person human pose estimation. A typical bottom-up pipeline consists of two main steps: heatmap prediction and keypoint grouping. We mainly focus on the first step for improving heatmap prediction accuracy. We propose Higher-Resolution Network (HigherHRNet), which is a simple extension of the High-Resolution Network (HRNet). HigherHRNet generates higher-resolution feature maps by deconvolving the high-resolution feature maps outputted by HRNet, which are spatially more accurate for small and medium persons. Then, we build high-quality multi-level features and perform multi-scale pose prediction. The extra computation overhead is marginal and negligible in comparison to existing bottom-up methods that rely on multi-scale image pyramids or large input image size to generate accurate pose heatmaps. HigherHRNet surpasses all existing bottom-up methods on the COCO dataset without using multi-scale test. The code and models will be released. | There are mainly four methods to generate high-resolution feature maps. (1) Encoder-decoder @cite_30 @cite_0 @cite_7 @cite_32 @cite_13 @cite_23 @cite_29 captures context information in the encoder path and recovers high-resolution features in the decoder path. The decoder usually contains a sequence of bilinear upsampling operations with skip connections from encoder features of the same resolution. (2) Dilated convolution @cite_14 @cite_31 @cite_2 @cite_15 @cite_33 @cite_24 @cite_34 @cite_9 ("atrous" convolution) is used to remove several strided convolutions/max poolings to preserve feature map resolution. Dilated convolution prevents losing spatial information but introduces more computational cost. (3) Deconvolution (transposed convolution) @cite_19 is used in sequence at the end of a network to efficiently increase feature map resolution.
SimpleBaseline @cite_19 demonstrates that deconvolution can generate high-quality feature maps for heatmap prediction. (4) Recently, a High-Resolution Network (HRNet) @cite_26 has been proposed as an efficient way to maintain a high-resolution path throughout the network. HRNet @cite_26 consists of multiple branches with different resolutions. Lower-resolution branches capture contextual information and higher-resolution branches preserve spatial information. With multi-scale fusions between branches, HRNet @cite_26 can generate high-resolution feature maps with rich semantics. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_15",
"@cite_29",
"@cite_9",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2785325870",
"2962742544",
"2963727650",
"2883996939"
],
"abstract": [
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.",
"Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7 mIoU on PASCAL-Context, 85.9 mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45 , which is comparable with state-of-the-art approaches with over 10A— more layers. The source code for the complete system are publicly available1.",
"The Reference-based Super-resolution (RefSR) super-resolves a low-resolution (LR) image given an external high-resolution (HR) reference image, where the reference image and LR image share similar viewpoint but with significant resolution gap ( (8 )). Existing RefSR methods work in a cascaded way such as patch matching followed by synthesis pipeline with two independently defined objective functions, leading to the inter-patch misalignment, grid effect and inefficient optimization. To resolve these issues, we present CrossNet, an end-to-end and fully-convolutional deep neural network using cross-scale warping. Our network contains image encoders, cross-scale warping layers, and fusion decoder: the encoder serves to extract multi-scale features from both the LR and the reference images; the cross-scale warping layers spatially aligns the reference feature map with the LR feature map; the decoder finally aggregates feature maps from both domains to synthesize the HR output. Using cross-scale warping, our network is able to perform spatial alignment at pixel-level in an end-to-end fashion, which improves the existing schemes [1, 2] both in precision (around 2 dB–4 dB) and efficiency (more than 100 times faster)."
]
} |
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | Before deep learning became popular, most traditional CV algorithms applied shallow hand-crafted features to solve action recognition. Improved Dense Trajectories (IDT) @cite_36 , which uses densely sampled trajectory features, indicates that temporal information could be processed differently from spatial information.
Instead of extending the Harris corner detector into 3D, it utilizes the warped optical flow field to obtain trajectories and eliminate the effects of camera motion in the video sequence. For each tracked corner, hand-crafted features such as HOF, HOG, and MBH are extracted along the trajectory. Despite their excellent performance, IDT and its improvements @cite_2 , @cite_37 , @cite_6 are still computationally expensive and become intractable on large-scale datasets. | {
"cite_N": [
"@cite_36",
"@cite_37",
"@cite_6",
"@cite_2"
],
"mid": [
"914561379",
"2081773958",
"2105101328",
"2169251375"
],
"abstract": [
"This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body as human motion is not constrained by the camera. Trajectories consistent with the homography are considered as due to camera motion, and thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in significant improvement on motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words (BOW) histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to BOW encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.",
"In this paper we propose a novel method for human action recognition based on boosted key-frame selection and correlated pyramidal motion feature representations. Instead of using an unsupervised method to detect interest points, a Pyramidal Motion Feature (PMF), which combines optical flow with a biologically inspired feature, is extracted from each frame of a video sequence. The AdaBoost learning algorithm is then applied to select the most discriminative frames from a large feature pool. In this way, we obtain the top-ranked boosted frames of each video sequence as the key frames which carry the most representative motion information. Furthermore, we utilise the correlogram which focuses not only on probabilistic distributions within one frame but also on the temporal relationships of the action sequence. In the classification phase, a Support-Vector Machine (SVM) is adopted as the final classifier for human action recognition. To demonstrate generalizability, our method has been systematically tested on a variety of datasets and shown to be more effective and accurate for action recognition compared to the previous work. We obtain overall accuracies of: 95.5 , 93.7 , and 36.5 with our proposed method on the KTH, the multiview IXMAS and the challenging HMDB51 datasets, respectively.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"We study the problem of action recognition from depth sequences captured by depth cameras, where noise and occlusion are common problems because they are captured with a single commodity camera. In order to deal with these issues, we extract semi-local features called random occupancy pattern ROP features, which employ a novel sampling scheme that effectively explores an extremely large sampling space. We also utilize a sparse coding approach to robustly encode these features. The proposed approach does not require careful parameter tuning. Its training is very fast due to the use of the high-dimensional integral image, and it is robust to the occlusions. Our technique is evaluated on two datasets captured by commodity depth cameras: an action dataset and a hand gesture dataset. Our classification results are superior to those obtained by the state of the art approaches on both datasets."
]
} |
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | An active line of research devoted to the design of deep networks for video representation learning has been trying to devise effective ConvNet architectures @cite_40 @cite_3 @cite_19 @cite_23 . @cite_40 attempts to design a deep network which stacks CNN-based frame-level features of a fixed size and then conducts spatiotemporal convolutions for video-level feature learning.
However, the results were unsatisfactory, implying the difficulty of CNNs in capturing motion information from video. Later, many works in this genre leverage ConvNets trained on frames to extract low-level features and then perform high-level temporal integration of those features using pooling @cite_28 @cite_30 , high-dimensional feature encoding @cite_26 @cite_21 , or recurrent neural networks @cite_23 @cite_22 @cite_3 @cite_32 . Recently, CNN-LSTM frameworks @cite_23 @cite_22 , which use stacked LSTM networks to connect frame-level representations and explore long-term temporal relationships in video for learning a more robust representation, have yielded improvements in modeling the temporal dynamics of convolutional features in videos. However, this genre, using a CNN as an encoder and an RNN as a decoder of the video, loses the low-level temporal context which is essential for action recognition. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_19",
"@cite_40",
"@cite_23"
],
"mid": [
"2751445731",
"1714639292",
"2517503862",
"2563705555"
],
"abstract": [
"3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets).",
"Generating natural language descriptions for in-the-wild videos is a challenging task. Most state-of-the-art methods for solving this problem borrow existing deep convolutional neural network (CNN) architectures (AlexNet, GoogLeNet) to extract a visual representation of the input video. However, these deep CNN architectures are designed for single-label centered-positioned object classification. While they generate strong semantic features, they have no inherent structure allowing them to detect multiple objects of different sizes and locations in the frame. Our paper tries to solve this problem by integrating the base CNN into several fully convolutional neural networks (FCNs) to form a multi-scale network that handles multiple receptive field sizes in the original image. FCNs, previously applied to image segmentation, can generate class heat-maps efficiently compared to sliding window mechanisms, and can easily handle multiple scales. To further handle the ambiguity over multiple objects and locations, we incorporate the Multiple Instance Learning mechanism (MIL) to consider objects in different positions and at different scales simultaneously. We integrate our multi-scale multi-instance architecture with a sequence-to-sequence recurrent neural network to generate sentence descriptions based on the visual representation. Ours is the first end-to-end trainable architecture that is capable of multi-scale region processing. Evaluation on a Youtube video dataset shows the advantage of our approach compared to the original single-scale whole frame CNN model. Our flexible and efficient architecture can potentially be extended to support other video processing tasks.",
"This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date."
]
} |
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | These works implied the importance of temporal information for action recognition and the incapability of CNNs to capture such information. To exploit the temporal information, some studies resort to 3D convolution kernels. @cite_12 @cite_19 apply 3D CNNs, in which both appearance and motion features are learned with 3D convolutions that simultaneously encode spatial and temporal cues.
Several works explored the effect of performing 3D convolutions over the long-range temporal structure with ConvNets @cite_1 @cite_24 . Unfortunately, the network accepts a predefined number of frames as input, and the right choice of temporal span is unclear. What's more, the 3D convolution kernel inevitably introduces more network parameters. Therefore, recent work has proposed variants that factorize a 3D filter into a combination of a 2D and a 1D filter, including "R(2+1)D" @cite_34 , "Pseudo-3D network" @cite_4 , and "factorized spatiotemporal convolutional networks" @cite_46 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_46",
"@cite_34",
"@cite_12"
],
"mid": [
"2761659801",
"2963820951",
"2772114784",
"2883429621"
],
"abstract": [
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51.",
"Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level “semantic” features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24)."
]
} |
1908.10136 | 2970136826 | Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results. | Another efficient way to extract temporal features is to precompute the optical flow @cite_11 using traditional optical flow estimation methods and train a separate CNN to encode the precomputed optical flow; this sidesteps explicit temporal modeling but is effective for motion feature extraction.
The famous two-stream architecture @cite_44 proposed to apply two CNN architectures separately on visual frames and stacked optical flows to extract spatiotemporal features and then fuse the classification scores. Further improvements based on this architecture include multi-granular structure @cite_14 @cite_13 , convolutional fusion @cite_25 @cite_1 , key-volume mining @cite_31 , temporal segment networks @cite_8 and ActionVLAD @cite_26 for video representation learning. Remarkably, a recent work (I3D) @cite_35 , which combines two-stream processing and 3D convolutions, holds the state-of-the-art action recognition results. The work reflects the power of ultra-deep architectures and pre-trained models. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_8",
"@cite_1",
"@cite_44",
"@cite_31",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2951845494",
"2517503862",
"2736596806",
"2798365843"
],
"abstract": [
"This paper shows how to extract dense optical flow from videos with a convolutional neural network (CNN). The proposed model constitutes a potential building block for deeper architectures to allow using motion without resorting to an external algorithm, for recognition in videos. We derive our network architecture from signal processing principles to provide desired invariances to image contrast, phase and texture. We constrain weights within the network to enforce strict rotation invariance and substantially reduce the number of parameters to learn. We demonstrate end-to-end training on only 8 sequences of the Middlebury dataset, orders of magnitude less than competing CNN-based motion estimation methods, and obtain comparable performance to classical methods on the Middlebury benchmark. Importantly, our method outputs a distributed representation of motion that allows representing multiple, transparent motions, and dynamic textures. Our contributions on network design and rotation invariance offer insights nonspecific to motion estimation.",
"This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets.",
"Two-stream convolutional networks have shown strong performance in video action recognition tasks. The key idea is to learn spatiotemporal features by fusing convolutional networks spatially and temporally. However, it remains unclear how to model the correlations between the spatial and temporal structures at multiple abstraction levels. First, the spatial stream tends to fail if two videos share similar backgrounds. Second, the temporal stream may be fooled if two actions resemble in short snippets, though appear to be distinct in the long term. We propose a novel spatiotemporal pyramid network to fuse the spatial and temporal features in a pyramid structure such that they can reinforce each other. From the architecture perspective, our network constitutes hierarchical fusion strategies which can be trained as a whole using a unified spatiotemporal loss. A series of ablation experiments support the importance of each fusion strategy. From the technical perspective, we introduce the spatiotemporal compact bilinear operator into video analysis tasks. This operator enables efficient training of bilinear fusion operations which can capture full interactions between the spatial and temporal features. Our final network achieves state-of-the-art results on standard video datasets.",
"Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher order feature statistics. We show that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization. Experimental results show that our approach achieves state-of-the-art on three publicly available fine-grained recognition datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and visualizations are provided to understand our approach."
]
} |
1908.10331 | 2953161934 | Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function. We address such problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text—without any manual annotations. Experimental results using different splits of training data report the following. First, that our agents learn reasonable policies in the environments they get familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, that the choice of sentence embedding size between 100 and 300 dimensions is not significantly different on test data. Third, that our proposed human-likeness rewards are reasonable for training chatbots as long as they use lengthy dialogue histories of ≥10 sentences. | Reinforcement Learning (RL) methods are typically based on value functions or policy search @cite_19 , which also applies to deep RL methods. While value functions have been particularly applied to task-oriented dialogue systems @cite_12 @cite_31 @cite_6 @cite_22 @cite_3 @cite_29 , policy search has been mostly applied to open-ended dialogue systems such as (chitchat) chatbots @cite_7 @cite_33 @cite_0 @cite_23 @cite_11 . This is not surprising given the fact that task-oriented dialogue systems use finite action sets, while chatbot systems use infinite action sets. So far there is a preference for policy search methods for chatbots, but it is not clear whether they should be preferred, given that they face problems such as convergence to local rather than global optima, sample inefficiency, and high variance.
This paper thus explores the feasibility of value-function-based methods for chatbots, which has not been done before---at least not from the perspective of deriving the action sets automatically, as attempted in this paper. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_12",
"@cite_11"
],
"mid": [
"2728821832",
"2410985346",
"1990671169",
"2111967991"
],
"abstract": [
"Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learn deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.",
"In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.",
"Highlights: We integrate user appraisals in a POMDP-based dialogue manager procedure. We employ additional socially-inspired rewards in a RL setup to guide the learning. A unified framework for speeding up the policy optimisation and user adaptation. We consider a potential-based reward shaping with a sample efficient RL algorithm. Evaluated using both user simulator (information retrieval) and user trials (HRI). This paper investigates some conditions under which polarized user appraisals gathered throughout the course of a vocal interaction between a machine and a human can be integrated in a reinforcement learning-based dialogue manager. More specifically, we discuss how this information can be cast into socially-inspired rewards for speeding up the policy optimisation for both efficient task completion and user adaptation in an online learning setting. For this purpose a potential-based reward shaping method is combined with a sample efficient reinforcement learning algorithm to offer a principled framework to cope with these potentially noisy interim rewards. The proposed scheme will greatly facilitate the system's development by allowing the designer to teach his system through explicit positive/negative feedbacks given as hints about task progress, in the early stage of training. At a later stage, the approach will be used as a way to ease the adaptation of the dialogue policy to specific user profiles. Experiments carried out using a state-of-the-art goal-oriented dialogue management framework, the Hidden Information State (HIS), support our claims in two configurations: firstly, with a user simulator in the tourist information domain (and thus simulated appraisals), and secondly, in the context of man-robot dialogue with real user trials.",
"Many reinforcement learning (RL) tasks, especially in robotics, consist of multiple sub-tasks that are strongly structured. Such task structures can be exploited by incorporating hierarchical policies that consist of gating networks and sub-policies. However, this concept has only been partially explored for real world settings and complete methods, derived from first principles, are needed. Real world settings are challenging due to large and continuous state-action spaces that are prohibitive for exhaustive sampling methods. We define the problem of learning sub-policies in continuous state action spaces as finding a hierarchical policy that is composed of a high-level gating policy to select the low-level sub-policies for execution by the agent. In order to efficiently share experience with all sub-policies, also called inter-policy learning, we treat these sub-policies as latent variables which allows for distribution of the update information between the sub-policies. We present three different variants of our algorithm, designed to be suitable for a wide variety of real world robot learning tasks and evaluate our algorithms in two real robot learning scenarios as well as several simulations and comparisons."
]
} |
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect the events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | The outliers of interest in this work are those caused by extreme events. Another related problem considers methods to detect outliers caused by data measurement errors, such as sensor malfunction, malicious tampering, or measurement error @cite_13 @cite_2 @cite_38 . The latter methods can be seen as part of a standard data cleaning or data pre-processing step.
On the other hand, outliers caused by extreme traffic carry valuable information for congestion management, and can provide agencies with insights into the performance of the urban network. The works @cite_1 @cite_35 @cite_21 explore the problem of detecting outliers caused by events, while the works @cite_6 @cite_3 @cite_50 @cite_4 focus on determining the root causes of the outliers. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_50",
"@cite_2",
"@cite_13"
],
"mid": [
"2117618130",
"2088400370",
"2083797062",
"2612114597"
],
"abstract": [
"The detection of outliers in spatio-temporal traffic data is an important research problem in the data mining and knowledge discovery community. However to the best of our knowledge, the discovery of relationships, especially causal interactions, among detected traffic outliers has not been investigated before. In this paper we propose algorithms which construct outlier causality trees based on temporal and spatial properties of detected outliers. Frequent substructures of these causality trees reveal not only recurring interactions among spatio-temporal outliers, but potential flaws in the design of existing traffic networks. The effectiveness and strength of our algorithms are validated by experiments on a very large volume of real taxi trajectories in an urban road network.",
"In order to improve the veracity and reliability of a traffic model built, or to extract important and valuable information from collected traffic data, the technique of outlier mining has been introduced into the traffic engineering domain for detecting and analyzing the outliers in traffic data sets. Three typical outlier algorithms, respectively the statistics-based approach, the distance-based approach, and the density-based local outlier approach, are described with respect to the principle, the characteristics and the time complexity of the algorithms. A comparison among the three algorithms is made through application to intelligent transportation systems (ITS). Two traffic data sets with different dimensions have been used in our experiments carried out, one is travel time data, and the other is traffic flow data. We conducted a number of experiments to recognize outliers hidden in the data sets before building the travel time prediction model and the traffic flow foundation diagram. In addition, some artificial generated outliers are introduced into the traffic flow data to see how well the different algorithms detect them. Three strategies-based on ensemble learning, partition and average LOF have been proposed to develop a better outlier recognizer. The experimental results reveal that these methods of outlier mining are feasible and valid to detect outliers in traffic data sets, and have a good potential for use in the domain of traffic engineering. The comparison and analysis presented in this paper are expected to provide some insights to practitioners who plan to use outlier mining for ITS data.",
"Traffic volume data is already collected and used for a variety of purposes in intelligent transportation system (ITS). However, the collected data might be abnormal due to the problem of outlier data caused by malfunctions in data collection and record systems. To fully analyze and operate the collected data, it is necessary to develop a validate method for addressing the outlier data. Many existing algorithms have studied the problem of outlier recovery based on the time series methods. In this paper, a multiway tensor model is proposed for constructing the traffic volume data based on the intrinsic multilinear correlations, such as day to day and hour to hour. Then, a novel tensor recovery method, called ADMM-TR, is proposed for recovering outlier data of traffic volume data. The proposed method is evaluated on synthetic data and real world traffic volume data. Experimental results demonstrate the practicability, effectiveness, and advantage of the proposed method, especially for the real world traffic volume data.",
"Cross-scene regression tasks, such as congestion level detection and crowd counting, are useful but challenging. There are two main problems, which limit the performance of existing algorithms. The first one is that no appropriate congestion-related feature can reflect the real density in scenes. Though deep learning has been proved to be capable of extracting high level semantic representations, it is hard to converge on regression tasks, since the label is too weak to guide the learning of parameters in practice. Thus, many approaches utilize additional information, such as a density map, to guide the learning, which increases the effort of labeling. Another problem is that most existing methods are composed of several steps, for example, feature extraction and regression. Since the steps in the pipeline are separated, these methods face the problem of complex optimization. To remedy it, a deep metric learning-based regression method is proposed to extract density related features, and learn better distance measurement simultaneously. The proposed networks trained end-to-end for better optimization can be used for crowdedness regression tasks, including congestion level detection and crowd counting. Extensive experiments confirm the effectiveness of the proposed method."
]
} |
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect the events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | Low-rank matrix and tensor learning has been widely used to exploit the inner structure of data. Various applications have benefited from matrix- and tensor-based methods, including data completion @cite_43 @cite_47 , link prediction @cite_33 , and network structure clustering @cite_8 .
"cite_N": [
"@cite_43",
"@cite_47",
"@cite_33",
"@cite_8"
],
"mid": [
"2014237985",
"2744847173",
"2084983808",
"2951021721"
],
"abstract": [
"The low-rank matrix completion problem is a fundamental machine learning problem with many important applications. The standard low-rank matrix completion methods relax the rank minimization problem by the trace norm minimization. However, this relaxation may make the solution seriously deviate from the original solution. Meanwhile, most completion methods minimize the squared prediction errors on the observed entries, which is sensitive to outliers. In this paper, we propose a new robust matrix completion method to address these two problems. The joint Schatten @math -norm and @math -norm are used to better approximate the rank minimization problem and enhance the robustness to outliers. The extensive experiments are performed on both synthetic data and real world applications in collaborative filtering and social network link prediction. All empirical results show our new method outperforms the standard matrix completion methods.",
"Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, however, this approach is confronted with two challenging problems, namely (1) tensor rank determination; (2) handling real tensor data which only approximately fulfils the low-rank requirement. To address these two issues, we develop a data-adaptive tensor completion model which explicitly represents both the low-rank and non-low-rank structures in a latent tensor. Representing the non-low-rank structure separately from the low-rank one allows priors which capture the important distinctions between the two, thus enabling more accurate modelling, and ultimately, completion. Through defining a new tensor rank, we develop a sparsity induced prior for the low-rank structure, with which the tensor rank can be automatically determined. The prior for the non-low-rank structure is established based on a mixture of Gaussians which is shown to be flexible enough, and powerful enough, to inform the completion process for a variety of real tensor data. With these two priors, we develop a Bayesian minimum mean squared error estimate (MMSE) framework for inference which provides the posterior mean of missing entries as well as their uncertainty. Compared with the state-of-the-art methods in various applications, the proposed model produces more accurate completion results.",
"The low-rank matrix completion problem is a fundamental machine learning and data mining problem with many important applications. The standard low-rank matrix completion methods relax the rank minimization problem by the trace norm minimization. However, this relaxation may make the solution seriously deviate from the original solution. Meanwhile, most completion methods minimize the squared prediction errors on the observed entries, which is sensitive to outliers. In this paper, we propose a new robust matrix completion method to address these two problems. The joint Schatten @math p -norm and @math l p -norm are used to better approximate the rank minimization problem and enhance the robustness to outliers. The extensive experiments are performed on both synthetic data and real-world applications in collaborative filtering prediction and social network link recovery. All empirical results show that our new method outperforms the standard matrix completion methods.",
"Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a @math -way tensor of length @math and Tucker rank @math from Gaussian measurements requires @math observations. In contrast, a certain (intractable) nonconvex formulation needs only @math observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with @math observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. @math , nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly."
]
} |
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect the events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | The works most relevant to ours are robust matrix and tensor PCA methods for outlier detection. @math norm regularized robust tensor recovery, as proposed by Goldfarb and Qin @cite_46 , is useful when data is polluted with unstructured random noise. @cite_5 also used @math norm regularized tensor decomposition for traffic data recovery, in the face of random noise corruption.
However, if outliers are structured, for example grouped in columns, @math norm regularization does not yield good results. In addition, although traffic is also modeled in tensor format in @cite_5 , only a single road segment is considered, without taking into account the spatial structure of the network. | {
"cite_N": [
"@cite_5",
"@cite_46"
],
"mid": [
"2030927653",
"2953204310",
"2435918055",
"1521197246"
],
"abstract": [
"In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].",
"In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].",
"This paper studies the Tensor Robust Principal Component (TRPCA) problem which extends the known Robust PCA ( 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider that we have a 3-way tensor @math such that @math , where @math has low tubal rank and @math is sparse. Is that possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the @math -norm, i.e., @math , where @math . Interestingly, TRPCA involves RPCA as a special case when @math and thus it is a simple and elegant tensor extension of RPCA. Also numerical experiments verify our theory and the application for the image denoising demonstrates the effectiveness of our method.",
"We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given a order- @math tensor @math of the form @math , where @math is a signal-to-noise ratio, @math is a unit vector, and @math is a random noise tensor, the goal is to recover the planted vector @math . For the case that @math has iid standard Gaussian entries, we give an efficient algorithm to recover @math whenever @math , and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of @math . The previous best algorithms with provable guarantees required @math . In the regime @math , natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree- @math sum-of-squares relaxations of the maximum-likelihood-estimation problem for the statistical model. To complement our algorithmic results, we show that degree- @math sum-of-squares relaxations break down for @math , which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings."
]
} |
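The TRPCA abstract quoted above splits an observed array into a low-rank part plus a sparse part by minimizing a weighted combination of a nuclear norm and an l1 norm. Below is a minimal matrix-case sketch of that decomposition via ADMM (singular-value thresholding plus entrywise shrinkage). This is illustrative only: the weight `lam = 1/sqrt(max(m, n))` and the `mu` continuation schedule are conventional defaults, not parameters taken from the cited papers, and the tensor versions replace the matrix SVD step with t-SVD or Tucker-based surrogates.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca(M, lam=None, n_iter=300):
    """Decompose M ~ L + S with L low rank and S entrywise sparse (matrix RPCA)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common initial penalty
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = soft_threshold(M - L + Y / mu, lam / mu)  # sparse update
        Y += mu * (M - L - S)                         # dual ascent
        mu *= 1.05                                    # continuation on the penalty
    return L, S
```

On synthetic low-rank-plus-sparse data this recovers both components to high accuracy, which is the mechanism the robust tensor recovery methods above generalize to fibers of a tensor.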
1908.10198 | 2970102015 | Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serves as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets, and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events, and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing @math norm regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can exactly detect outliers even with missing data rates as high as 40%, conditioned on the outlier corruption rate and the Tucker rank of the low rank tensor. Finally, we apply our method on a real traffic dataset corresponding to downtown Nashville, TN, USA and successfully detect events like severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions. | In the face of large events, outliers tend to group in columns or fibers of the dataset, as illustrated in section . @math norm regularized decomposition is suitable for group outlier detection, as shown in @cite_25 @cite_51 for matrices, and @cite_27 @cite_41 for tensors.
In addition, @cite_9 introduced a multi-view low-rank analysis framework for outlier detection, and @cite_15 used discriminant tensor factorization for event analytics. Our methods differ from the existing tensor outlier pursuit @cite_27 @cite_41 in that those works deal with slab outliers, i.e., the outliers form an entire slice of the tensor rather than fibers. Moreover, compared with existing works, we take one step further and deal with partial observations. As stated in Section , without an overall understanding of the underlying pattern, we can easily impute the missing entries incorrectly and influence our decision about outliers. We will show in the simulation section that our new algorithm can exactly detect the outliers even with 40% missing data. | {
"cite_N": [
"@cite_41",
"@cite_9",
"@cite_27",
"@cite_15",
"@cite_51",
"@cite_25"
],
"mid": [
"2091449379",
"1825959699",
"2613951549",
"2112292531"
],
"abstract": [
"In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependant relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. 
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC, and between FaLRTC and HaLRTC the former is more efficient to obtain a low accuracy solution and the latter is preferred if a high-accuracy solution is desired.",
"Many problems in computational neuroscience, neuroinformatics, pattern image recognition, signal processing and machine learning generate massive amounts of multidimensional data with multiple aspects and high dimensionality. Tensors (i.e., multi-way arrays) provide often a natural and compact representation for such massive multidimensional data via suitable low-rank approximations. Big data analytics require novel technologies to efficiently process huge datasets within tolerable elapsed times. Such a new emerging technology for multidimensional big data is a multiway analysis via tensor networks (TNs) and tensor decompositions (TDs) which represent tensors by sets of factor (component) matrices and lower-order (core) tensors. Dynamic tensor analysis allows us to discover meaningful hidden structures of complex data and to perform generalizations by capturing multi-linear and multi-aspect relationships. We will discuss some fundamental TN models, their mathematical and graphical descriptions and associated learning algorithms for large-scale TDs and TNs, with many potential applications including: Anomaly detection, feature extraction, classification, cluster analysis, data fusion and integration, pattern recognition, predictive modeling, regression, time series analysis and multiway component analysis. Keywords: Large-scale HOSVD, Tensor decompositions, CPD, Tucker models, Hierarchical Tucker (HT) decomposition, low-rank tensor approximations (LRA), Tensorization Quantization, tensor train (TT QTT) - Matrix Product States (MPS), Matrix Product Operator (MPO), DMRG, Strong Kronecker Product (SKP).",
"Tensors are valuable tools to represent Electroencephalogram (EEG) data. Tucker decomposition is the most used tensor decomposition in multidimensional discriminant analysis and tensor extension of Linear Discriminant Analysis (LDA), called Higher Order Discriminant Analysis (HODA), is a popular tensor discriminant method used for analyzing Event Related Potentials (ERP). In this paper, we introduce a new tensor-based feature reduction technique, named Higher Order Spectral Regression Discriminant Analysis (HOSRDA), for use in a classification framework for ERP detection. The proposed method (HOSRDA) is a tensor extension of Spectral Regression Discriminant Analysis (SRDA) and casts the eigenproblem of HODA to a regression problem. The formulation of HOSRDA can open a new framework for adding different regularization constraints in higher order feature reduction problem. Additionally, when the dimension and number of samples is very large, the regression problem can be solved via efficient iterative algorithms. We applied HOSRDA on data of a P300 speller from BCI competition III and reached average character detection accuracy of 96.5% for the two subjects. HOSRDA outperforms almost all of other reported methods on this dataset. Additionally, the results of our method are fairly comparable with those of other methods when 5 and 10 repetitions are used in the P300 speller paradigm.",
"We present a scalable Bayesian framework for low-rank decomposition of multiway tensor data with missing observations. The key issue of pre-specifying the rank of the decomposition is sidestepped in a principled manner using a multiplicative gamma process prior. Both continuous and binary data can be analyzed under the framework, in a coherent way using fully conjugate Bayesian analysis. In particular, the analysis in the non-conjugate binary case is facilitated via the use of the Polya-Gamma sampling strategy which elicits closed-form Gibbs sampling updates. The resulting samplers are efficient and enable us to apply our framework to large-scale problems, with time-complexity that is linear in the number of observed entries in the tensor. This is especially attractive in analyzing very large but sparsely observed tensors with very few known entries. Moreover, our method admits easy extension to the supervised setting where entities in one or more tensor modes have labels. Our method outperforms several state-of-the-art tensor decomposition methods on various synthetic and benchmark real-world datasets."
]
} |
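The column-/fiber-sparse outlier models discussed in this row are typically handled with an l2,1-type group norm (the concrete norm symbols are stripped to `@math` placeholders in this dump). Its proximal operator is a group soft-thresholding that zeroes out whole columns of small energy instead of individual entries, which is exactly what lets the solver flag an entire corrupted fiber. A small numpy sketch of that one building block (not the full ADMM solver of any cited work):

```python
import numpy as np

def prox_l21(X, tau):
    """Column-wise group shrinkage: proximal operator of tau * sum_j ||X[:, j]||_2.

    Columns whose l2 norm falls below tau are zeroed entirely; the rest are
    uniformly scaled toward zero. This matches the column-sparse outlier model.
    """
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale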
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and term to the whole query relationship, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by the three popular search engines namely, Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to obtain the term-term relationship, and correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89 on the MAP score and 30.83 on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in the retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | Query expansion has a long history of literature in the field of information retrieval. It was first coined by @cite_29 in the 1960s for literature indexing and searching in a mechanized library system. 
In 1971, Rocchio @cite_21 brought QE to spotlight through the relevance feedback method and its characterization in a vector space model. While this was the first use of relevance feedback method, Rocchio's method is still used for QE in its original and modified forms. The availability of several standard text collections (e.g., Text Retrieval Conference (TREC) http: trec.nist.gov , and Forum for Information Retrieval Evaluation (FIRE) http: fire.irsi.res.in ) and IR platforms (e.g., Terrier http: terrier.org and Apache Lucene http: lucene.apache.org ) have been very instrumental in evaluating the progress in this area in a systematic way. Carpineto and Romano @cite_22 and Azad and Deepak @cite_2 present state-of-the-art comprehensive surveys on QE. This article focuses on web based QE techniques. | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_22",
"@cite_2"
],
"mid": [
"2102563107",
"2963764152",
"2531645065",
"1898200041"
],
"abstract": [
"Pseudo-relevance feedback (PRF) via query-expansion has been proven to be e®ective in many information retrieval (IR) tasks. In most existing work, the top-ranked documents from an initial search are assumed to be relevant and used for PRF. One problem with this approach is that one or more of the top retrieved documents may be non-relevant, which can introduce noise into the feedback process. Besides, existing methods generally do not take into account the significantly different types of queries that are often entered into an IR system. Intuitively, Wikipedia can be seen as a large, manually edited document collection which could be exploited to improve document retrieval effectiveness within PRF. It is not obvious how we might best utilize information from Wikipedia in PRF, and to date, the potential of Wikipedia for this task has been largely unexplored. In our work, we present a systematic exploration of the utilization of Wikipedia in PRF for query dependent expansion. Specifically, we classify TREC topics into three categories based on Wikipedia: 1) entity queries, 2) ambiguous queries, and 3) broader queries. We propose and study the effectiveness of three methods for expansion term selection, each modeling the Wikipedia based pseudo-relevance information from a different perspective. We incorporate the expansion terms into the original query and use language modeling IR to evaluate these methods. Experiments on four TREC test collections, including the large web collection GOV2, show that retrieval performance of each type of query can be improved. In addition, we demonstrate that the proposed method out-performs the baseline relevance model in terms of precision and robustness.",
"Abstract With the ever increasing size of the web, relevant information extraction on the Internet with a query formed by a few keywords has become a big challenge. Query Expansion (QE) plays a crucial role in improving searches on the Internet. Here, the user’s initial query is reformulated by adding additional meaningful terms with similar significance. QE – as part of information retrieval (IR) – has long attracted researchers’ attention. It has become very influential in the field of personalized social document, question answering, cross-language IR, information filtering and multimedia IR. Research in QE has gained further prominence because of IR dedicated conferences such as TREC (Text Information Retrieval Conference) and CLEF (Conference and Labs of the Evaluation Forum). This paper surveys QE techniques in IR from 1960 to 2017 with respect to core techniques, data sources used, weighting and ranking methodologies, user participation and applications – bringing out similarities and differences.",
"ABSTRACTQuery expansion is a well-known method for improving the performance of information retrieval systems. Pseudo-relevance feedback (PRF)-based query expansion is a type of query expansion approach that assumes the top-ranked retrieved documents are relevant. The addition of all the terms of PRF documents is not important or appropriate for expanding the original user query. Hence, the selection of proper expansion term is very important for improving retrieval system performance. Various individual query expansion term selection methods have been widely investigated for improving system performance. Every individual expansion term selection method has its own weaknesses and strengths. In order to minimize the weaknesses and utilizing the strengths of the individual method, we used multiple terms selection methods together. First, this paper explored the possibility of improving overall system performance by using individual query expansion terms selection methods. Further, ranks-aggregating method n...",
"Automatic query expansion may be used in document retrieval to improve search effectiveness. Traditional query expansion methods are based on the document collection itself. For example, pseudo-relevance feedback (PRF) assumes that the top retrieved documents are relevant, and uses the terms extracted from those documents for query expansion. However, there are other sources of evidence that can be used for expansion, some of which may give better search results with greater efficiency at query time. In this paper, we use the external evidence, especially the hints obtained from external web search engines to expand the original query. We explore 6 different methods using search engine query log, snippets and search result documents. We conduct extensive experiments, with state of the art PRF baselines and careful parameter tuning, on three TREC collections: AP, WT10g, GOV2. Log-based methods do not show consistent significant gains, despite being very efficient at query-time. Snippet-based expansion, using the summaries provided by an external search engine, provides significant effectiveness gains with good efficiency at query-time."
]
} |
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and term to the whole query relationship, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by the three popular search engines namely, Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to obtain the term-term relationship, and correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89 on the MAP score and 30.83 on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in the retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | Based on web search query logs, two types of QE approaches are usually used. The first type extract features from the queries, stored in logs, that are related to the user's original query, with or without making use of their respective retrieval results @cite_43 @cite_20 . 
In techniques based on the first approach, some use their combined retrieval results @cite_41 , while some do not (e.g., @cite_43 @cite_20 ). In the second type of approach, the features are extracted on relational behavior of queries and retrieval results. For example, @cite_26 represent queries in a graph based vector space model (query-click bipartite graph) and analyze the graph constructed using the query logs. Under the second approach, the expansion terms are extracted form several approaches: through user clicks @cite_13 @cite_20 @cite_36 , directly from the clicked results @cite_9 @cite_44 @cite_40 , the top results from the past query terms entered by the user @cite_10 @cite_19 , and queries related with the same documents @cite_4 @cite_6 . However, the second type of approach is more widely used and has been shown to provide better results. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_41",
"@cite_36",
"@cite_9",
"@cite_10",
"@cite_6",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2122901787",
"1990387666",
"2066806792",
"1548134621"
],
"abstract": [
"We consider learning query and document similarities from a click-through bipartite graph with metadata on the nodes. The metadata contains multiple types of features of queries and documents. We aim to leverage both the click-through bipartite graph and the features to learn query-document, document-document, and query-query similarities. The challenges include how to model and learn the similarity functions based on the graph data. We propose solving the problems in a principled way. Specifically, we use two different linear mappings to project the queries and documents in two different feature spaces into the same latent space, and take the dot product in the latent space as their similarity. Query-query and document-document similarities can also be naturally defined as dot products in the latent space. We formalize the learning of similarity functions as learning of the mappings that maximize the similarities of the observed query-document pairs on the enriched click-through bipartite graph. When queries and documents have multiple types of features, the similarity function is defined as a linear combination of multiple similarity functions, each based on one type of features. We further solve the learning problem by using a new technique called Multi-view Partial Least Squares (M-PLS). The advantages include the global optimum which can be obtained through Singular Value Decomposition (SVD) and the capability of finding high quality similar queries. We conducted large scale experiments on enterprise search data and web search data. The experimental results on relevance ranking and similar query finding demonstrate that the proposed method works significantly better than the baseline methods.",
"This paper proposes an effective term suggestion approach to interactive Web search. Conventional approaches to making term suggestions involve extracting co-occurring keyterms from highly ranked retrieved documents. Such approaches must deal with term extraction difficulties and interference from irrelevant documents, and, more importantly, have difficulty extracting terms that are conceptually related but do not frequently co-occur in documents. In this paper, we present a new, effective log-based approach to relevant term extraction and term suggestion. Using this approach, the relevant terms suggested for a user query are those that co-occur in similar query sessions from search engine logs, rather than in the retrieved documents. In addition, the suggested terms in each interactive search step can be organized according to its relevance to the entire query session, rather than to the most recent single query as in conventional approaches. The proposed approach was tested using a proxy server log containing about two million query transactions submitted to search engines in Taiwan. The obtained experimental results show that the proposed approach can provide organized and highly relevant terms, and can exploit the contextual information in a user's query session to make more effective suggestions.",
"We present the design of a structured search engine which returns a multi-column table in response to a query consisting of keywords describing each of its columns. We answer such queries by exploiting the millions of tables on the Web because these are much richer sources of structured knowledge than free-format text. However, a corpus of tables harvested from arbitrary HTML web pages presents huge challenges of diversity and redundancy not seen in centrally edited knowledge bases. We concentrate on one concrete task in this paper. Given a set of Web tables T1,..., Tn, and a query Q with q sets of keywords Q1,..., Qq, decide for each Ti if it is relevant to Q and if so, identify the mapping between the columns of Ti and query columns. We represent this task as a graphical model that jointly maps all tables by incorporating diverse sources of clues spanning matches in different parts of the table, corpus-wide co-occurrence statistics, and content overlap across table columns. We define a novel query segmentation model for matching keywords to table columns, and a robust mechanism of exploiting content overlap across table columns. We design efficient inference algorithms based on bipartite matching and constrained graph cuts to solve the joint labeling task. Experiments on a workload of 59 queries over a 25 million web table corpus shows significant boost in accuracy over baseline IR methods.",
"We critically evaluate the current state of research in multiple query opGrnization, synthesize the requirements for a modular opCrnizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In rhe context of this archiuzcture. we provide an improved subsumption algorithm. and discuss migration paths from single-query to multiple-query oplimizers. The architecture has three key ingredients. First. each type of work is performed at an appropriate level of abstraction. Segond, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable horn the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (h4QO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator gaph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and SOULS. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1 .l shows a multi-strategy generated exploiting commonalities among queries Ql-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, ejjiciency, and ease of Permission to copy without fee all a part of this mataial is granted provided that the copies are nut made a diitributed for direct commercial advantage, the VIDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Da Blse Endowment. To copy otherwise. 
urturepublim identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans); and search effectively to choose a good combination of l-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. ‘Finally, ease of implementation is crucial an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional soft-"
]
} |
1908.10193 | 2970619785 | In the field of information retrieval, query expansion (QE) has long been used as a technique to deal with the fundamental issue of word mismatch between a user's query and the target information. In the context of the relationship between the query and expanded terms, existing weighting techniques often fail to appropriately capture the term-term relationship and the term-to-whole-query relationship, resulting in low retrieval effectiveness. Our proposed QE approach addresses this by proposing three weighting models based on (1) tf-itf, (2) k-nearest neighbor (kNN) based cosine similarity, and (3) correlation score. Further, to extract the initial set of expanded terms, we use pseudo-relevant web knowledge consisting of the top N web pages returned by the three popular search engines, namely Google, Bing, and DuckDuckGo, in response to the original query. Among the three weighting models, tf-itf scores each of the individual terms obtained from the web content, kNN-based cosine similarity scores the expansion terms to obtain the term-term relationship, and correlation score weighs the selected expansion terms with respect to the whole query. The proposed model, called web knowledge based query expansion (WKQE), achieves an improvement of 25.89% on the MAP score and 30.83% on the GMAP score over the unexpanded queries on the FIRE dataset. A comparative analysis of the WKQE techniques with other related approaches clearly shows significant improvement in the retrieval performance. We have also analyzed the effect of varying the number of pseudo-relevant documents and expansion terms on the retrieval effectiveness of the proposed model. | In the context of web-based knowledge, anchor texts can play a role similar to the user's search queries because an anchor text to a page can serve as a brief summary of its content.
Anchor texts were first used by McBryan @cite_28 for associating hyperlinks with linked pages as well as with the pages in which the anchor texts are found. Kraft and Zien @cite_56 also used anchor texts for QE; their experimental results suggest that anchor texts can be used to improve traditional QE based on query logs. Similarly, Dang and Croft @cite_3 suggested that anchor text could be an effective alternative to query logs. They demonstrated the effectiveness of QE techniques using log-based stemming through experiments on standard TREC collections.
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_56"
],
"mid": [
"155984473",
"2171161922",
"2080825533",
"1493108551"
],
"abstract": [
"In the Navigational Retrieval Subtask 2 (Navi-2) at the NTCIR-5 WEB Task, a hypothetical user knows a specific item (e.g., a product, company, and person) and requires to find one or more representative Web pages related to the item. This paper describes our system participated in the Navi-2 subtask and reports the evaluation results of our system. Our system uses three types of information obtained from the NTCIR5 Web collection: page content, anchor text, and link structure. Specifically, we exploit anchor text in two perspectives. First, we compare the effectiveness of two different methods to model anchor text. Second, we use anchor text to extract synonyms for query expansion purposes. We show the effectiveness of our system experimentally.",
"When searching large hypertext document collections, it is often possible that there are too many results available for ambiguous queries. Query refinement is an interactive process of query modification that can be used to narrow down the scope of search results. We propose a new method for automatically generating refinements or related terms to queries by mining anchor text for a large hypertext document collection. We show that the usage of anchor text as a basis for query refinement produces high quality refinement suggestions that are significantly better in terms of perceived usefulness compared to refinements that are derived using the document content. Furthermore, our study suggests that anchor text refinements can also be used to augment traditional query refinement algorithms based on query logs, since they typically differ in coverage and produce different refinements. Our results are based on experiments on an anchor text collection of a large corporate intranet.",
"Query reformulation techniques based on query logs have been studied as a method of capturing user intent and improving retrieval effectiveness. The evaluation of these techniques has primarily, however, focused on proprietary query logs and selected samples of queries. In this paper, we suggest that anchor text, which is readily available, can be an effective substitute for a query log and study the effectiveness of a range of query reformulation techniques (including log-based stemming, substitution, and expansion) using standard TREC collections. Our results show that log-based query reformulation techniques are indeed effective with standard collections, but expansion is a much safer form of query modification than word substitution. We also show that using anchor text as a simulated query log is as least as effective as a real log for these techniques.",
"This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. 
For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection."
]
} |
1908.09775 | 2969344335 | Despite the remarkable success of deep learning in pattern recognition, deep network models face the problem of training a large number of parameters. In this paper, we propose and evaluate a novel multi-path wavelet neural network architecture for image classification with far fewer trainable parameters. The model architecture consists of a multi-path layout with several levels of wavelet decompositions performed in parallel followed by fully connected layers. These decomposition operations comprise wavelet neurons with learnable parameters, which are updated during the training phase using the back-propagation algorithm. We evaluate the performance of the introduced network using common image datasets without data augmentation except for SVHN and compare the results with influential deep learning models. Our findings support the possibility of reducing the number of parameters significantly in deep neural networks without compromising accuracy. | The wavelet transform is a powerful tool for processing data and developing time-frequency representations. A thorough theoretical background on wavelets is explained in @cite_41 @cite_25 . Applying the wavelet transform in the context of neural networks is not novel. Earlier work @cite_40 @cite_27 has presented a theoretical approach for wavelet-based feed-forward neural networks. The ability to use wavelet-based interpolation for real-time unknown function approximation has been researched by Bernard @cite_24 ; in this case, the results have been achieved with relatively few coefficients due to the high compression ability of the wavelets. The work by Alexandridis @cite_33 proposed a statistical model identification framework for applying wavelet networks, investigating aspects including architecture, initialization, and variable and model selection.
Literature indicates applications of wavelet-based neural networks in many different fields such as signal classification and compression @cite_11 @cite_28 @cite_8 , time series prediction @cite_35 @cite_19 @cite_44 , electrical load forecasting @cite_45 @cite_5 and power distribution recognition @cite_10 .
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_28",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_44",
"@cite_45",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2288990565",
"2094887027",
"2741196023",
"2115340664"
],
"abstract": [
"We consider a wavelet neural network approach for electricity load prediction. The wavelet transform is used to decompose the load into different frequency components that are predicted separately using neural networks. We firstly propose a new approach for signal extension which minimizes the border distortion when decomposing the data, outperforming three standard methods. We also compare the performance of the standard wavelet transform, which is shift variant, with a non-decimated transform, which is shift invariant. Our results show that the use of shift invariant transform considerably improves the prediction accuracy. In addition to wavelet neural network, we also present the results of wavelet linear regression, wavelet model trees and a number of baselines. Our evaluation uses two years of Australian electricity data.",
"We propose a wavelet multiscale decomposition-based autoregressive approach for the prediction of 1-h ahead load based on historical electricity load data. This approach is based on a multiple resolution decomposition of the signal using the non-decimated or redundant Haar a trous wavelet transform whose advantage is taking into account the asymmetric nature of the time-varying data. There is an additional computational advantage in that there is no need to recompute the wavelet transform (wavelet coefficients) of the full signal if the electricity data (time series) is regularly updated. We assess results produced by this multiscale autoregressive (MAR) method, in both linear and non-linear variants, with single resolution autoregression (AR), multilayer perceptron (MLP), Elman recurrent neural network (ERN) and the general regression neural network (GRNN) models. Results are based on the New South Wales (Australia) electricity load data that is provided by the National Electricity Market Management Company (NEMMCO).",
"Recent advances have seen a surge of deep learning approaches for image super-resolution. Invariably, a network, e.g. a deep convolutional neural network (CNN) or auto-encoder is trained to learn the relationship between low and high-resolution image patches. Recognizing that a wavelet transform provides a \"coarse\" as well as \"detail\" separation of image content, we design a deep CNN to predict the \"missing details\" of wavelet coefficients of the low-resolution images to obtain the Super-Resolution (SR) results, which we name Deep Wavelet Super-Resolution (DWSR). Out network is trained in the wavelet domain with four input and output channels respectively. The input comprises of 4 sub-bands of the low-resolution wavelet coefficients and outputs are residuals (missing details) of 4 sub-bands of high-resolution wavelet coefficients. Wavelet coefficients and wavelet residuals are used as input and outputs of our network to further enhance the sparsity of activation maps. A key benefit of such a design is that it greatly reduces the training burden of learning the network that reconstructs low frequency details. The output prediction is added to the input to form the final SR wavelet coefficients. Then the inverse 2d discrete wavelet transformation is applied to transform the predicted details and generate the SR results. We show that DWSR is computationally simpler and yet produces competitive and often better results than state-of-the-art alternatives.",
"The wavelet transform has emerged over recent years as a powerful time–frequency analysis and signal coding tool favoured for the interrogation of complex nonstationary signals. Its application to biosignal processing has been at the forefront of these developments where it has been found particularly useful in the study of these, often problematic, signals: none more so than the ECG. In this review, the emerging role of the wavelet transform in the interrogation of the ECG is discussed in detail, where both the continuous and the discrete transform are considered in turn."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős-Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime.
| The connectivity (respectively, @math -connectivity) of wireless sensor networks secured by the classical scheme under a uniform on-off channel model was investigated in @cite_33 (respectively, @cite_11 ). The network was modeled by a composite random graph formed by the intersection of random key graphs @math (induced by the scheme) with graphs @math (induced by the uniform on-off channel model). Our paper generalizes this model to a heterogeneous setting where different nodes could be given different numbers of keys depending on their respective classes, and the availability of a wireless channel between two nodes depends on their respective classes. Hence, our model highly resembles emerging wireless sensor networks which are essentially complex and heterogeneous.
"cite_N": [
"@cite_33",
"@cite_11"
],
"mid": [
"2963495066",
"2555492793",
"2196704028",
"2512707330"
],
"abstract": [
"We investigate the connectivity of a wireless sensor network (WSN) secured by the heterogeneous key predistribution scheme under an independent on off channel model. The heterogeneous scheme induces an inhomogeneous random key graph, denoted by @math and the on off channel model induces an Erdős-Renyi graph, denoted by @math . Hence, the overall random graph modeling the WSN is obtained by the intersection of @math and @math . We present conditions on how to scale the parameters of the intersecting graph with respect to the network size @math such that the graph i) has no isolated nodes and ii) is connected, both with high probability (whp) as the number of nodes gets large. Our results are supported by a simulation study demonstrating that i) despite their asymptotic nature, our results can in fact be useful in designing finite -node WSNs so that they achieve secure connectivity whp; and ii) despite the simplicity of the on off communication model, the probability of connectivity in the resulting WSN approximates very well the case where the disk model is used.",
"We consider secure and reliable connectivity in wireless sensor networks that utilize the heterogeneous random key predistribution scheme. We model the unreliability of wireless links by an on off channel model that induces an Erdős-Renyi graph, while the heterogeneous scheme induces an inhomogeneous random key graph. The overall network can thus be modeled by the intersection of both graphs. We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model, so that with high probability: i) all of its nodes are connected to at least @math other nodes, i.e., the minimum node degree of the graph is no less than @math , and ii) the graph is @math -connected, i.e., the graph remains connected even if any @math nodes leave the network. These results are shown to complement and generalize several previous results in the literature. We also present numerical results to support our findings in the finite-node regime. Finally, we demonstrate via simulations that our results are also useful when the on off channel model is replaced with the more realistic disk communication model .",
"We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph.",
"We consider wireless sensor networks under a heterogeneous random key predistribution scheme and an on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yagan - as an extension to the Eschenauer and Gligor scheme - for the cases when the network consists of sensor nodes with varying level of resources and or connectivity requirements, e.g., regular nodes vs. cluster heads. The network is modeled by the intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Renyi graph (induced by the on off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support our results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring the minimum node degree being no less than k."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős-Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime.
| In @cite_7 , Yağan considered the connectivity of wireless sensor networks secured by the heterogeneous random key predistribution scheme under the full visibility assumption, i.e., all wireless channels are available and reliable, hence the only condition for two nodes to be adjacent is to share a key. It is clear that the full visibility assumption is not likely to hold in most practical deployments of wireless sensor networks as the wireless medium is typically unreliable. Our paper extends the results given in @cite_7 to more practical scenarios where the wireless connectivity is taken into account through the heterogeneous on-off channel model. In fact, by setting @math for @math and each @math (i.e., by assuming that all wireless channels are on ), our results reduce to those given in @cite_7 .
"cite_N": [
"@cite_7"
],
"mid": [
"2512707330",
"2555492793",
"2963495066",
"2165958223"
],
"abstract": [
"We consider wireless sensor networks under a heterogeneous random key predistribution scheme and an on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yagan - as an extension to the Eschenauer and Gligor scheme - for the cases when the network consists of sensor nodes with varying level of resources and or connectivity requirements, e.g., regular nodes vs. cluster heads. The network is modeled by the intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Renyi graph (induced by the on off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support our results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring the minimum node degree being no less than k.",
"We consider secure and reliable connectivity in wireless sensor networks that utilize the heterogeneous random key predistribution scheme. We model the unreliability of wireless links by an on off channel model that induces an Erdős-Renyi graph, while the heterogeneous scheme induces an inhomogeneous random key graph. The overall network can thus be modeled by the intersection of both graphs. We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model, so that with high probability: i) all of its nodes are connected to at least @math other nodes, i.e., the minimum node degree of the graph is no less than @math , and ii) the graph is @math -connected, i.e., the graph remains connected even if any @math nodes leave the network. These results are shown to complement and generalize several previous results in the literature. We also present numerical results to support our findings in the finite-node regime. Finally, we demonstrate via simulations that our results are also useful when the on off channel model is replaced with the more realistic disk communication model .",
"We investigate the connectivity of a wireless sensor network (WSN) secured by the heterogeneous key predistribution scheme under an independent on off channel model. The heterogeneous scheme induces an inhomogeneous random key graph, denoted by @math and the on off channel model induces an Erdős-Renyi graph, denoted by @math . Hence, the overall random graph modeling the WSN is obtained by the intersection of @math and @math . We present conditions on how to scale the parameters of the intersecting graph with respect to the network size @math such that the graph i) has no isolated nodes and ii) is connected, both with high probability (whp) as the number of nodes gets large. Our results are supported by a simulation study demonstrating that i) despite their asymptotic nature, our results can in fact be useful in designing finite -node WSNs so that they achieve secure connectivity whp; and ii) despite the simplicity of the on off communication model, the probability of connectivity in the resulting WSN approximates very well the case where the disk model is used.",
"We investigate the secure connectivity of wireless sensor networks under the random key distribution scheme of Eschenauer and Gligor. Unlike recent work which was carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on off channels. We present conditions on how to scale the model parameters so that the network: 1) has no secure node which is isolated and 2) is securely connected, both with high probability when the number of sensor nodes becomes large. The results are given in the form of full zero-one laws, and constitute the first complete analysis of the EG scheme under non-full visibility. Through simulations, these zero-one laws are shown to be valid also under a more realistic communication model (i.e., the disk model). The relations to the Gupta and Kumar's conjecture on the connectivity of geometric random graphs with randomly deleted edges are also discussed."
]
} |
1908.09826 | 2969846162 | In this paper, we investigate the secure connectivity of wireless sensor networks utilizing the heterogeneous random key predistribution scheme, where each sensor node is classified as class- @math with probability @math for @math with @math and @math . A class- @math sensor is given @math cryptographic keys selected uniformly at random from a key pool of size @math . After deployment, two nodes can communicate securely if they share at least one cryptographic key. We consider the wireless connectivity of the network using a heterogeneous on-off channel model, where the channel between a class- @math node and a class- @math node is on (respectively, off) with probability @math (respectively, @math ) for @math . Collectively, two sensor nodes are adjacent if they i) share a cryptographic key and ii) have a wireless channel in between that is on. We model the overall network using a composite random graph obtained by the intersection of inhomogeneous random key graphs (IRKG) @math with inhomogeneous Erdős-Rényi graphs (IERG) @math . The former graph is naturally induced by the heterogeneous random key predistribution scheme, while the latter is induced by the heterogeneous on-off channel model. More specifically, two nodes are adjacent in the composite graph if they are i) adjacent in the IRKG, i.e., share a cryptographic key and ii) adjacent in the IERG, i.e., have an available wireless channel. We investigate the connectivity of the composite random graph and present conditions (in the form of zero-one laws) on how to scale its parameters so that it i) has no secure node which is isolated and ii) is securely connected, both with high probability when the number of nodes gets large. We also present numerical results to support these zero-one laws in the finite-node regime. | In comparison with the existing literature on similar models, our result can be seen to extend the work by Eletreby and Yağan in @cite_8 (respectively, @cite_18 ).
Therein, the authors established a zero-one law for the @math -connectivity (respectively, @math -connectivity) of @math , i.e., for a wireless sensor network under the heterogeneous key predistribution scheme and a uniform on-off channel model. Although these results form a crucial starting point towards the analysis of the heterogeneous key predistribution scheme under a wireless connectivity model, they are limited to the uniform on-off channel model where all channels are on (respectively, off) with the same probability @math (respectively, @math ). The heterogeneous on-off channel model accounts for the fact that different nodes could have different radio capabilities, or could be deployed in locations with different channel characteristics. In addition, it offers the flexibility of modeling several interesting scenarios, such as when nodes of the same type are more (or less) likely to be adjacent to one another than with nodes belonging to other classes. Indeed, by setting @math for @math and each @math , our results reduce to those given in @cite_8 .
"cite_N": [
"@cite_18",
"@cite_8"
],
"mid": [
"2555492793",
"2512707330",
"2963495066",
"2196704028"
],
"abstract": [
"We consider secure and reliable connectivity in wireless sensor networks that utilize the heterogeneous random key predistribution scheme. We model the unreliability of wireless links by an on off channel model that induces an Erdős-Renyi graph, while the heterogeneous scheme induces an inhomogeneous random key graph. The overall network can thus be modeled by the intersection of both graphs. We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model, so that with high probability: i) all of its nodes are connected to at least @math other nodes, i.e., the minimum node degree of the graph is no less than @math , and ii) the graph is @math -connected, i.e., the graph remains connected even if any @math nodes leave the network. These results are shown to complement and generalize several previous results in the literature. We also present numerical results to support our findings in the finite-node regime. Finally, we demonstrate via simulations that our results are also useful when the on off channel model is replaced with the more realistic disk communication model .",
"We consider wireless sensor networks under a heterogeneous random key predistribution scheme and an on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yagan - as an extension to the Eschenauer and Gligor scheme - for the cases when the network consists of sensor nodes with varying level of resources and/or connectivity requirements, e.g., regular nodes vs. cluster heads. The network is modeled by the intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Rényi graph (induced by the on/off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support our results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring the minimum node degree being no less than k.",
"We investigate the connectivity of a wireless sensor network (WSN) secured by the heterogeneous key predistribution scheme under an independent on/off channel model. The heterogeneous scheme induces an inhomogeneous random key graph, denoted by @math and the on/off channel model induces an Erdős-Rényi graph, denoted by @math . Hence, the overall random graph modeling the WSN is obtained by the intersection of @math and @math . We present conditions on how to scale the parameters of the intersecting graph with respect to the network size @math such that the graph i) has no isolated nodes and ii) is connected, both with high probability (whp) as the number of nodes gets large. Our results are supported by a simulation study demonstrating that i) despite their asymptotic nature, our results can in fact be useful in designing finite-node WSNs so that they achieve secure connectivity whp; and ii) despite the simplicity of the on/off communication model, the probability of connectivity in the resulting WSN approximates very well the case where the disk model is used.",
"We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph."
]
} |
1908.10017 | 2970793270 | The state-of-art DNN structures involve intensive computation and high memory storage. To mitigate the challenges, the memristor crossbar array has emerged as an intrinsically suitable matrix computation and low-power acceleration framework for DNN applications. However, the high accuracy solution for extreme model compression on memristor crossbar array architecture is still waiting for unraveling. In this paper, we propose a memristor-based DNN framework which combines both structured weight pruning and quantization by incorporating alternating direction method of multipliers (ADMM) algorithm for better pruning and quantization performance. We also discover the non-optimality of the ADMM solution in weight pruning and the unused data path in a structured pruned model. Motivated by these discoveries, we design a software-hardware co-optimization framework which contains the first proposed Network Purification and Unused Path Removal algorithms targeting on post-processing a structured pruned model after ADMM steps. By taking memristor hardware constraints into our whole framework, we achieve extreme high compression ratio on the state-of-art neural network structures with minimum accuracy loss. For quantizing structured pruned model, our framework achieves nearly no accuracy loss after quantizing weights to 8-bit memristor weight representation. We share our models at anonymous link this https URL. | Heuristic weight pruning methods @cite_4 are widely used in neuromorphic computing designs to reduce the weight storage and computing delay @cite_20 . @cite_20 implemented weight pruning techniques on a neuromorphic computing system using irregular pruning, which caused unbalanced workloads, greater circuit overheads, and extra memory requirements for indices. To overcome these limitations, @cite_21 proposed group connection deletion, which structurally prunes connections to reduce routing congestion between memristor crossbar arrays. | {
"cite_N": [
"@cite_21",
"@cite_4",
"@cite_20"
],
"mid": [
"2884180697",
"2798170643",
"2657126969",
"2515385951"
],
"abstract": [
"Weight pruning methods of deep neural networks have been demonstrated to achieve a good model pruning ratio without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, the pruning ratio and GPU acceleration are limited when accuracy needs to be maintained. In this work, we overcome pruning ratio and GPU acceleration limitations by proposing a unified, systematic framework of structured weight pruning for DNNs, named ADAM-ADMM. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well as non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. A significant improvement in structured weight pruning ratio is achieved without loss of accuracy, along with fast convergence rate. With a small sparsity degree of 33.3% on the convolutional layers, we achieve 1.64% accuracy enhancement for the AlexNet model. This is obtained by mitigation of overfitting. Without loss of accuracy on the AlexNet model, we achieve 2.58x and 3.65x average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 2.77x and 7.5x when allowing a moderate accuracy loss of 2%. In this case the model compression for convolutional layers is 13.2x, corresponding to 10.5x CPU speedup. Our experiments on ResNet model and on other datasets like UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework. Our models and codes are released at this https URL",
"Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate.",
"As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88%, 82%, and 53%. In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks."
]
} |
1908.10017 | 2970793270 | The state-of-art DNN structures involve intensive computation and high memory storage. To mitigate the challenges, the memristor crossbar array has emerged as an intrinsically suitable matrix computation and low-power acceleration framework for DNN applications. However, the high accuracy solution for extreme model compression on memristor crossbar array architecture is still waiting for unraveling. In this paper, we propose a memristor-based DNN framework which combines both structured weight pruning and quantization by incorporating alternating direction method of multipliers (ADMM) algorithm for better pruning and quantization performance. We also discover the non-optimality of the ADMM solution in weight pruning and the unused data path in a structured pruned model. Motivated by these discoveries, we design a software-hardware co-optimization framework which contains the first proposed Network Purification and Unused Path Removal algorithms targeting on post-processing a structured pruned model after ADMM steps. By taking memristor hardware constraints into our whole framework, we achieve extreme high compression ratio on the state-of-art neural network structures with minimum accuracy loss. For quantizing structured pruned model, our framework achieves nearly no accuracy loss after quantizing weights to 8-bit memristor weight representation. We share our models at anonymous link this https URL. | Weight quantization can mitigate hardware imperfections of memristors, including state drift and process variations caused by the imperfect fabrication process or by the device features themselves @cite_3 @cite_10 . @cite_1 presented a technique to reduce the overhead of Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) in resistive random-access memory (ReRAM) neuromorphic computing systems. They first normalized the data, and then quantized the intermediary data to 1-bit values.
This can be directly used as the analog input for the ReRAM crossbar and, hence, avoids the need for DACs. | {
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_3"
],
"mid": [
"2098725264",
"1801517207",
"2154300649",
"2408724663"
],
"abstract": [
"As communication systems scale up in speed and bandwidth, the cost and power consumption of high-precision (e.g., 8-12 bits) analog-to-digital conversion (ADC) becomes the limiting factor in modern transceiver architectures based on digital signal processing. In this work, we explore the impact of lowering the precision of the ADC on the performance of the communication link. Specifically, we evaluate the communication limits imposed by low-precision ADC (e.g., 1-3 bits) for transmission over the real discrete-time additive white Gaussian noise (AWGN) channel, under an average power constraint on the input. For an ADC with K quantization bins (i.e., a precision of log2 K bits), we show that the input distribution need not have any more than K+1 mass points to achieve the channel capacity. For 2-bin (1-bit) symmetric quantization, this result is tightened to show that binary antipodal signaling is optimum for any signal-to-noise ratio (SNR). For multi-bit quantization, a dual formulation of the channel capacity problem is used to obtain tight upper bounds on the capacity. The cutting-plane algorithm is employed to compute the capacity numerically, and the results obtained are used to make the following encouraging observations: (a) up to a moderately high SNR of 20 dB, 2-3 bit quantization results in only 10-20% reduction of spectral efficiency compared to unquantized observations, (b) standard equiprobable pulse amplitude modulated input with quantizer thresholds set to implement maximum likelihood hard decisions is asymptotically optimum at high SNR, and works well at low to moderate SNRs as well.",
"This paper considers automatic gain control (AGC) and quantization for multiple-input multiple-output (MIMO) wireless systems. We examine the effect of clipping and quantization on capacity and bit error rate (BER). We find that even quite low resolution quantizers can perform close to the capacity of ideal unquantized systems. Results are presented for BPSK and M-ary QAM, and for 2×2, 3×3, and 4×4 MIMO configurations. We find that in each case less than 6 quantizer bits are required to achieve 98% of unquantized capacity for SNRs above 15 dB",
"A skip and fill algorithm is developed to digitally self-calibrate pipelined analog-to-digital converters (ADC's) in real time. The proposed digital calibration technique is applicable to capacitor-ratioed multiplying digital-to-analog converters (MDACs) commonly used in multistep or pipelined ADCs. This background calibration process can replace, in effect, a trimming procedure usually done in the factory with a hidden electronic calibration. Unlike other self-calibration techniques working in the foreground, the proposed technique is based on the concept of skipping conversion cycles randomly but filling in data later by nonlinear interpolation. This opens up the feasibility of digitally implementing calibration hardware and simplifying the task of self-calibrating multistep or pipelined ADCs. The proposed method improves the performance of the inherently fast ADCs by maintaining simple system architectures. To measure errors resulting from capacitor mismatch, op amp DC gain, offset, and switch feedthrough in real time, the calibration test signal is injected in place of the input signal using a split-reference injection technique. Ultimately, the missing signal within two-thirds of the Nyquist bandwidth is recovered with 16-b accuracy using a forty-fourth order polynomial interpolation, behaving essentially as an FIR filter.",
"Convolutional Neural Network (CNN) is a powerful technique widely used in computer vision area, which also demands much more computations and memory resources than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential on neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of RRAM-based CNN design due to the large amount of intermediate data in CNN. In this paper, we propose an energy efficient structure for RRAM-based CNN. Based on the analysis of data distribution, a quantization method is proposed to transfer the intermediate data into 1 bit and eliminate DACs. An energy efficient structure using input data as selection signals is proposed to reduce the ADC cost for merging results of multiple crossbars. The experimental results show that the proposed method and structure can save 80% area and more than 95% energy while maintaining the same or comparable classification accuracy of CNN on MNIST."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting with classes in test phase typically in training phase. However, real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the "independent and identically distributed" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on the predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | : Open set recognition was first introduced in @cite_19 , which considers the problem of detecting unseen classes that are never seen in the training phase @cite_2 @cite_27 . Many open-set recognition methods based on SVM @cite_31 @cite_24 and NCM @cite_9 have since been proposed, but all are built on shallow models for classification.
@cite_19 formulated the problem of open set recognition for the static one-vs-all learning scenario by balancing open space risk while minimizing empirical error, going on to extend the work to multi-class settings by introducing a compact abating probability model @cite_34 . For the scalability problem, @cite_7 proposed the use of a scalable Weibull-based calibration for hypothesis generation to model matching scores, but did not address its use for the general recognition problem. @cite_9 proposed a novel detection method for deep model architectures by introducing an OpenMax layer, while @cite_16 proposed a one-class classification method based on a DCNN which can be used as a novelty detector and outlier detector for a single known class. However, none have addressed the problem of how to incrementally update the model after a new class has been recognized. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_16",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_34"
],
"mid": [
"2248269543",
"2018459374",
"2895752198",
"2119880843"
],
"abstract": [
"In this paper, we propose a novel multiclass classifier for the open-set recognition scenario. This scenario is the one in which there are no a priori training samples for some classes that might appear during testing. Usually, many applications are inherently open set. Consequently, successful closed-set solutions in the literature are not always suitable for real-world recognition problems. The proposed open-set classifier extends upon the Nearest-Neighbor (NN) classifier. Nearest neighbors are simple, parameter independent, multiclass, and widely used for closed-set problems. The proposed Open-Set NN (OSNN) method incorporates the ability of recognizing samples belonging to classes that are unknown at training time, being suitable for open-set recognition. In addition, we explore evaluation measures for open-set problems, properly measuring the resilience of methods to unknown classes during testing. For validation, we consider large freely-available benchmarks with different open-set recognition regimes and demonstrate that the proposed OSNN significantly outperforms their counterparts in the literature.",
"Real-world tasks in computer vision often touch upon open set recognition: multi-class recognition with incomplete knowledge of the world and many unknown inputs. Recent work on this problem has proposed a model incorporating an open space risk term to account for the space beyond the reasonable support of known classes. This paper extends the general idea of open space risk limiting classification to accommodate non-linear classifiers in a multiclass setting. We introduce a new open set recognition model called compact abating probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical extreme value theory for score calibration with one-class and binary support vector machines. Our experiments show that the W-SVM is significantly better for open set object detection and OCR problems when compared to the state-of-the-art for the same tasks.",
"In open set recognition, a classifier must label instances of known classes while detecting instances of unknown classes not encountered during training. To detect unknown classes while still generalizing to new instances of existing classes, we introduce a dataset augmentation technique that we call counterfactual image generation. Our approach, based on generative adversarial networks, generates examples that are close to training set examples yet do not belong to any training category. By augmenting training with examples generated by this optimization, we can reformulate open set recognition as classification with one additional class, which includes the set of novel and unknown examples. Our approach outperforms existing open set recognition algorithms on a selection of image classification tasks.",
"To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of “closed set” recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is “open set” recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel “1-vs-set machine,” which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting with classes in test phase typically in training phase. However, real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the "independent and identically distributed" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on the predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | : Different from the incremental learning problem, other researchers have proposed tree-based classification methods to address the scalability of object categories in large-scale visual recognition challenges @cite_6 @cite_10 @cite_18 @cite_4 . Recent advances in the deep learning domain of scalable learning @cite_1 @cite_13 have resulted in state-of-the-art performance, which is extremely useful when the goal is to maximize classification and recognition performance.
These systems assume a priori availability of comprehensive training data containing both images and categories. However, adapting such methods to a dynamic learning scenario becomes extremely challenging. Adding object categories requires retraining the entire system, which could be unfeasible for many applications. As a result, these methods are scalable but not incremental. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_10"
],
"mid": [
"2166049352",
"2155904486",
"1929903369",
"2340646384"
],
"abstract": [
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.",
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.",
"Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.",
"As we enter into the big data age and an avalanche of images has become readily available, recognition systems face the need to move from closed lab settings, where the number of classes and training data are fixed, to dynamic scenarios where the number of categories to be recognized grows continuously over time, as well as new data providing useful information to update the system. Recent attempts, like the open world recognition framework, tried to inject dynamics into the system by detecting new unknown classes and adding them incrementally, while at the same time continuously updating the models for the known classes. In this paper we argue that to properly capture the intrinsic dynamic of open world recognition, it is necessary to add to these aspects (a) the incremental learning of the underlying metric, (b) the incremental estimate of confidence thresholds for the unknown classes, and (c) the use of local learning to precisely describe the space of classes. We extend three existing metric learning algorithms towards these goals by using online metric learning. Experimentally we validate our approach on two large-scale datasets in different learning scenarios. For all these scenarios our proposed methods outperform their non-online counterparts. We conclude that local and online learning is important to capture the full dynamics of open world recognition."
]
} |
1908.09931 | 2971181831 | At present, object recognition studies are mostly conducted in a closed lab setting, with the classes in the test phase typically seen in the training phase. However, the real-world problem is far more challenging because: i) new classes unseen in the training phase can appear when predicting; ii) discriminative features need to evolve when new classes emerge in real time; and iii) instances in new classes may not follow the "independent and identically distributed" (iid) assumption. Most existing work only aims to detect the unknown classes and is incapable of continuing to learn newer classes. Although a few methods consider both detecting and including new classes, all are based on predefined handcrafted features that cannot evolve and are out-of-date for characterizing emerging classes. Thus, to address the above challenges, we propose a novel generic end-to-end framework consisting of a dynamic cascade of classifiers that incrementally learn their dynamic and inherent features. The proposed method injects dynamic elements into the system by detecting instances from unknown classes, while at the same time incrementally updating the model to include the new classes. The resulting cascade tree grows by adding a new leaf node classifier once a new class is detected, and the discriminative features are updated via an end-to-end learning strategy. Experiments on two real-world datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. | Open world recognition considers both detection and learning to distinguish the new classes. proposed an NCM learning algorithm that relies on the estimation of a determined threshold in conjunction with the threshold counts on some known new classes @cite_38 . For a more practical situation, proposed an online-learning approach that involves the NBC classifier instead of NCM @cite_28 , while @cite_8 proposed an online learning method for streaming data where new classes arrive continuously. 
It is worth noting that Bayesian non-parametric models @cite_32 @cite_12 are not related to our problem. Though they were originally proposed to identify mixed components or clusters in the test data that may cover unseen classes, their clusters are not themselves classes and multiple clusters must be mapped to one class manually. | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_12"
],
"mid": [
"2340646384",
"2248269543",
"2808523546",
"2895752198"
],
"abstract": [
"As we enter into the big data age and an avalanche of images has become readily available, recognition systems face the need to move from closed lab settings, where the number of classes and training data are fixed, to dynamic scenarios where the number of categories to be recognized grows continuously over time, as well as new data providing useful information to update the system. Recent attempts, like the open world recognition framework, tried to inject dynamics into the system by detecting new unknown classes and adding them incrementally, while at the same time continuously updating the models for the known classes. In this paper we argue that to properly capture the intrinsic dynamic of open world recognition, it is necessary to add to these aspects (a) the incremental learning of the underlying metric, (b) the incremental estimate of confidence thresholds for the unknown classes, and (c) the use of local learning to precisely describe the space of classes. We extend three existing metric learning algorithms towards these goals by using online metric learning. Experimentally we validate our approach on two large-scale datasets in different learning scenarios. For all these scenarios our proposed methods outperform their non-online counterparts. We conclude that local and online learning is important to capture the full dynamics of open world recognition.",
"In this paper, we propose a novel multiclass classifier for the open-set recognition scenario. This scenario is the one in which there are no a priori training samples for some classes that might appear during testing. Usually, many applications are inherently open set. Consequently, successful closed-set solutions in the literature are not always suitable for real-world recognition problems. The proposed open-set classifier extends upon the Nearest-Neighbor (NN) classifier. Nearest neighbors are simple, parameter independent, multiclass, and widely used for closed-set problems. The proposed Open-Set NN (OSNN) method incorporates the ability of recognizing samples belonging to classes that are unknown at training time, being suitable for open-set recognition. In addition, we explore evaluation measures for open-set problems, properly measuring the resilience of methods to unknown classes during testing. For validation, we consider large freely-available benchmarks with different open-set recognition regimes and demonstrate that the proposed OSNN significantly outperforms their counterparts in the literature.",
"This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs with each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of the NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, the label refinement using forced alignment and the sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing quite confusing classes due to the large vocabulary of Chinese characters, the NN-based classifier should output 19900 HMM states as the classification units via a high-resolution modeling within each character. On the ICDAR 2013 competition task of the CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by making a good trade-off between the computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM can achieve a best published CER of 3.53%.",
"In open set recognition, a classifier must label instances of known classes while detecting instances of unknown classes not encountered during training. To detect unknown classes while still generalizing to new instances of existing classes, we introduce a dataset augmentation technique that we call counterfactual image generation. Our approach, based on generative adversarial networks, generates examples that are close to training set examples yet do not belong to any training category. By augmenting training with examples generated by this optimization, we can reformulate open set recognition as classification with one additional class, which includes the set of novel and unknown examples. Our approach outperforms existing open set recognition algorithms on a selection of image classification tasks."
]
} |
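The NCM-style open world recognition methods discussed in the row above share a simple core: assign a sample to its nearest class mean, and reject it as "unknown" when even the nearest mean is too far away. The sketch below is my own minimal illustration of that idea, not any of the cited algorithms; the data, the threshold value, and the use of -1 as the "unknown" label are invented for the example.

```python
import numpy as np

def ncm_fit(X, y):
    """Compute one mean vector per known class (nearest class mean)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def ncm_predict(means, x, threshold):
    """Return the nearest class, or -1 ('unknown') if the closest
    class mean is farther away than the rejection threshold."""
    dists = {c: np.linalg.norm(x - m) for c, m in means.items()}
    c_best = min(dists, key=dists.get)
    return c_best if dists[c_best] <= threshold else -1

# Two known classes around (0, 0) and (5, 5); a far-away point is rejected.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
means = ncm_fit(X, y)
print(ncm_predict(means, np.array([0.1, 0.0]), threshold=1.0))   # -> 0
print(ncm_predict(means, np.array([20.0, 20.0]), threshold=1.0)) # -> -1 (unknown)
```

An incremental variant would simply update the running mean of a class as new instances arrive, which is what makes this family of methods attractive for open-world settings.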
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition into @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | Selecting or clustering points of a PF has been studied with applications to MOO algorithms. Firstly, a motivation is to store representative elements of a large PF (exponential sizes of PF are possible @cite_45 ) for exact methods or population meta-heuristics. Maximizing the quality of discrete representations of Pareto sets was studied with the hypervolume measure in the Hypervolume Subset Selection (HSS) problem @cite_31 @cite_43 . Secondly, a crucial issue in the design of population meta-heuristics for MOO problems is to select relevant solutions for operators like crossover or mutation phases in evolutionary algorithms @cite_2 @cite_38 . Selecting knee-points is another known approach for such goals @cite_42 . | {
"cite_N": [
"@cite_38",
"@cite_42",
"@cite_43",
"@cite_45",
"@cite_2",
"@cite_31"
],
"mid": [
"2592415617",
"2123497782",
"2434562129",
"133511943"
],
"abstract": [
"Recently, many meta-heuristic algorithms have been proposed to serve as the basis of a t-way test generation strategy (where t indicates the interaction strength) including Genetic Algorithms (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA), Cuckoo Search (CS), Particle Swarm Optimization (PSO), and Harmony Search (HS). Although useful, meta-heuristic algorithms that make up these strategies often require specific domain knowledge in order to allow effective tuning before good quality solutions can be obtained. Hyper-heuristics provide an alternative methodology to meta-heuristics which permit adaptive selection and or generation of meta-heuristics automatically during the search process. This paper describes our experience with four hyper-heuristic selection and acceptance mechanisms namely Exponential Monte Carlo with counter (EMCQ), Choice Function (CF), Improvement Selection Rules (ISR), and newly developed Fuzzy Inference Selection (FIS), using the t-way test generation problem as a case study. Based on the experimental results, we offer insights on why each strategy differs in terms of its performance.",
"Recently, the hybridization between evolutionary algorithms and other metaheuristics has shown very good performances in many kinds of multiobjective optimization problems (MOPs), and thus has attracted considerable attention from both academic and industrial communities. In this paper, we propose a novel hybrid multiobjective evolutionary algorithm (HMOEA) for real-valued MOPs by incorporating the concepts of personal best and global best in particle swarm optimization and multiple crossover operators to update the population. One major feature of the HMOEA is that each solution in the population maintains a nondominated archive of personal best and the update of each solution is in fact the exploration of the region between a selected personal best and a selected global best from the external archive. Before the exploration, a self-adaptive selection mechanism is developed to determine an appropriate crossover operator from several candidates so as to improve the robustness of the HMOEA for different instances of MOPs. Besides the selection of global best from the external archive, the quality of the external archive is also considered in the HMOEA through a propagating mechanism. Computational study on the biobjective and three-objective benchmark problems shows that the HMOEA is competitive or superior to previous multiobjective algorithms in the literature.",
"Given a nondominated point set of size and a suitable reference point , the Hypervolume Subset Selection Problem HSSP consists of finding a subset of size that maximizes the hypervolume indicator. It arises in connection with multiobjective selection and archiving strategies, as well as Pareto-front approximation postprocessing for visualization and or interaction with a decision maker. Efficient algorithms to solve the HSSP are available only for the 2-dimensional case, achieving a time complexity of . In contrast, the best upper bound available for is . Since the hypervolume indicator is a monotone submodular function, the HSSP can be approximated to a factor of using a greedy strategy. In this article, greedy -time algorithms for the HSSP in 2 and 3 dimensions are proposed, matching the complexity of current exact algorithms for the 2-dimensional case, and considerably improving upon recent complexity results for this approximation problem.",
"Most problems encountered in practice involve the optimization of multiple criteria. Usually, some of them are conflicting such that no single solution is simultaneously optimal with respect to all criteria, but instead many incomparable compromise solutions exist. In recent years, evidence has accumulated showing that Evolutionary Algorithms (EAs) are effective means of finding good approximate solutions to such problems. One of the crucial parts of EAs consists of repeatedly selecting suitable solutions. In this process, the two key issues are as follows: first, a solution that is better than another solution in all objectives should be preferred over the latter. Second, the diversity of solutions should be supported, whereby often user preference dictates what constitutes a good diversity. The hypervolume offers one possibility to achieve the two aspects; for this reason, it has been gaining increasing importance in recent years. The present thesis investigates three central topics of the hypervolume that are still unsolved: 1: Although more and more EAs use the hypervolume as selection criterion, the resulting distribution of points favored by the hypervolume has scarcely been investigated so far. Many studies only speculate about this question, and in parts contradict one another. 2: The computational load of the hypervolume calculation sharply increases the more criteria are considered. This hindered so far the application of the hypervolume to problems with more than about five criteria. 3: Often a crucial aspect is to maximize the robustness of solutions, which is characterized by how far the properties of a solution can degenerate when implemented in practice. So far, no attempt has been made to consider robustness of solutions within hypervolume-based search."
]
} |
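The row above assumes the input points form a Pareto front of non-dominated solutions. As a small self-contained sketch (my own illustration, not taken from the cited works), the 2d Pareto front of a bi-objective minimization problem can be extracted with a sort-and-scan pass:

```python
def pareto_front_2d(points):
    """Non-dominated points for bi-objective minimization.
    Sort by the first objective, then keep each point that strictly
    improves the best second objective seen so far."""
    pts = sorted(points)                # ascending in objective 1
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:                  # strictly better on objective 2
            front.append((x, y))
            best_y = y
    return front

print(pareto_front_2d([(1, 5), (2, 3), (2, 4), (3, 4), (4, 1)]))
# -> [(1, 5), (2, 3), (4, 1)]
```

After sorting, a point is non-dominated exactly when its second objective beats everything seen before it, so the whole pass costs O(n log n) for the sort plus O(n) for the scan.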
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition into @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | The HSS problem, maximizing the representativity of @math solutions among a PF of size @math , is known to be NP-hard in dimension 3 (and greater) since @cite_10 . An exact algorithm in @math and a polynomial-time approximation scheme for any constant dimension @math are also provided in @cite_10 . The 2d case is solvable in polynomial time thanks to a DP algorithm with a complexity in @math time and @math space provided in @cite_31 . The time complexity of the DP algorithm was improved to @math by @cite_26 and to @math by @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_31",
"@cite_26",
"@cite_10"
],
"mid": [
"2001663593",
"1986948282",
"2952621790",
"2056492141"
],
"abstract": [
"We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degree” or bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . Håstad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that it is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9]. We present a new PCP construction, based on applying parallel repetition to the “inner verifier,” and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B = 2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a combination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16].",
"The traveling salesman problem (TSP) is a canonical NP-complete problem which is proved by Trevisan [SIAM J. Comput., 30 (2000), pp. 475--485] to be MAX-SNP hard even on high-dimensional Euclidean metrics. To circumvent this hardness, researchers have been developing approximation schemes for „simpler” instances of the problem. For instance, the algorithms of Arora and of Talwar show how to approximate TSP on low-dimensional metrics (for different notions of metric dimension). However, a feature of most current notions of metric dimension is that they are „local”: the definitions require every local neighborhood to be well-behaved. In this paper, we define a global notion of dimension that generalizes the popular notion of doubling dimension, but still allows some small dense regions; e.g., it allows some metrics that contain cliques of size @math . Given a metric with global dimension @math , we give a @math -approximation algorithm that runs in subexponential time, i.e., in $ (O(n^...",
"Let @math be a nontrivial @math -ary predicate. Consider a random instance of the constraint satisfaction problem @math on @math variables with @math constraints, each being @math applied to @math randomly chosen literals. Provided the constraint density satisfies @math , such an instance is unsatisfiable with high probability. The problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate @math supports a @math -wise uniform probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree @math (which runs in time @math ) cannot refute a random instance of @math . In particular, the polynomial-time SOS algorithm requires @math constraints to refute random instances of CSP @math when @math supports a @math -wise uniform distribution on its satisfying assignments. Together with recent work of [LRS15], our result also implies that polynomial-size semidefinite programming relaxations for refutation require at least @math constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate @math , they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of [AOW15] and [RRS16], this full three-way tradeoff is tight, up to lower-order factors.",
"Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial time algorithm that approximates the Shortest Vector Problem (SVP) in the ℓ_p norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^poly(log n)), we show that there is no polynomial-time algorithm with approximation ratio 2^((log n)^(1/2−ε)), where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH Codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of tensor product that we introduce. This enables us to boost the hardness factor to 2^((log n)^(1/2−ε))."
]
} |
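For intuition on the hypervolume measure and the HSS problem discussed in the row above, the sketch below computes the 2d hypervolume of a minimization front with respect to a reference point, and runs a greedy subset selection on it. This is only an illustration of the objective being optimized; it is neither the cited DP algorithms nor their complexities, and the reference point and data are made up.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2d non-dominated minimization front w.r.t. ref.
    Assumes every point is weakly below-left of ref."""
    rx, ry = ref
    pts = sorted(front)                 # ascending in objective 1
    area, prev_y = 0.0, ry
    for x, y in pts:                    # stack horizontal slabs of dominated area
        area += (rx - x) * (prev_y - y)
        prev_y = y
    return area

def greedy_hss(front, k, ref):
    """Greedily pick k points maximizing the hypervolume (an approximation
    of HSS; the hypervolume is submodular, so greedy gives a guarantee)."""
    chosen = []
    for _ in range(k):
        best = max((p for p in front if p not in chosen),
                   key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
    return sorted(chosen)

front = [(1, 5), (2, 3), (4, 1)]
print(hypervolume_2d(front, ref=(6, 6)))  # -> 17.0  (5 + 8 + 4)
print(greedy_hss(front, 2, ref=(6, 6)))   # -> [(2, 3), (4, 1)]
```

The exact 2d algorithms cited in the row replace this greedy loop with dynamic programming over the sorted front, which is what brings the complexity down to polynomial bounds.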
1908.09648 | 2969329859 | This paper is motivated by real-life applications of bi-objective optimization. Having many non-dominated solutions, one wishes to cluster the Pareto front using Euclidean distances. The p-center problems, both in the discrete and continuous versions, are proven solvable in polynomial time with a common dynamic programming algorithm. Having @math points to partition into @math clusters, the complexity is proven in @math (resp @math ) time and @math memory space for the continuous (resp discrete) @math -center problem. @math -center problems have complexities in @math . To speed up the algorithm, parallelization issues are discussed. A posteriori, these results allow an application inside multi-objective heuristics to archive partial Pareto Fronts. | We note that an affine 2d PF is a line in @math ; clustering is thus equivalent to the 1-dimensional case. One-dimensional K-means was proven solvable in polynomial time with a DP algorithm in @math time and @math space. This complexity was improved by a DP algorithm in @math time and @math space in @cite_5 . This is thus the complexity of K-means in an affine 2d PF. The specific case, already mentioned in the previous section, of the continuous p-center problem with centers on a straight line is more general than the case of an affine 2d PF, with a complexity proven in @math time and @math space by @cite_34 . 2d cases of clustering problems can also be seen as specific cases of three-dimensional (3d) PFs, namely affine 3d PFs. NP-hardness was proven for planar cases of clustering (k-means, p-median, k-medoids and p-center problems) @cite_44 @cite_1 , which implies that the considered clustering problems are also NP-hard for 3d PFs. | {
"cite_N": [
"@cite_44",
"@cite_5",
"@cite_34",
"@cite_1"
],
"mid": [
"2229238337",
"1984675750",
"1967960319",
"1976392590"
],
"abstract": [
"@d can be approximated up to (1 + e)-factor, for an arbitrary small e > 0, using the O(k e2)-rank approximation of A and a constant. This implies, for example, that the optimal k-means clustering of the rows of A is (1 + e)-approximated by an optimal k-means clustering of their projection on the O(k e2) first right singular vectors (principle components) of A. A (j, k)-coreset for projective clustering is a small set of points that yields a (1 + e)-approximation to the sum of squared distances from the n rows of A to any set of k affine subspaces, each of dimension at most j. Our embedding yields (0, k)-coresets of size O(k) for handling k-means queries, (j, 1)-coresets of size O(j) for PCA queries, and (j, k)-coresets of size (log n)O(jk) for any j, k ≥ 1 and constant e e (0, 1 2). Previous coresets usually have a size which is linearly or even exponentially dependent of d, which makes them useless when d n. Using our coresets with the merge-and-reduce approach, we obtain embarrassingly parallel streaming algorithms for problems such as k-means, PCA and projective clustering. These algorithms use update time per point and memory that is polynomial in log n and only linear in d. For cost functions other than squared Euclidean distances we suggest a simple recursive coreset construction that produces coresets of size",
"(MATH) Let P be a set of n points in @math k (P) denote the minimum over all k-flats @math of max peP Dist(p, ). We present an algorithm that computes, for any 0 k (P) from each point of P. The running time of the algorithm is dnO(k e5log(1 e)). The crucial step in obtaining this algorithm is a structural result that says that there is a near-optimal flat that lies in an affine subspace spanned by a small subset of points in P. The size of this \"core-set\" depends on k and e but is independent of the dimension.This approach also extends to the case where we want to find a k-flat that is close to a prescribed fraction of the entire point set, and to the case where we want to find j flats, each of dimension k, that are close to the point set. No efficient approximation schemes were known for these problems in high-dimensions, when k>1 or j>1.",
"This series of papers studies a geometric structure underlying Karmarkar's projective scaling algorithm for solving linear programming problems. A basic feature of the projective scaling algorithm is a vector field depending on the objective function which is defined on the interior of the polytope of feasible solutions of the linear program. The geometric structure studied is the set of trajectories obtained by integrating this vector field, which we call P-trajectories. We also study a related vector field, the affine scaling vector field, and its associated trajectories, called A-trajectories. The affine scaling vector field is associated to another linear programming algorithm, the affine scaling algorithm. Affine and projective scaling vector fields are each defined for linear programs of a special form, called strict standard form and canonical form, respectively. This paper derives basic properties of P-trajectories and A-trajectories. It reviews the projective and affine scaling algorithms, defines the projective and affine scaling vector fields, and gives differential equations for P-trajectories and A-trajectories. It shows that projective transformations map P-trajectories into P-trajectories. It presents Karmarkar's interpretation of A-trajectories as steepest descent paths of the objective function (c, x) with respect to the Riemannian geometry ds^2 = Σ_i dx_i^2 / x_i^2 restricted to the relative interior of the polytope of feasible solutions. P-trajectories of a canonical form linear program are radial projections of A-trajectories of an associated standard form linear program. As a consequence there is a polynomial time linear programming algorithm using the affine scaling vector field of this associated linear program: This algorithm is essentially Karmarkar's algorithm. These trajectories are studied in subsequent papers by two nonlinear changes of variables called Legendre transform coordinates and projective Legendre transform coordinates, respectively.
It will be shown that P-trajectories have an algebraic and a geometric interpretation. They are algebraic curves, and they are geodesics (actually distinguished chords) of a geometry isometric to a Hilbert geometry on a polytope combinatorially dual to the polytope of feasible solutions. The A-trajectories of strict standard form linear programs have similar interpretations: They are algebraic curves, and are geodesics of a geometry isometric to Euclidean geometry.",
"Multipolynomial resultants provide the most efficient methods known (in terms as asymptoticcomplexity) for solving certain systems of polynomial equations or eliminating variables (, 1988). The resultant of f\"1, ..., f\"n in K[x\"1,...,x\"m] will be a polynomial in m-n+1 variables which is zero when the system f\"1=0 has a solution in ^m ( the algebraic closure of K). Thus the resultant defines a projection operator from ^m to ^(^m^-^n^+^1^). However, resultants are only exact conditions for homogeneous systems, and in the affine case just mentioned, the resultant may be zero even if the system has no affine solution. This is most serious when the solution set of the system of polynomials has ''excess components'' (components of dimension >m-n), which may not even be affine, since these cause the resultant to vanish identically. In this paper we describe a projection operator which is not identically zero, but which is guaranteed to vanish on all the proper (dimension=m-n) components of the system f\"i=0. Thus it fills the role of a general affine projection operator or variable elimination ''black box'' which can be used for arbitrary polynomial systems. The construction is based on a generalisation of the characteristic polynomial of a linear system to polynomial systems. As a corollary, we give a single-exponential time method for finding all the isolated solution points of a system of polynomials, even in the presence of infinitely many solutions, at infinity or elsewhere."
]
} |
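The 1-dimensional K-means dynamic program mentioned in the row above exploits the fact that optimal clusters are contiguous segments of the sorted points. The sketch below is a textbook O(K·N²)-time variant of that DP (not the faster cited algorithms), using prefix sums so each segment cost is evaluated in O(1):

```python
def kmeans_1d(points, k):
    """Exact 1-D k-means by DP over contiguous segments of sorted points.
    Returns the optimal within-cluster sum of squared distances (SSE)."""
    xs = sorted(points)
    n = len(xs)
    # prefix sums of x and x^2 for O(1) segment cost queries
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, x in enumerate(xs):
        s[i + 1] = s[i] + x
        s2[i + 1] = s2[i] + x * x

    def cost(i, j):
        """SSE of the segment xs[i:j] around its mean."""
        m = j - i
        mu = (s[j] - s[i]) / m
        return (s2[j] - s2[i]) - m * mu * mu

    INF = float("inf")
    # dp[c][j] = best SSE of the first j sorted points using c clusters
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            dp[c][j] = min(dp[c - 1][i] + cost(i, j) for i in range(c - 1, j))
    return dp[k][n]

# Three natural groups: {1.0, 1.1}, {5.0, 5.2}, {9.0}; SSE = 0.005 + 0.02 + 0
print(kmeans_1d([1.0, 1.1, 5.0, 5.2, 9.0], 3))
```

The p-center DP of the paper has the same backbone, with the segment cost replaced by the segment's radius (continuous case) or the best discrete center's radius.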
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people explore new locations and enable advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can lead to location privacy breaches. Even worse, several privacy-preserving recommendation systems cannot utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: a transition pattern between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | The problem of successive POI recommendation has received much attention recently @cite_22 @cite_8 @cite_23 @cite_11 . To predict where a user will visit next, we need to consider the relationship between POIs. However, existing private recommendation methods @cite_1 @cite_7 @cite_13 only focus on learning the relationship between users and items. Our research direction is to incorporate the relationship between POIs by adapting the transfer learning approach @cite_18 @cite_12 @cite_16 . Most transfer learning methods in collaborative filtering utilize auxiliary domain data by sharing the latent matrix between two different domains. In our work, we use data from two domains of users' check-in history: visiting counts and POI transition patterns. We assume that the POI latent factors can bridge the user-POI and POI-POI relationships. 
To figure out the POI-POI relationship, we build a POI-POI matrix, which represents global preference transitions between two POIs. After that, in the learning process, users update their profile vectors based on the visiting counts, which describe the user-POI relationship. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2567312369",
"2964057288",
"2084677224",
"2044672016"
],
"abstract": [
"Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation models by about 20% in Precision@5 and Recall@5.",
"Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain model, recurrent neural network, and the word2vec framework are used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit the various temporal characteristics, for instance, \"work\" on weekday and \"entertainment\" on weekend. In this paper, we take this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework to model the sequential contexts, we propose a temporal POI embedding model to learn POI representations under some particular temporal state. The temporal POI embedding model captures the contextual check-in information in sequences and the various temporal characteristics on different days as well. Furthermore, we propose a new way to incorporate the geographical influence into the pairwise preference ranking method through discriminating the unvisited POIs according to geographical information. Then we develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves at least 20% on both datasets for all metrics.",
"Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.",
"With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy."
]
} |
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people to explore new locations and promote advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can lead to location privacy breaches. Even worse, several privacy-preserving recommendation systems could not utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: a transition pattern between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | Differential privacy @cite_15 is a rigorous privacy standard that requires that the output of a DP mechanism not reveal information specific to any individual. DP requires a trusted data curator who collects original data from users. Recently, a local version of DP has been proposed. In the local setting, each user perturbs his/her data and sends the perturbed data to the data curator. Since the original data never leave users' devices, LDP mechanisms have the benefit of not requiring a trusted data curator. Accordingly, many companies have attempted to adopt LDP to collect data from clients privately @cite_5 @cite_20 @cite_2 @cite_10 .
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2532967691",
"2950356050",
"2963629772",
"2930558539"
],
"abstract": [
"In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.",
"Local differential privacy (LDP) is a recently proposed privacy standard for collecting and analyzing data, which has been used, e.g., in the Chrome browser, iOS and macOS. In LDP, each user perturbs her information locally, and only sends the randomized version to an aggregator who performs analyses, which protects both the users and the aggregator against private information leaks. Although LDP has attracted much research attention in recent years, the majority of existing work focuses on applying LDP to complex data and or analysis tasks. In this paper, we point out that the fundamental problem of collecting multidimensional data under LDP has not been addressed sufficiently, and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under LDP. Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is at least no worse (and usually better) than existing solutions in terms of worst-case noise variance. Then, we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes, where our mechanisms always outperform existing solutions regarding worst-case noise variance. As a case study, we apply our solutions to build an LDP-compliant stochastic gradient descent algorithm (SGD), which powers many important machine learning tasks. Experiments using real datasets confirm the effectiveness of our methods, and their advantages over existing solutions.",
"Local differential privacy (LDP) is a recently proposed privacy standard for collecting and analyzing data, which has been used, e.g., in the Chrome browser, iOS and macOS. In LDP, each user perturbs her information locally, and only sends the randomized version to an aggregator who performs analyses, which protects both the users and the aggregator against private information leaks. Although LDP has attracted much research attention in recent years, the majority of existing work focuses on applying LDP to complex data and or analysis tasks. In this paper, we point out that the fundamental problem of collecting multidimensional data under LDP has not been addressed sufficiently, and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under LDP. Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is at least no worse (and usually better) than existing solutions in terms of worst-case noise variance. Then, we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes, where our mechanisms always outperform existing solutions regarding worst-case noise variance. As a case study, we apply our solutions to build an LDP-compliant stochastic gradient descent algorithm (SGD), which powers many important machine learning tasks. Experiments using real datasets confirm the effectiveness of our methods, and their advantages over existing solutions.",
"Local differential privacy (LDP), where each user perturbs her data locally before sending to an untrusted data collector, is a new and promising technique for privacy-preserving distributed data collection. The advantage of LDP is to enable the collector to obtain accurate statistical estimation on sensitive user data (e.g., location and app usage) without accessing them. However, existing work on LDP is limited to simple data types, such as categorical, numerical, and set-valued data. To the best of our knowledge, there is no existing LDP work on key-value data, which is an extremely popular NoSQL data model and the generalized form of set-valued and numerical data. In this paper, we study this problem of frequency and mean estimation on key-value data by first designing a baseline approach PrivKV within the same \"perturbation-calibration\" paradigm as existing LDP techniques. To address the poor estimation accuracy due to the clueless perturbation of users, we then propose two iterative solutions PrivKVM and PrivKVM+ that can gradually improve the estimation results through a series of iterations. An optimization strategy is also presented to reduce network latency and increase estimation accuracy by introducing virtual iterations in the collector side without user involvement. We verify the correctness and effectiveness of these solutions through theoretical analysis and extensive experimental results."
]
} |
1908.09485 | 2969944405 | A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people to explore new locations and promote advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can lead to location privacy breaches. Even worse, several privacy-preserving recommendation systems could not utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: a transition pattern between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history. | There are several works applying DP/LDP to recommendation systems @cite_1 @cite_7 @cite_13 . @cite_1 proposed an objective function perturbation method. In their work, a trusted data curator adds Laplace noise to the objective function so that the factorized item matrix satisfies DP. They also proposed a gradient perturbation method which can preserve the privacy of users' ratings from an untrusted data curator. @cite_7 proposed a probabilistic matrix factorization method with personalized differential privacy. They used a random sampling method to satisfy different users' privacy requirements. Then, they applied the objective function perturbation method to obtain the perturbed item matrix. Finally, @cite_13 proposed a new recommendation system under LDP. Specifically, users update their profile vectors locally and submit perturbed gradients in the iterative factorization process.
Further, to reduce the error incurred by perturbation, they adopted random projection for dimensionality reduction. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_7"
],
"mid": [
"2789607830",
"2930558539",
"2896723315",
"2532967691"
],
"abstract": [
"Recommender systems are collecting and analyzing user data to provide better user experience. However, several privacy concerns have been raised when a recommender knows user's set of items or their ratings. A number of solutions have been suggested to improve privacy of legacy recommender systems, but the existing solutions in the literature can protect either items or ratings only. In this paper, we propose a recommender system that protects both user's items and ratings. For this, we develop novel matrix factorization algorithms under local differential privacy (LDP). In a recommender system with LDP, individual users randomize their data themselves to satisfy differential privacy and send the perturbed data to the recommender. Then, the recommender computes aggregates of the perturbed data. This framework ensures that both user's items and ratings remain private from the recommender. However, applying LDP to matrix factorization typically raises utility issues with i) high dimensionality due to a large number of items and ii) iterative estimation algorithms. To tackle these technical challenges, we adopt dimensionality reduction technique and a novel binary mechanism based on sampling. We additionally introduce a factor that stabilizes the perturbed gradients. With MovieLens and LibimSeTi datasets, we evaluate recommendation accuracy of our recommender system and demonstrate that our algorithm performs better than the existing differentially private gradient descent algorithm for matrix factorization under stronger privacy requirements.",
"Local differential privacy (LDP), where each user perturbs her data locally before sending to an untrusted data collector, is a new and promising technique for privacy-preserving distributed data collection. The advantage of LDP is to enable the collector to obtain accurate statistical estimation on sensitive user data (e.g., location and app usage) without accessing them. However, existing work on LDP is limited to simple data types, such as categorical, numerical, and set-valued data. To the best of our knowledge, there is no existing LDP work on key-value data, which is an extremely popular NoSQL data model and the generalized form of set-valued and numerical data. In this paper, we study this problem of frequency and mean estimation on key-value data by first designing a baseline approach PrivKV within the same \"perturbation-calibration\" paradigm as existing LDP techniques. To address the poor estimation accuracy due to the clueless perturbation of users, we then propose two iterative solutions PrivKVM and PrivKVM+ that can gradually improve the estimation results through a series of iterations. An optimization strategy is also presented to reduce network latency and increase estimation accuracy by introducing virtual iterations in the collector side without user involvement. We verify the correctness and effectiveness of these solutions through theoretical analysis and extensive experimental results.",
"Probabilistic matrix factorization (PMF) plays a crucial role in recommendation systems. It requires a large amount of user data (such as user shopping records and movie ratings) to predict personal preferences, and thereby provides users high-quality recommendation services, which expose the risk of leakage of user privacy. Differential privacy, as a provable privacy protection framework, has been applied widely to recommendation systems. It is common that different individuals have different levels of privacy requirements on items. However, traditional differential privacy can only provide a uniform level of privacy protection for all users. In this paper, we mainly propose a probabilistic matrix factorization recommendation scheme with personalized differential privacy (PDP-PMF). It aims to meet users' privacy requirements specified at the item-level instead of giving the same level of privacy guarantees for all. We then develop a modified sampling mechanism (with bounded differential privacy) for achieving PDP. We also perform a theoretical analysis of the PDP-PMF scheme and demonstrate the privacy of the PDP-PMF scheme. In addition, we implement the probabilistic matrix factorization schemes both with traditional and with personalized differential privacy (DP-PMF, PDP-PMF) and compare them through a series of experiments. The results show that the PDP-PMF scheme performs well on protecting the privacy of each user and its recommendation quality is much better than the DP-PMF scheme.",
"In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings."
]
} |
1908.09550 | 2969760672 | In this paper, we propose a Customizable Architecture Search (CAS) approach to automatically generate a network architecture for semantic image segmentation. The generated network consists of a sequence of stacked computation cells. A computation cell is represented as a directed acyclic graph, in which each node is a hidden representation (i.e., feature map) and each edge is associated with an operation (e.g., convolution and pooling), which transforms data to a new layer. During the training, the CAS algorithm explores the search space for an optimized computation cell to build a network. The cells of the same type share one architecture but with different weights. In real applications, however, an optimization may need to be conducted under some constraints such as GPU time and model size. To this end, a cost corresponding to the constraint will be assigned to each operation. When an operation is selected during the search, its associated cost will be added to the objective. As a result, our CAS is able to search an optimized architecture with customized constraints. The approach has been thoroughly evaluated on the Cityscapes and CamVid datasets, and demonstrates superior performance over several state-of-the-art techniques. More remarkably, our CAS achieves 72.3% mIoU on the Cityscapes dataset at a speed of 108 FPS on an Nvidia TitanXp GPU. | Our work is inspired by @cite_0 @cite_35 . Unlike these methods, however, our work attempts to achieve a good tradeoff between system performance and the availability of computational resources. In other words, our algorithm is optimized under constraints from real applications. We notice that the recent DPC work @cite_25 is closely related to ours. It addresses the dense image prediction problem by searching for an efficient multi-scale architecture using performance-driven random search @cite_23 . Nevertheless, our work is different from @cite_25 .
First of all, we have different objectives. Instead of targeting high-quality segmentation as in @cite_25 , our solution is customizable to search for an optimized architecture which is constrained by the requirements of real applications. The generated architecture tries to keep a balance between quality and limited computational resources. Secondly, our solution optimizes the architecture of the whole network, including both the backbone and the multi-scale module, while @cite_25 focuses on multi-scale optimization. Finally, our method employs a lightweight network, which costs much less training time than that of @cite_25 . | {
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_25",
"@cite_23"
],
"mid": [
"2751689814",
"2951104886",
"300523764",
"2902251695"
],
"abstract": [
"We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach. The results are shown in the supplementary video at this https URL",
"This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.",
"We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multi-scale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even \"off-the-shelf\" multi-scale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.",
"Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. @math GPU hours) makes it difficult to search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grows linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6 @math fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2 @math faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design."
]
} |
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention in recent years, both from a theoretical and a practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented with, all representing connectivity constraints by means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on randomly generated instances, which suggests that a constraint generation approach might also be useful for other optimization problems dealing with connectivity constraints. Finally, we present the results of an enumeration algorithm for the problem. | This optimization problem is NP-hard @cite_9 , and was first introduced for the design of vacuum systems @cite_5 . It has since been studied independently in several different contexts, mainly dealing with network design: computer networks @cite_1 , social networks @cite_2 (more precisely modeling the communication paradigm @cite_6 @cite_16 @cite_4 ), but also other fields, such as auction systems @cite_3 and structural biology @cite_17 @cite_7 . Finally, we can mention the issue of hypergraph drawing, where, in addition to the connectivity constraints, one usually looks for graphs with additional properties (planarity, having a tree-like structure...) @cite_18 @cite_10 @cite_8 @cite_0 . This plethora of applications explains why this problem is known under different names, such as , or .
For a comprehensive survey of the theoretical work done on this problem, see @cite_11 and the references therein. | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2104060777",
"137863291",
"2161190897",
"2009182597"
],
"abstract": [
"This paper studies a multi-facility network synthesis problem, called the Two-level Network Design (TLND) problem, that arises in the topological design of hierarchical communication, transportation, and electric power distribution networks. We are given an undirected network containing two types of nodes---primary and secondary---and fixed costs for installing either a primary or a secondary facility on each edge. Primary nodes require higher grade interconnections than secondary nodes, using the more expensive primary facilities. The TLND problem seeks a minimum cost connected design that spans all the nodes, and connects primary nodes via edges containing primary facilities; the design can use either primary or secondary edges to connect the secondary nodes. The TLND problem generalizes the well-known Steiner network problem and the hierarchical network design problem. In this paper, we study the relationship between alternative model formulations for this problem (e.g., directed and undirected models), and analyze the worst-case performance for a composite TLND heuristic based upon solving embedded subproblems (e.g., minimum spanning tree and either Steiner tree or shortest path subproblems). When the ratio of primary to secondary costs is the same for all edges and when we solve the embedded subproblems optimally, the worst-case performance ratio of the composite TLND heuristic is 4/3. This result applies to the hierarchical network design problem with constant primary-to-secondary cost ratio since its subproblems are shortest path and minimum spanning tree problems. For more general situations, we express the TLND heuristic worst-case ratio in terms of the performance of any heuristic used to solve the embedded Steiner tree subproblem. A companion paper develops and tests a dual ascent procedure that generates tight upper and lower bounds on the optimal value of a multi-level extension of this problem.",
"Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. 
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.",
"We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. This provides an elegant and unifying algorithmic framework for a broad range of network design problems. We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. 
We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math .",
"Abstract This paper proposes a mathematical justification of the phenomenon of extreme congestion at a very limited number of nodes in very large networks. It is argued that this phenomenon occurs as a combination of the negative curvature property of the network together with minimum-length routing. More specifically, it is shown that in a large n-dimensional hyperbolic ball B of radius R viewed as a roughly similar model of a Gromov hyperbolic network, the proportion of traffic paths transiting through a small ball near the center is Θ(1), whereas in a Euclidean ball, the same proportion scales as Θ(1/R^(n−1)). This discrepancy persists for the traffic load, which at the center of the hyperbolic ball scales as volume^2(B), whereas the same traffic load scales as volume^(1+1/n)(B) in the Euclidean ball. This provides a theoretical justification of the experimental exponent discrepancy observed by Narayan and Saniee between traffic loads in Gromov-hyperbolic networks from the Rocketfuel database and synthetic ..."
]
} |
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention these recent years, both from a theoretical and practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented, all representing connectivity constraints by the means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on random generated instances, which suggests that a constraint generation approach might be also useful for other optimization problems dealing with connectivity constraints. At last, we present the results of an enumeration algorithm for the problem. | Concerning the implementation of algorithms, previous works mainly focused on approximation, greedy and other heuristic techniques @cite_4 . To the best of our knowledge, the first exact algorithm was designed by Agarwal et al. @cite_17 @cite_7 in the context of structural biology, where the sought graph represents the contact relations between proteins of a macro-molecule, which has to be inferred from a hypergraph constructed by chemical experiments and mass spectrometry. In this work, the authors define a Mixed Integer Linear Programming (MILP) formulation of the problem, representing the connectivity constraints by flows. They also provide an enumeration method using their algorithm as a black box, by iteratively adding constraints to the MILP in order to forbid already found solutions. 
Both their optimization and enumeration algorithms were tested on some real-life (from a structural biology perspective) instances for which the contact graph was already known. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_17"
],
"mid": [
"2171662214",
"2159944419",
"1968429800",
"2151690061"
],
"abstract": [
"Motivation: With the exponential growth of expression and protein–protein interaction (PPI) data, the frontier of research in systems biology shifts more and more to the integrated analysis of these large datasets. Of particular interest is the identification of functional modules in PPI networks, sharing common cellular function beyond the scope of classical pathways, by means of detecting differentially expressed regions in PPI networks. This requires on the one hand an adequate scoring of the nodes in the network to be identified and on the other hand the availability of an effective algorithm to find the maximally scoring network regions. Various heuristic approaches have been proposed in the literature. Results: Here we present the first exact solution for this problem, which is based on integer-linear programming and its connection to the well-known prize-collecting Steiner tree problem from Operations Research. Despite the NP-hardness of the underlying combinatorial problem, our method typically computes provably optimal subnetworks in large PPI networks in a few minutes. An essential ingredient of our approach is a scoring function defined on network nodes. We propose a new additive score with two desirable properties: (i) it is scalable by a statistically interpretable parameter and (ii) it allows a smooth integration of data from various sources. We apply our method to a well-established lymphoma microarray dataset in combination with associated survival data and the large interaction network of HPRD to identify functional modules by computing optimal-scoring subnetworks. In particular, we find a functional interaction module associated with proliferation over-expressed in the aggressive ABC subtype as well as modules derived from non-malignant by-stander cells. Availability: Our software is available freely for non-commercial purposes at http://www.planet-lisa.net. Contact: [email protected]",
"Motivation: Inferring networks of proteins from biological data is a central issue of computational biology. Most network inference methods, including Bayesian networks, take unsupervised approaches in which the network is totally unknown in the beginning, and all the edges have to be predicted. A more realistic supervised framework, proposed recently, assumes that a substantial part of the network is known. We propose a new kernel-based method for supervised graph inference based on multiple types of biological datasets such as gene expression, phylogenetic profiles and amino acid sequences. Notably, our method assigns a weight to each type of dataset and thereby selects informative ones. Data selection is useful for reducing data collection costs. For example, when a similar network inference problem must be solved for other organisms, the dataset excluded by our algorithm need not be collected. Results: First, we formulate supervised network inference as a kernel matrix completion problem, where the inference of edges boils down to estimation of missing entries of a kernel matrix. Then, an expectation--maximization algorithm is proposed to simultaneously infer the missing entries of the kernel matrix and the weights of multiple datasets. By introducing the weights, we can integrate multiple datasets selectively and thereby exclude irrelevant and noisy datasets. Our approach is favorably tested in two biological networks: a metabolic network and a protein interaction network. Availability: Software is available on request. Contact: [email protected] Supplementary information: A supplementary report including mathematical details is available at www.cbrc.jp/kato/faem/faem.html",
"We study the quantitative geometry of graphs in terms of their genus, using the structure of certain \"cut graphs,\" i.e. subgraphs whose removal leaves a planar graph. In particular, we give optimal bounds for random partitioning schemes, as well as various types of embeddings. Using these geometric primitives, we present exponentially improved dependence on genus for a number of problems like approximate max-flow min-cut theorems, approximations for uniform and nonuniform Sparsest Cut, treewidth approximation, Laplacian eigenvalue bounds, and Lipschitz extension theorems and related metric labeling problems. We list here a sample of these improvements. All the following statements refer to graphs of genus g, unless otherwise noted. • We show that such graphs admit an O(log g)-approximate multi-commodity max-flow min-cut theorem for the case of uniform demands. This bound is optimal, and improves over the previous bound of O(g) [KPR93, FT03]. For general demands, we show that the worst possible gap is O(log g + CP), where CP is the gap for planar graphs. This dependence is optimal, and already yields a bound of O(log g + √log n), improving over the previous bound of O(√g log n) [KLMN04]. • We give an O(√log g)-approximation for the uniform Sparsest Cut, balanced vertex separator, and treewidth problems, improving over the previous bound of O(g) [FHL05]. • If a graph G has genus g and maximum degree D, we show that the kth Laplacian eigenvalue of G is (log g)^2 · O(kgD/n), improving over the previous bound of g^2 · O(kgD/n) [KLPT09]. There is a lower bound of Ω(kgD/n), making this result almost tight. • We show that if (X, d) is the shortest-path metric on a graph of genus g and S ⊆ X, then every L-Lipschitz map f: S → Z into a Banach space Z admits an O(L log g)-Lipschitz extension f: X → Z. This improves over the previous bound of O(Lg) [LN05], and compares to a lower bound of Ω(L√log g). 
In a related way, we show that there is an O(log g)-approximation for the 0-extension problem on such graphs, improving over the previous O(g) bound. • We show that every n-vertex shortest-path metric on a graph of genus g embeds into L_2 with distortion O(log g + √log n), improving over the previous bound of O(√g log n). Our result is asymptotically optimal for every dependence g = g(n).",
"The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism."
]
} |
1908.09586 | 2969470403 | Given a hypergraph @math , the Minimum Connectivity Inference problem asks for a graph on the same vertex set as @math with the minimum number of edges such that the subgraph induced by every hyperedge of @math is connected. This problem has received a lot of attention these recent years, both from a theoretical and practical perspective, leading to several implemented approximation, greedy and heuristic algorithms. Concerning exact algorithms, only Mixed Integer Linear Programming (MILP) formulations have been experimented, all representing connectivity constraints by the means of graph flows. In this work, we investigate the efficiency of a constraint generation algorithm, where we iteratively add cut constraints to a simple ILP until a feasible (and optimal) solution is found. It turns out that our method is faster than the previous best flow-based MILP algorithm on random generated instances, which suggests that a constraint generation approach might be also useful for other optimization problems dealing with connectivity constraints. At last, we present the results of an enumeration algorithm for the problem. | This MILP model was then improved recently by Dar et al. @cite_12 , who mainly reduced the number of variables and constraints of the formulation, but still representing the connectivity constraints by the means of flows. In addition, they also presented and implemented a number of (already known and new) reduction rules. This new MILP formulation together with the reduction rules were then compared to the algorithm of Agarwal et al. on randomly-generated instances. For every kind of tested hypergraphs (different number and sizes of hyperedges), they observed a drastic improvement of both the execution time and the maximum size of instances that could be solved. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2170546552",
"1565834133",
"823587708",
"2021217869"
],
"abstract": [
"Mulmuley [Mul12a] recently gave an explicit version of Noether’s Normalization lemma for ring of invariants of matrices under simultaneous conjugation, under the conjecture that there are deterministic black-box algorithms for polynomial identity testing (PIT). He argued that this gives evidence that constructing such algorithms for PIT is beyond current techniques. In this work, we show this is not the case. That is, we improve Mulmuley’s reduction and correspondingly weaken the conjecture regarding PIT needed to give explicit Noether Normalization. We then observe that the weaker conjecture has recently been nearly settled by the authors ([FS12]), who gave quasipolynomial size hitting sets for the class of read-once oblivious algebraic branching programs (ROABPs). This gives the desired explicit Noether Normalization unconditionally, up to quasipolynomial factors. As a consequence of our proof we give a deterministic parallel polynomial-time algorithm for deciding if two matrix tuples have intersecting orbit closures, under simultaneous conjugation. We also study the strength of conjectures that Mulmuley requires to obtain similar results as ours. We prove that his conjectures are stronger, in the sense that the computational model he needs PIT algorithms for is equivalent to the well-known algebraic branching program (ABP) model, which is provably stronger than the ROABP model. Finally, we consider the depth-3 diagonal circuit model as defined by Saxena [Sax08], as PIT algorithms for this model also have implications in Mulmuley’s work. Previous work (such as [ASS12] and [FS12]) have given quasipolynomial size hitting sets for this model. In this work, we give a much simpler construction of such hitting sets, using techniques of Shpilka and Volkovich [SV09].",
"This report outlines a more general formulation of sequential patterns, which unifies the generalized patterns proposed by Srikant and Agarwal [SA96] and episode discovery approach taken by [MTV97]. We show that just by varying the values of timing constraint parameters and counting methods, our formulation can be made identical to either one of these. Furthermore, our approach defines several other counting methods which could be suitable for various applications. The algorithm used to discover these universal sequential patterns is based on a modification of the GSP algorithm proposed in [SA96]. Some of these modifications are made to take care of the newly introduced timing constraints and pattern restrictions, whereas some modifications are made for performance reasons. In the end, we present an application, which illustrates the deficiencies of current approaches that can be overcome by the proposed universal formulation.",
"In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklos, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklos, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations.",
"A well known but incorrect piece of functional programming folklore is that ML expressions can be efficiently typed in polynomial time. In probing the truth of that folklore, various researchers, including Wand, Buneman, Kanellakis, and Mitchell, constructed simple counterexamples consisting of typable ML programs having length n, with principal types having O(2^(cn)) distinct type variables and length O(2^(2^(cn))). When the types associated with these ML constructions were represented as directed acyclic graphs, their sizes grew as O(2^(cn)). The folklore was even more strongly contradicted by the recent result of Kanellakis and Mitchell that simply deciding whether or not an ML expression is typable is PSPACE-hard. We improve the latter result, showing that deciding ML typability is DEXPTIME-hard. As Kanellakis and Mitchell have shown containment in DEXPTIME, the problem is DEXPTIME-complete. The proof of DEXPTIME-hardness is carried out via a generic reduction: it consists of a very straightforward simulation of any deterministic one-tape Turing machine M with input k running in O(c^|k|) time by a polynomial-sized ML formula P_{M,k}, such that M accepts k iff P_{M,k} is typable. The simulation of the transition function δ of the Turing Machine is realized uniquely through terms in the lambda calculus without the use of the polymorphic let construct. We use let for two purposes only: to generate an exponential amount of blank tape for the Turing Machine simulation to begin, and to compose an exponential number of applications of the ML formula simulating state transition. It is purely the expressive power of ML polymorphism to succinctly express function composition which results in a proof of DEXPTIME-hardness. We conjecture that lower bounds on deciding typability for extensions to the typed lambda calculus can be regarded precisely in terms of this expressive capacity for succinct function composition. 
To further understand this lower bound, we relate it to the problem of proving equality of type variables in a system of type equations generated from an ML expression with let-polymorphism. We show that given an oracle for solving this problem, deciding typability would be in PSPACE, as would be the actual computation of the principal type of the expression, were it indeed typable."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Shoulder surfing is a widely known attack in which the adversary tries to infer the victim's authentication secret by looking over his or her shoulder. There is a significant body of research into mitigating the impact of shoulder-surfing attacks. An in-depth survey conducted by @cite_8 considered the threat not only in the context of authentication, but also in the context of routine smartphone usage. The survey showed that 130 out of 174 participants indicated that shoulder-surfing attacks occurred on public transportation. Victims most commonly defended against such an attack by modifying their posture or cancelling the authentication. Furthermore, a study conducted by @cite_18 found the perceived risk of shoulder surfing to be high in only 11 of 3410 situations. This demonstrates that people are not actively defending themselves against shoulder surfing, and more work is needed to improve the shoulder-surfing resistance of authentication techniques. | {
"cite_N": [
"@cite_18",
"@cite_8"
],
"mid": [
"2611149039",
"93892664",
"2139094422",
"2795614125"
],
"abstract": [
"Research has brought forth a variety of authentication systems to mitigate observation attacks. However, there is little work about shoulder surfing situations in the real world. We present the results of a user survey (N=174) in which we investigate actual stories about shoulder surfing on mobile devices from both users and observers. Our analysis indicates that shoulder surfing mainly occurs in an opportunistic, non-malicious way. It usually does not have serious consequences, but evokes negative feelings for both parties, resulting in a variety of coping strategies. Observed data was personal in most cases and ranged from information about interests and hobbies to login data and intimate details about third persons and relationships. Thus, our work contributes evidence for shoulder surfing in the real world and informs implications for the design of privacy protection mechanisms.",
"Traditional password based authentication scheme is vulnerable to shoulder surfing attack. So if an attacker sees a legitimate user to enter password then it is possible for the attacker to use that credentials later to illegally login into the system and may do some malicious activities. Many methodologies exist to prevent such attack. These methods are either partially observable or fully observable to the attacker. In this paper we have focused on detection of shoulder surfing attack rather than prevention. We have introduced the concept of tag digit to create a trap known as honeypot. Using the proposed methodology if the shoulder surfers try to login using others’ credentials then there is a high chance that they will be caught red handed. Comparative analysis shows that unlike the existing preventive schemes, the proposed methodology does not require much computation from users end. Thus from security and usability perspective the proposed scheme is quite robust and powerful.",
"Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods.",
"We evaluate the efficacy of shoulder surfing defenses for PIN-based authentication systems. We find tilting the device away from the observer, a widely adopted defense strategy, provides limited protection. We also evaluate a recently proposed defense incorporating an \"invisible pressure component\" into PIN entry. Contrary to earlier claims, our results show this provides little defense against malicious insider attacks. Observations during the study uncover successful attacker strategies for reconstructing a victim's PIN when faced with a tilt defense. Our evaluations identify common misconceptions regarding shoulder surfing defenses, and highlight the need to educate users on how to safeguard their credentials from these attacks."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | PIN keypads and pattern locks are commonly used methods for phone authentication. Unfortunately, these techniques are vulnerable to smudge attacks, because the user leaves oily residues on the screen. Previous work has demonstrated that smudge attacks are especially effective on pattern locks as users drag their fingers over the screen. Smudge attacks can also be used to limit the input space for PIN locks. @cite_24 found that as long as the line of sight is not perpendicular, it is easy to observe entered patterns based on smudges. Under ideal conditions, 92% of patterns could be recovered. These results demonstrate that even if the adversary is not able to actively observe the process of authentication, he or she can still recover the password with considerable success. In our work, we leverage pre-touch information to limit the number of touches the user makes on the screen, mitigating the effect of smudge attacks. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2068548805",
"2055389916",
"1626992774",
"2538700141"
],
"abstract": [
"Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password/PIN/pattern screen when reactivated. Passwords/PINs/patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords/PINs/patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples.",
"Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern.",
"PIN unlock is a popular screen locking mechanism used for protecting the sensitive private information on smart-phones. However, it is susceptible to a number of attacks such as guessing attacks, shoulder surfing attacks, smudge attacks, and side-channel attacks. Scramble keypad changes the keypad layout in each PIN-entry process to improve the security of PIN unlock. In this paper, the security and usability of scramble keypad for PIN unlock are studied. Our security analysis shows that scramble keypad can defend smudge attacks perfectly and greatly reduce the threats of side-channel attacks. A user study is conducted to demonstrate that scramble keypad has a better chance to defend shoulder surfing attacks than standard keypad. We also investigate how the usability of scramble keypad is compromised for improved security through a user study."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Now that smartphones are commonplace, traditional authentication techniques have been adapted to work on the small touchscreens of smartphones. @cite_25 compared speed and shoulder-surfing resistance of a scrambled PIN entry keypad and a normal PIN entry keypad. They found that the scrambled keypad was slower but more resistant to shoulder surfing. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2395297283",
"2803549391",
"93892664",
"2055389916"
],
"abstract": [
"Traditional user authentication methods using passcode or finger movement on smartphones are vulnerable to shoulder surfing attack, smudge attack, and keylogger attack. These attacks are able to infer a passcode based on the information collection of user’s finger movement or tapping input. As an alternative user authentication approach, eye tracking can reduce the risk of suffering those attacks effectively because no hand input is required. However, most existing eye tracking techniques are designed for large screen devices. Many of them depend on special hardware like high resolution eye tracker and special process like calibration, which are not readily available for smartphone users. In this paper, we propose a new eye tracking method for user authentication on a smartphone. It utilizes the smartphone’s front camera to capture a user’s eye movement trajectories which are used as the input of user authentication. No special hardware or calibration process is needed. We develop a prototype and evaluate its effectiveness on an Android smartphone. We recruit a group of volunteers to participate in the user study. Our evaluation results show that the proposed eye tracking technique achieves very high accuracy in user authentication.",
"Abstract We present a new Personal Identification Number (PIN) entry method for smartphones that can be used in security-critical applications, such as smartphone banking. The proposed “Two-Thumbs-Up” (TTU) scheme is resilient against observation attacks such as shoulder-surfing and camera recording, and guides users to protect their PIN information from eavesdropping by shielding the challenge area on the touch screen. To demonstrate the feasibility of TTU, we conducted a user study for TTU, and compared it with existing authentication methods (Normal PIN, Black and White PIN, and ColorPIN) in terms of usability and security. The study results demonstrate that TTU is more secure than other PIN entry methods in the presence of an observer recording multiple authentication sessions.",
"Traditional password based authentication scheme is vulnerable to shoulder surfing attack. So if an attacker sees a legitimate user to enter password then it is possible for the attacker to use that credentials later to illegally login into the system and may do some malicious activities. Many methodologies exist to prevent such attack. These methods are either partially observable or fully observable to the attacker. In this paper we have focused on detection of shoulder surfing attack rather than prevention. We have introduced the concept of tag digit to create a trap known as honeypot. Using the proposed methodology if the shoulder surfers try to login using others’ credentials then there is a high chance that they will be caught red handed. Comparative analysis shows that unlike the existing preventive schemes, the proposed methodology does not require much computation from users end. Thus from security and usability perspective the proposed scheme is quite robust and powerful.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password/PIN/pattern screen when reactivated. Passwords/PINs/patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords/PINs/patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Several works have examined the possibility of augmenting PIN keypads with gestures. SwiPIN, by von Zezschwitz et al. @cite_6 , divided the PIN keypad into two sections. Each number in each section corresponded to a different swipe gesture direction. Performing a swipe gesture on the correct section of the screen would insert the corresponding number. Their study demonstrated that this technique improved resistance against smudge attacks. Later work introduced "ForcePINs" @cite_11 , with which each PIN digit could be entered with different levels of finger pressure on the screen, to add an additional layer of challenge for shoulder surfers. However, results showed that there was no statistically significant difference in shoulder-surfing resistance between regular PINs and ForcePINs, because when users pressed harder, they also pressed for a noticeably longer time. | {
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2055389916",
"2139094422",
"1533974647",
"2036616308"
],
"abstract": [
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password/PIN/pattern screen when reactivated. Passwords/PINs/patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords/PINs/patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples.",
"Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods.",
"Recent researches have shown that motion sensors may be used as a side channel to infer keystrokes on the touchscreen of smartphones. However, the practicality of this attack is unclear. For example, does this attack work on different devices, screen dimensions, keyboard layouts, or keyboard types? Does this attack depend on specific users or is it user independent? To answer these questions, we conducted a user study where 21 participants typed a total of 47,814 keystrokes on four different mobile devices in six settings. Our results show that this attack remains effective even though the accuracy is affected by user habits, device dimension, screen orientation, and keyboard layout. On a number-only keyboard, after the attacker tries 81 4-digit PINs, the probability that she has guessed the correct PIN is 65%, which improves the accuracy rate of random guessing by 81 times. Our study also indicates that inference based on the gyroscope is more accurate than that based on the accelerometer. We evaluated two classification techniques in our prototype and found that they are similarly effective.",
"Current standard PIN entry systems for mobile devices are not safe to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system based on the combination of vibration and visual information for mobile devices. This system only uses four vibration patterns, with which users enter a digit by two distinct selections. We believe that this design secures PIN entry, and allows users to easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two kinds of prototypes of VibraInput. The experiment shows that the mean failure rate is 4.0%; moreover, the system shows good security properties."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Other works have looked beyond purely visual representations of PINs by incorporating haptic and audio feedback. @cite_17 created an observation-resistant authentication technique by providing no visual clues to the user. The technique renders a wheel on the screen with identical sections. However, when users drag their fingers over the sections of the wheel, tactile feedback is presented with varying lengths and strengths. To select a section, users drag their fingers to the middle of the wheel. After each entry, the sections are shuffled to provide resistance against smudge attacks. Similarly, VibraInput @cite_1 used an on-screen, rotary wheel with two levels. The outer level contained the letters A through D, each corresponding to a fixed vibration pattern (that has to be remembered by the user). The inner level corresponded to the PIN numbers 0 through 9. Upon starting PIN entry, the phone would vibrate the pattern of a letter. The user would then rotate the outer wheel to align the letter with the number to select on the inner wheel. By repeating this process, the technique could use a process of elimination to ascertain the PIN number.
The overall technique would repeat until the entire PIN was entered. | {
"cite_N": [
"@cite_1",
"@cite_17"
],
"mid": [
"2036616308",
"1973831058",
"2079024329",
"2055389916"
],
"abstract": [
"Current standard PIN entry systems for mobile devices are not safe to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system based on the combination of vibration and visual information for mobile devices. This system only uses four vibration patterns, with which users enter a digit by two distinct selections. We believe that this design secures PIN entry, and allows users to easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two kinds of prototypes of VibraInput. The experiment shows that the mean failure rate is 4.0%; moreover, the system shows good security properties.",
"Today's smartphones provide services and uses that required a panoply of dedicated devices not so long ago. With them, we listen to music, play games or chat with our friends; but we also read our corporate email and documents, manage our online banking; and we have started to use them directly as a means of payment. In this paper, we aim to raise awareness of side-channel attacks even when strong isolation protects sensitive applications. Previous works have studied the use of the phone accelerometer and gyroscope as side channel data to infer PINs. Here, we describe a new side-channel attack that makes use of the video camera and microphone to infer PINs entered on a number-only soft keyboard on a smartphone. The microphone is used to detect touch events, while the camera is used to estimate the smartphone's orientation, and correlate it to the position of the digit tapped by the user. We present the design, implementation and early evaluation of PIN Skimmer, which has a mobile application and a server component. The mobile application collects touch-event orientation patterns and later uses learnt patterns to infer PINs entered in a sensitive application. When selecting from a test set of 50 4-digit PINs, PIN Skimmer correctly infers more than 30% of PINs after 2 attempts, and more than 50% of PINs after 5 attempts on android-powered Nexus S and Galaxy S3 phones. When selecting from a set of 200 8-digit PINs, PIN Skimmer correctly infers about 45% of the PINs after 5 attempts and 60% after 10 attempts. It turns out to be difficult to prevent such side-channel attacks, so we provide guidelines for developers to mitigate present and future side-channel attacks on PIN input.",
"Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably fail security (too long a timeout) or usability (too short a timeout) goals. One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby but not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and input events received by the terminal should correlate because their source is the same - the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85% accuracy in verifying the correct user and identified all adversaries within 11s. For a different threshold that trades security for usability, ZEBRA correctly verified 90% of users and identified all adversaries within 50s.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password PIN pattern screen when reactivated. Passwords PINs patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords PINs patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5 with 3 gestures using only 25 training samples."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Two-Thumbs-Up (TTU) @cite_9 prevents shoulder-surfing attacks by requiring the user to cover the screen with their hands. This forms a "handshield" and enters a challenge mode. If users move their hands away from the screen, the authentication technique disappears. TTU randomly associates five "response" letters with two digits each, presenting the digits and letters on either side of the screen. The user then has to tap on the letter corresponding to the next PIN digit. After a certain number (dependent on PIN length) of correctly selected letters, the authentication process is complete. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2803549391",
"93892664",
"2055389916",
"2139094422"
],
"abstract": [
"Abstract We present a new Personal Identification Number (PIN) entry method for smartphones that can be used in security-critical applications, such as smartphone banking. The proposed “Two-Thumbs-Up” (TTU) scheme is resilient against observation attacks such as shoulder-surfing and camera recording, and guides users to protect their PIN information from eavesdropping by shielding the challenge area on the touch screen. To demonstrate the feasibility of TTU, we conducted a user study for TTU, and compared it with existing authentication methods (Normal PIN, Black and White PIN, and ColorPIN) in terms of usability and security. The study results demonstrate that TTU is more secure than other PIN entry methods in the presence of an observer recording multiple authentication sessions.",
"Traditional password based authentication scheme is vulnerable to shoulder surfing attack. So if an attacker sees a legitimate user to enter password then it is possible for the attacker to use that credentials later to illegally login into the system and may do some malicious activities. Many methodologies exist to prevent such attack. These methods are either partially observable or fully observable to the attacker. In this paper we have focused on detection of shoulder surfing attack rather than prevention. We have introduced the concept of tag digit to create a trap known as honeypot. Using the proposed methodology if the shoulder surfers try to login using others’ credentials then there is a high chance that they will be caught red handed. Comparative analysis shows that unlike the existing preventive schemes, the proposed methodology does not require much computation from users end. Thus from security and usability perspective the proposed scheme is quite robust and powerful.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password/PIN/pattern screen when reactivated. Passwords/PINs/patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords/PINs/patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5% with 3 gestures using only 25 training samples.",
"Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input. With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Harbach et al. @cite_23 focused on comparing PIN locks and pattern locks. They were able to observe the behaviour of 134 smartphone users over one month, revealing differences between the two techniques. Results showed that although pattern locks are faster, users are six times as likely to make mistakes compared to PIN locks. When including failed attempts, there were no differences in authentication time between the two techniques. When a user made a mistake entering a PIN or pattern, subsequent successful attempts took more time, presumably because the user took more care when repeating the authentication. Visual feedback did not influence the error rate or the entry time. Similarly, our 3D Pattern technique improves shoulder-surfing resistance by reducing visual feedback during authentication. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2315247372",
"2055389916",
"1973831058",
"2181155974"
],
"abstract": [
"To prevent unauthorized parties from accessing data stored on their smartphones, users have the option of enabling a \"lock screen\" that requires a secret code (e.g., PIN, drawing a pattern, or biometric) to gain access to their devices. We present a detailed analysis of the smartphone locking mechanisms currently available to billions of smartphone users worldwide. Through a month-long field study, we logged events from a panel of users with instrumented smartphones (N=134). We are able to show how existing lock screen mechanisms provide users with distinct tradeoffs between usability (unlocking speed vs. unlocking frequency) and security. We find that PIN users take longer to enter their codes, but commit fewer errors than pattern users, who unlock more frequently and are very prone to errors. Overall, PIN and pattern users spent the same amount of time unlocking their devices on average. Additionally, unlock performance seemed unaffected for users enabling the stealth mode for patterns. Based on our results, we identify areas where device locking mechanisms can be improved to result in fewer human errors -- increasing usability -- while also maintaining security.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password PIN pattern screen when reactivated. Passwords PINs patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords PINs patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5 with 3 gestures using only 25 training samples.",
"Today's smartphones provide services and uses that required a panoply of dedicated devices not so long ago. With them, we listen to music, play games or chat with our friends; but we also read our corporate email and documents, manage our online banking; and we have started to use them directly as a means of payment. In this paper, we aim to raise awareness of side-channel attacks even when strong isolation protects sensitive applications. Previous works have studied the use of the phone accelerometer and gyroscope as side channel data to infer PINs. Here, we describe a new side-channel attack that makes use of the video camera and microphone to infer PINs entered on a number-only soft keyboard on a smartphone. The microphone is used to detect touch events, while the camera is used to estimate the smartphone's orientation, and correlate it to the position of the digit tapped by the user. We present the design, implementation and early evaluation of PIN Skimmer, which has a mobile application and a server component. The mobile application collects touch-event orientation patterns and later uses learnt patterns to infer PINs entered in a sensitive application. When selecting from a test set of 50 4-digit PINs, PIN Skimmer correctly infers more than 30 of PINs after 2 attempts, and more than 50 of PINs after 5 attempts on android-powered Nexus S and Galaxy S3 phones. When selecting from a set of 200 8-digit PINs, PIN Skimmer correctly infers about 45 of the PINs after 5 attempts and 60 after 10 attempts. It turns out to be difficult to prevent such side-channel attacks, so we provide guidelines for developers to mitigate present and future side-channel attacks on PIN input.",
"A lot of research is being conducted into improving the usability and security of phone-unlocking. There is however a severe lack of scientific data on users’ current unlocking behavior and perceptions. We performed an online survey (n = 260) and a one-month field study (n = 52) to gain insights into real world (un)locking behavior of smartphone users. One of the main goals was to find out how much overhead unlocking and authenticating adds to the overall phone usage and in how many unlock interactions security (i.e. authentication) was perceived as necessary. We also investigated why users do or do not use a lock screen and how they cope with smartphone-related risks, such as shoulder surfing or unwanted accesses. Among other results, we found that on average, participants spent around 2.9% of their smartphone interaction time with authenticating (9% in the worst case). Participants that used a secure lock screen like PIN or Android unlock patterns considered it unnecessary in 24.1% of situations. Shoulder surfing was perceived to be a relevant risk in only 11% of 3410 sampled situations."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Another category of PIN entry techniques uses pictures or other graphics. In SemanticLock @cite_10 , users arrange icons on the screen in a memorable way. The user is authenticated based on correct placement of the icons. In a similar work, Awase-E @cite_5 , Takada and Koike leverage photos taken on a user's smartphone. The lock screen breaks a user-chosen photograph up into smaller chunks, and shows nine chunks of various photographs all at once. The user then has to select the tile from the correct photograph four times in a row to unlock the phone. | {
"cite_N": [
"@cite_5",
"@cite_10"
],
"mid": [
"2809689775",
"1973831058",
"2315247372",
"2046378201"
],
"abstract": [
"We introduce SemanticLock, a single factor graphical authentication solution for mobile devices. SemanticLock uses a set of graphical images as password tokens that construct a semantically memorable story representing the user s password. A familiar and quick action of dragging or dropping the images into their respective positions either in a or in movements on the the touchscreen is what is required to use our solution. The authentication strength of the SemanticLock is based on the large number of possible semantic constructs derived from the positioning of the image tokens and the type of images selected. Semantic Lock has a high resistance to smudge attacks and it equally exhibits a higher level of memorability due to its graphical paradigm. In a three weeks user study with 21 participants comparing SemanticLock against other authentication systems, we discovered that SemanticLock outperformed the PIN and matched the PATTERN both on speed, memorability, user acceptance and usability. Furthermore, qualitative test also show that SemanticLock was rated more superior in like-ability. SemanticLock was also evaluated while participants walked unencumbered and walked encumbered carrying \"everyday\" items to analyze the effects of such activities on its usage.",
"Today's smartphones provide services and uses that required a panoply of dedicated devices not so long ago. With them, we listen to music, play games or chat with our friends; but we also read our corporate email and documents, manage our online banking; and we have started to use them directly as a means of payment. In this paper, we aim to raise awareness of side-channel attacks even when strong isolation protects sensitive applications. Previous works have studied the use of the phone accelerometer and gyroscope as side channel data to infer PINs. Here, we describe a new side-channel attack that makes use of the video camera and microphone to infer PINs entered on a number-only soft keyboard on a smartphone. The microphone is used to detect touch events, while the camera is used to estimate the smartphone's orientation, and correlate it to the position of the digit tapped by the user. We present the design, implementation and early evaluation of PIN Skimmer, which has a mobile application and a server component. The mobile application collects touch-event orientation patterns and later uses learnt patterns to infer PINs entered in a sensitive application. When selecting from a test set of 50 4-digit PINs, PIN Skimmer correctly infers more than 30 of PINs after 2 attempts, and more than 50 of PINs after 5 attempts on android-powered Nexus S and Galaxy S3 phones. When selecting from a set of 200 8-digit PINs, PIN Skimmer correctly infers about 45 of the PINs after 5 attempts and 60 after 10 attempts. It turns out to be difficult to prevent such side-channel attacks, so we provide guidelines for developers to mitigate present and future side-channel attacks on PIN input.",
"To prevent unauthorized parties from accessing data stored on their smartphones, users have the option of enabling a \"lock screen\" that requires a secret code (e.g., PIN, drawing a pattern, or biometric) to gain access to their devices. We present a detailed analysis of the smartphone locking mechanisms currently available to billions of smartphone users worldwide. Through a month-long field study, we logged events from a panel of users with instrumented smartphones (N=134). We are able to show how existing lock screen mechanisms provide users with distinct tradeoffs between usability (unlocking speed vs. unlocking frequency) and security. We find that PIN users take longer to enter their codes, but commit fewer errors than pattern users, who unlock more frequently and are very prone to errors. Overall, PIN and pattern users spent the same amount of time unlocking their devices on average. Additionally, unlock performance seemed unaffected for users enabling the stealth mode for patterns. Based on our results, we identify areas where device locking mechanisms can be improved to result in fewer human errors -- increasing usability -- while also maintaining security.",
"In this paper, we propose and evaluate Use Your Illusion, a novel mechanism for user authentication that is secure and usable regardless of the size of the device on which it is used. Our system relies on the human ability to recognize a degraded version of a previously seen image. We illustrate how distorted images can be used to maintain the usability of graphical password schemes while making them more resilient to social engineering or observation attacks. Because it is difficult to mentally \"revert\" a degraded image, without knowledge of the original image, our scheme provides a strong line of defense against impostor access, while preserving the desirable memorability properties of graphical password schemes. Using low-fidelity tests to aid in the design, we implement prototypes of Use Your Illusion as i) an Ajax-based web service and ii) on Nokia N70 cellular phones. We conduct a between-subjects usability study of the cellular phone prototype with a total of 99 participants in two experiments. We demonstrate that, regardless of their age or gender, users are very skilled at recognizing degraded versions of self-chosen images, even on small displays and after time periods of one month. Our results indicate that graphical passwords with distorted images can achieve equivalent error rates to those using traditional images, but only when the original image is known."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | There is considerable research exploring whether or not lock screens are even necessary at all, by applying continuous authentication, also known as implicit authentication. Continuous authentication systems analyze an individual's regular patterns of touches on the screen, and build a model. A different user would have different patterns, and could be denied access by the system. With the Touchalytics project @cite_0 , the authors were able to use continuous authentication to identify the user with an error rate below 4%. However, @cite_19 showed that an attacker, merely watching a video of the target using their phone, could bypass swipe-based continuous authentication at least 75% of the time. | {
"cite_N": [
"@cite_0",
"@cite_19"
],
"mid": [
"2151854612",
"2102932275",
"1964160137",
"2055389916"
],
"abstract": [
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.",
"Current smartphones generally cannot continuously authenticate users during runtime. This poses severe security and privacy threats: A malicious user can manipulate the phone if bypassing the screen lock. To solve this problem, our work adopts a continuous and passive authentication mechanism based on a user’s touch operations on the touchscreen. Such a mechanism is suitable for smartphones, as it requires no extra hardware or intrusive user interface. We study how to model multiple types of touch data and perform continuous authentication accordingly. As a first attempt, we also investigate the fundamentals of touch operations as biometrics by justifying their distinctiveness and permanence. A onemonth experiment is conducted involving over 30 users. Our experiment results verify that touch biometrics can serve as a promising method for continuous and passive authentication.",
"Touch-based verification --- the use of touch gestures (e.g., swiping, zooming, etc.) to authenticate users of touch screen devices --- has recently been widely evaluated for its potential to serve as a second layer of defense to the PIN lock mechanism. In all performance evaluations of touch-based authentication systems however, researchers have assumed naive (zero-effort) forgeries in which the attacker makes no effort to mimic a given gesture pattern. In this paper we demonstrate that a simple \"Lego\" robot driven by input gleaned from general population swiping statistics can generate forgeries that achieve alarmingly high penetration rates against touch-based authentication systems. Using the best classification algorithms in touch-based authentication, we rigorously explore the effect of the attack, finding that it increases the Equal Error Rates of the classifiers by between 339% and 1004% depending on parameters such as the failure-to-enroll threshold and the type of touch stroke generated by the robot. The paper calls into question the zero-effort impostor testing approach used to benchmark the performance of touch-based authentication systems.",
"With the rich functionalities and enhanced computing capabilities available on mobile computing devices with touch screens, users not only store sensitive information (such as credit card numbers) but also use privacy sensitive applications (such as online banking) on these devices, which make them hot targets for hackers and thieves. To protect private information, such devices typically lock themselves after a few minutes of inactivity and prompt a password PIN pattern screen when reactivated. Passwords PINs patterns based schemes are inherently vulnerable to shoulder surfing attacks and smudge attacks. Furthermore, passwords PINs patterns are inconvenient for users to enter frequently. In this paper, we propose GEAT, a gesture based user authentication scheme for the secure unlocking of touch screen devices. Unlike existing authentication schemes for touch screen devices, which use what user inputs as the authentication secret, GEAT authenticates users mainly based on how they input, using distinguishing features such as finger velocity, device acceleration, and stroke time. Even if attackers see what gesture a user performs, they cannot reproduce the behavior of the user doing gestures through shoulder surfing or smudge attacks. We implemented GEAT on Samsung Focus running Windows, collected 15009 gesture samples from 50 volunteers, and conducted real-world experiments to evaluate GEAT's performance. Experimental results show that our scheme achieves an average equal error rate of 0.5 with 3 gestures using only 25 training samples."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Some work has explored applying the principles of continuous authentication to augment traditional lock screen techniques. @cite_12 use spatial touch features in addition to previously used temporal touch features on keyboards to verify users based on their individual text entry behaviours. Examples of spatial touch features include touch offsets, angles, and pressures. By incorporating such spatial features, user recognition accuracy was improved. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2064376060",
"2151854612",
"1964160137",
"2079024329"
],
"abstract": [
"Authentication methods can be improved by considering implicit, individual behavioural cues. In particular, verifying users based on typing behaviour has been widely studied with physical keyboards. On mobile touchscreens, the same concepts have been applied with little adaptations so far. This paper presents the first reported study on mobile keystroke biometrics which compares touch-specific features between three different hand postures and evaluation schemes. Based on 20.160 password entries from a study with 28 participants over two weeks, we show that including spatial touch features reduces implicit authentication equal error rates (EER) by 26.4 - 36.8 relative to the previously used temporal features. We also show that authentication works better for some hand postures than others. To improve applicability and usability, we further quantify the influence of common evaluation assumptions: known attacker data, training and testing on data from a single typing session, and fixed hand postures. We show that these practices can lead to overly optimistic evaluations. In consequence, we describe evaluation recommendations, a probabilistic framework to handle unknown hand postures, and ideas for further improvements.",
"We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.",
"Touch-based verification --- the use of touch gestures (e.g., swiping, zooming, etc.) to authenticate users of touch screen devices --- has recently been widely evaluated for its potential to serve as a second layer of defense to the PIN lock mechanism. In all performance evaluations of touch-based authentication systems however, researchers have assumed naive (zero-effort) forgeries in which the attacker makes no effort to mimic a given gesture pattern. In this paper we demonstrate that a simple \"Lego\" robot driven by input gleaned from general population swiping statistics can generate forgeries that achieve alarmingly high penetration rates against touch-based authentication systems. Using the best classification algorithms in touch-based authentication, we rigorously explore the effect of the attack, finding that it increases the Equal Error Rates of the classifiers by between 339% and 1004% depending on parameters such as the failure-to-enroll threshold and the type of touch stroke generated by the robot. The paper calls into question the zero-effort impostor testing approach used to benchmark the performance of touch-based authentication systems.",
"Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably fail security (too long a timeout) or usability (too short a timeout) goals. One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby but not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and input events received by the terminal should correlate because their source is the same - the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85 accuracy in verifying the correct user and identified all adversaries within 11s. For a different threshold that trades security for usability, ZEBRA correctly verified 90 of users and identified all adversaries within 50s."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Many recent works on touchscreen interactions have started exploring pre-touch information; that is, positional information about the user's hands or fingers before making contact with the screen. For example, TouchCuts and TouchZoom @cite_21 used pre-touch finger distance to expand nearby targets on screen, facilitating easier target selection. This general approach has not yet been explored in the context of authentication techniques resistant to shoulder surfing. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2397886250",
"2136328445",
"2004403975",
"2078073494"
],
"abstract": [
"Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an \"ad-lib interface\" that fades in a different UI--appropriate to the context--as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.",
"Although touch-screen laptops are increasing in popularity, users still do not comfortably rely on touch in these environments, as current software interfaces were not designed for being used by the finger. In this paper, we first demonstrate the benefits of using touch as a complementary input modality along with the keyboard and mouse or touchpad in a laptop setting. To alleviate the frustration users experience with touch, we then design two techniques, TouchCuts, a single target expansion technique, and ,i>TouchZoom, i>, a multiple target expansion technique. Both techniques facilitate the selection of small icons, by detecting the finger proximity above the display surface, and expanding the target as the finger approaches. In a controlled evaluation, we show that our techniques improve performance in comparison to both the computer mouse and a baseline touch-based target acquisition technique. We conclude by discussing other application scenarios that our techniques support.",
"We present the design and evaluation of TapTap and MagStick, two thumb interaction techniques for target acquisition on mobile devices with small touch-screens. These two techniques address all the issues raised by the selection of targets with the thumb on small tactile screens: screen accessibility, visual occlusion and accuracy. A controlled experiment shows that TapTap and MagStick allow the selection of targets in all areas of the screen in a fast and accurate way. They were found to be faster than four previous techniques except Direct Touch which, although faster, is too error prone. They also provided the best error rate of all tested techniques. Finally the paper also provides a comprehensive study of various techniques for thumb based touch-screen target selection.",
"A method of reducing the perceived latency of touch input by employing a model to predict touch events before the finger reaches the touch surface is proposed. A corpus of 3D finger movement data was collected, and used to develop a model capable of three granularities at different phases of movement: initial direction, final touch location, time of touchdown. The model is validated for target distances >= 25.5cm, and demonstrated to have a mean accuracy of 1.05cm 128ms before the user touches the screen. Preference study of different levels of latency reveals a strong preference for unperceived latency touchdown feedback. A form of 'soft' feedback, as well as other uses for this prediction to improve performance, is proposed."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant against two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | Another common application of pre-touch information is reducing the perceived latency of touchscreen interactions. This approach has been employed for tabletop displays @cite_4 , achieving a touch location prediction error of about 1 cm. The approach was implemented by tracking the user's index finger location using motion capture with fiducial markers, which are small retro-reflective spheres that can be precisely tracked by IR cameras. In the prototype of our 3D Pattern technique, we also use a motion capture system for finger position tracking. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2078073494",
"2397886250",
"2058352122",
"2513258067"
],
"abstract": [
"A method of reducing the perceived latency of touch input by employing a model to predict touch events before the finger reaches the touch surface is proposed. A corpus of 3D finger movement data was collected, and used to develop a model capable of three granularities at different phases of movement: initial direction, final touch location, time of touchdown. The model is validated for target distances >= 25.5cm, and demonstrated to have a mean accuracy of 1.05cm 128ms before the user touches the screen. Preference study of different levels of latency reveals a strong preference for unperceived latency touchdown feedback. A form of 'soft' feedback, as well as other uses for this prediction to improve performance, is proposed.",
"Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an \"ad-lib interface\" that fades in a different UI--appropriate to the context--as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.",
"This paper presents a technique for natural, fingertip-based interaction with virtual objects in Augmented Reality (AR) environments. We use image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects. Unlike previous AR interfaces, this approach allows users to interact with virtual content using natural hand gestures. The paper describes how these techniques were applied in an urban planning interface, and also presents preliminary informal usability results.",
"With the heated trend of augmented reality (AR) and popularity of smart head-mounted devices, the development of natural human device interaction is important, especially the hand gesture based interaction. This paper presents a solution for the point gesture based interaction in the egocentric vision and its application. Firstly, a dataset named EgoFinger is established focusing on the pointing gesture for the egocentric vision. We discuss the dataset collection detail and as well the comprehensive analysis of this dataset, including background and foreground color distribution, hand occurrence likelihood, scale and pointing angle distribution of hand and finger, and the manual labeling error analysis. The analysis shows that the dataset covers substantial data samples in various environments and dynamic hand shapes. Furthermore, we propose a two-stage Faster R-CNN based hand detection and dual-target fingertip detection framework. Comparing with state-of-art tracking and detection algorithm, it performs the best in both hand and fingertip detection. With the large-scale dataset, we achieve fingertip detection error at about 12.22 pixels in 640px × 480px video frame. Finally, using the fingertip detection result, we design and implement an input system for the egocentric vision, i.e., Ego-Air-Writing. By considering the fingertip as a pen, the user with wearable glass can write character in the air and interact with system using simple hand gestures."
]
} |
1908.09165 | 2969385932 | Smartphones store a significant amount of personal and private information, and are playing an increasingly important role in people's lives. It is important for authentication techniques to be more resistant to two known attacks called shoulder surfing and smudge attacks. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of a new input paradigm called pre-touch, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We implement the technique and evaluate it in a pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes about 8 seconds to authenticate, it is immune to smudge attacks and promises to be more resistant to shoulder surfing. | We anticipate pre-touch sensing to become available on commodity smartphones in the near future. In 2016, @cite_2 explored how a smartphone with a self-capacitance touchscreen could enable pre-touch information to be sensed, and applied this information in various smartphone applications. We envision that our pre-touch PIN entry techniques can be used on smartphones without additional motion tracking hardware. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2397886250",
"2090465075",
"1980481605",
"2402069821"
],
"abstract": [
"Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an \"ad-lib interface\" that fades in a different UI--appropriate to the context--as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.",
"Today's smartphones are shipped with various embedded motion sensors, such as the accelerometer, gyroscope, and orientation sensors. These motion sensors are useful in supporting the mobile UI innovation and motion-based commands. However, they also bring potential risks of leaking user's private information as they allow third party applications to monitor the motion changes of smartphones. In this paper, we study the feasibility of inferring a user's tap inputs to a smartphone with its integrated motion sensors. Specifically, we utilize an installed trojan application to stealthily monitor the movement and gesture changes of a smartphone using its on-board motion sensors. When the user is interacting with the trojan application, it learns the motion change patterns of tap events. Later, when the user is performing sensitive inputs, such as entering passwords on the touchscreen, the trojan application applies the learnt pattern to infer the occurrence of tap events on the touchscreen as well as the tapped positions on the touchscreen. For demonstration, we present the design and implementation of TapLogger, a trojan application for the Android platform, which stealthily logs the password of screen lock and the numbers entered during a phone call (e.g., credit card and PIN numbers). Statistical results are presented to show the feasibility of such inferences and attacks.",
"The rapid deployment of sensing technology in smartphones and the explosion of their usage in people's daily lives provide users with the ability to collectively sense the world. This leads to a growing trend of mobile healthcare systems utilizing sensing data collected from smartphones with without additional external sensors to analyze and understand people's physical and mental states. However, such healthcare systems are vulnerable to user spoofing attacks, in which an adversary distributes his registered device to other users such that data collected from these users can be claimed as his own to obtain more healthcare benefits and undermine the successful operation of mobile healthcare systems. Existing mitigation approaches either only rely on a secret PIN number (which can not deal with colluded attacks) or require an explicit user action for verification. In this paper, we propose a user verification scheme leveraging unique gait patterns derived from acceleration readings in mobile healthcare systems to detect possible user spoofing attacks. Our framework exploits the readily available accelerometers embedded within smartphones for user verification. Specifically, our user spoofing attack mitigation scheme (which consists of three components, namely Step Cycle Identification, Step Cycle Interpolation, and Similarity Score Computation) is used to extract gait patterns from run-time accelerometer measurements to perform robust user verification under various walking speeds. Our experiments using 322 smartphone-based traces over a period of 6 months confirm that our scheme is highly effective for detecting user spoofing attacks. This strongly indicates the feasibility of using smartphone based low grade accelerometer to conduct gait recognition and facilitate effective user verification without active user cooperation.",
"Previous work on muscle activity sensing has leveraged specialized sensors such as electromyography and force sensitive resistors. While these sensors show great potential for detecting finger hand gestures, they require additional hardware that adds to the cost and user discomfort. Past research has utilized sensors on commercial devices, focusing on recognizing gross hand gestures. In this work we present Serendipity, a new technique for recognizing unremarkable and fine-motor finger gestures using integrated motion sensors (accelerometer and gyroscope) in off-the-shelf smartwatches. Our system demonstrates the potential to distinguish 5 fine-motor gestures like pinching, tapping and rubbing fingers with an average f1-score of 87 . Our work is the first to explore the feasibility of using solely motion sensors on everyday wearable devices to detect fine-grained gestures. This promising technology can be deployed today on current smartwatches and has the potential to be applied to cross-device interactions, or as a tool for research in fields involving finger and hand motion."
]
} |
1908.09340 | 2969469636 | This paper mainly studies one-example and few-example video person re-identification. A multi-branch network PAM that jointly learns local and global features is proposed. PAM has high accuracy, few parameters and converges fast, which makes it suitable for few-example person re-identification. We iteratively estimate labels for unlabeled samples, incorporate them into the training set, and train a more robust network. We propose the static relative distance sampling (SRD) strategy based on the relative distance between classes. For the problem that SRD cannot use all unlabeled samples, we propose the adaptive relative distance sampling (ARD) strategy. For the one-example setting, we get 89.78%, 56.13% rank-1 accuracy on PRID2011 and iLIDS-VID respectively, and 85.16%, 45.36% mAP on DukeMTMC and MARS respectively, which exceeds the previous methods by a large margin. | In the first type, @cite_8 propose a framework for solving the problem of one-shot classification. They first build a fully convolutional siamese network based on a verification loss, and then use this network to calculate the similarity between the image to be identified and other labeled samples. The image is then recognized as a sample of the category to which the most similar labeled sample belongs. @cite_1 propose the matching network. During the training process, some samples are selected to form a support set and the remaining samples are used as training images. They construct different encoders for the support set and the training images. The classifier's output is a weighted sum of the predicted values between the support set and the training images. During the test process, the one-shot samples are used as the support set to predict the category of new images. @cite_14 use meta-learning methods to learn multiple similar tasks, and build two encoders for the gallery and probe respectively.
Based on these encoders, they obtain the gallery images' embeddings according to the characteristics of the remaining gallery images, and the probe images' embeddings according to the characteristics of the gallery images. In this way they obtain a more discriminative feature representation. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_8"
],
"mid": [
"2737691244",
"2883311563",
"2228002889",
"2526782364"
],
"abstract": [
"In this paper, we study the problem of training large-scale face identification model with imbalanced training data. This problem naturally exists in many real scenarios including large-scale celebrity recognition, movie actor annotation, etc. Our solution contains two components. First, we build a face feature extraction model, and improve its performance, especially for the persons with very limited training samples, by introducing a regularizer to the cross entropy loss for the multi-nomial logistic regression (MLR) learning. This regularizer encourages the directions of the face features from the same class to be close to the direction of their corresponding classification weight vector in the logistic regression. Second, we build a multi-class classifier using MLR on top of the learned face feature extraction model. Since the standard MLR has poor generalization capability for the one-shot classes even if these classes have been oversampled, we propose a novel supervision signal called underrepresented-classes promotion loss, which aligns the norms of the weight vectors of the one-shot classes (a.k.a. underrepresented-classes) to those of the normal classes. In addition to the original cross entropy loss, this new loss term effectively promotes the underrepresented classes in the learned model and leads to a remarkable improvement in face recognition performance. We test our solution on the MS-Celeb-1M low-shot learning benchmark task. Our solution recognizes 94.89 of the test images at the precision of 99 for the one-shot classes. To the best of our knowledge, this is the best performance among all the published methods using this benchmark task with the same setup, including all the participants in the recent MS-Celeb-1M challenge at ICCV 2017.",
"Matching images and sentences demands a fine understanding of both modalities. In this paper, we propose a new system to discriminatively embed the image and text to a shared visual-textual space. In this field, most existing works apply the ranking loss to pull the positive image text pairs close and push the negative pairs apart from each other. However, directly deploying the ranking loss is hard for network learning, since it starts from the two heterogeneous features to build inter-modal relationship. To address this problem, we propose the instance loss which explicitly considers the intra-modal data distribution. It is based on an unsupervised assumption that each image text group can be viewed as a class. So the network can learn the fine granularity from every image text group. The experiment shows that the instance loss offers better weight initialization for the ranking loss, so that more discriminative embeddings can be learned. Besides, existing works usually apply the off-the-shelf features, i.e., word2vec and fixed visual feature. So in a minor contribution, this paper constructs an end-to-end dual-path convolutional network to learn the image and text representations. End-to-end learning allows the system to directly learn from the data and fully utilize the supervision. On two generic retrieval datasets (Flickr30k and MSCOCO), experiments demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language based person retrieval, we improve the state of the art by a large margin. The code has been made publicly available.",
"This paper introduces a new approach to address the person re-identification problem in cameras with non-overlapping fields of view. Unlike previous approaches that learn Mahalanobis-like distance metrics in some transformed feature space, we propose to learn a dictionary that is capable of discriminatively and sparsely encoding features representing different people. Our approach directly addresses two key challenges in person re-identification: viewpoint variations and discriminability. First, to tackle viewpoint and associated appearance changes, we learn a single dictionary to represent both gallery and probe images in the training phase. We then discriminatively train the dictionary by enforcing explicit constraints on the associated sparse representations of the feature vectors. In the testing phase, we re-identify a probe image by simply determining the gallery image that has the closest sparse representation to that of the probe image in the Euclidean sense. Extensive performance evaluations on three publicly available multi-shot re-identification datasets demonstrate the advantages of our algorithm over several state-of-the-art dictionary learning, temporal sequence matching, and spatial appearance and metric learning based techniques.",
"Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method."
]
} |
1908.09340 | 2969469636 | This paper mainly studies one-example and few-example video person re-identification. A multi-branch network PAM that jointly learns local and global features is proposed. PAM has high accuracy, few parameters and converges fast, which makes it suitable for few-example person re-identification. We iteratively estimate labels for unlabeled samples, incorporate them into the training set, and train a more robust network. We propose the static relative distance sampling (SRD) strategy based on the relative distance between classes. For the problem that SRD cannot use all unlabeled samples, we propose the adaptive relative distance sampling (ARD) strategy. For the one-example setting, we get 89.78%, 56.13% rank-1 accuracy on PRID2011 and iLIDS-VID respectively, and 85.16%, 45.36% mAP on DukeMTMC and MARS respectively, which exceeds the previous methods by a large margin. | In the second type, @cite_9 establish a graph for each camera. They view labeled samples as the nodes of the graph, and the distances between video sequence features as the paths. Unlabeled samples are mapped into different graphs (namely, estimating the labels) so as to minimize the objective function. The graphs are updated dynamically. They continually estimate labels and train models until the algorithm converges. @cite_10 first initialize the model with labeled samples. Then they calculate the k nearest neighbors of the probe within the gallery. They remove the suspect samples and then add the remaining samples to the training set. The procedure is iterated until the algorithm converges. @cite_2 first initialize a CNN with labeled data, and then linearly incorporate pseudo-labeled samples into the training set according to the distance to labeled samples. Then the CNN is retrained with the new training set. Finally, all unlabeled samples have estimated labels and are added into the training set; then they use a validation set to select the best model. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_2"
],
"mid": [
"2799185441",
"2949257576",
"2585635281",
"2739492061"
],
"abstract": [
"We focus on the one-shot learning for video-based person re-Identification (re-ID). Unlabeled tracklets for the person re-ID tasks can be easily obtained by preprocessing, such as pedestrian detection and tracking. In this paper, we propose an approach to exploiting unlabeled tracklets by gradually but steadily improving the discriminative capability of the Convolutional Neural Network (CNN) feature representation via stepwise learning. We first initialize a CNN model using one labeled tracklet for each identity. Then we update the CNN model by the following two steps iteratively: 1. sample a few candidates with most reliable pseudo labels from unlabeled tracklets; 2. update the CNN model according to the selected data. Instead of the static sampling strategy applied in existing works, we propose a progressive sampling method to increase the number of the selected pseudo-labeled candidates step by step. We systematically investigate the way how we should select pseudo-labeled tracklets into the training set to make the best use of them. Notably, the rank-1 accuracy of our method outperforms the state-of-the-art method by 21.46 points (absolute, i.e., 62.67 vs. 41.21 ) on the MARS dataset, and 16.53 points on the DukeMTMC-VideoReID dataset1.",
"The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL",
"The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market- 1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at https: github.com layumi Person-reID_GAN.",
"We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods."
]
} |
1908.09072 | 2969789183 | Accurate camera pose estimation is essential for visual SLAM (VSLAM). This paper presents a novel pose correction method to improve the accuracy of the VSLAM system. Firstly, the relationship between the camera pose estimation error and the bias values of map points is derived based on the optimized function in VSLAM. Secondly, the bias value of the map point is calculated by a statistical method. Finally, the camera pose estimation error is compensated according to the first derived relationship. After the pose correction, procedures of the original system, such as the bundle adjustment (BA) optimization, can be executed as before. Compared with existing methods, our algorithm is compact and effective and can be easily generalized to different VSLAM systems. Additionally, the robustness to system noise of our method is better than that of feature selection methods, because all original system information is preserved in our algorithm while only a subset is employed in the latter. Experimental results on benchmark datasets show that our approach leads to considerable improvements over state-of-the-art algorithms for absolute pose estimation. | A monocular SLAM system, which leverages structural regularity in the Manhattan world and contains three optimization strategies, is proposed in @cite_17. However, to reduce the estimation error of the rotation motion, multiple orthogonal planes must be visible throughout the entire motion estimation process. Unlike @cite_17, which uses only planes, @cite_28 estimates the rotation motion jointly from lines and planes. Once the rotation is found, the translational motion can be recovered by minimizing the de-rotated reprojection error. In @cite_24, the accuracy of BA optimization is enhanced by incorporating feature scale constraints into it. Structural constraints between nearby planes (e.g., right angles) are added to the SLAM system to further reduce the drift and distortion in @cite_2.
Since structural regularity does not exist in all environments, the application scope of this category of methods is limited. | {
"cite_N": [
"@cite_28",
"@cite_2",
"@cite_24",
"@cite_17"
],
"mid": [
"2889958683",
"169439271",
"2165220145",
"2101648351"
],
"abstract": [
"The structural features in Manhattan world encode useful geometric information of parallelism, orthogonality and or coplanarity in the scene. By fully exploiting these structural features, we propose a novel monocular SLAM system which provides accurate estimation of camera poses and 3D map. The foremost contribution of the proposed system is a structural feature-based optimization module which contains three novel optimization strategies. First, a rotation optimization strategy using the parallelism and orthogonality of 3D lines is presented. We propose a global binding method to compute an accurate estimation of the absolute rotation of the camera. Then we propose an approach for calculating the relative rotation to further refine the absolute rotation. Second, a translation optimization strategy leveraging coplanarity is proposed. Coplanar features are effectively identified, and we leverage them by a unified model handling both points and lines to calculate the relative translation, and then the optimal absolute translation. Third, a 3D line optimization strategy utilizing parallelism, orthogonality and coplanarity simultaneously is proposed to obtain an accurate 3D map consisting of structural line segments with low computational complexity. Experiments in man-made environments have demonstrated that the proposed system outperforms existing state-of-the-art monocular SLAM systems in terms of accuracy and robustness.",
"State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"Recent work has demonstrated the benefits of adopting a fully probabilistic SLAM approach in sequential motion and structure estimation from an image sequence. Unlike standard Structure from Motion (SFM) methods, this 'monocular SLAM' approach is able to achieve drift-free estimation with high frame-rate real-time operation, particularly benefitting from highly efficient active feature search, map management and mismatch rejection. A consistent thread in this research on real-time monocular SLAM has been to reduce the assumptions required. In this paper we move towards the logical conclusion of this direction by implementing a fully Bayesian Interacting Multiple Models (IMM) framework which can switch automatically between parameter sets in a dimensionless formulation of monocular SLAM. Remarkably, our approach of full sequential probability propagation means that there is no need for penalty terms to achieve the Occam property of favouring simpler models - this arises automatically. We successfully tackle the known stiffness in on-the-fly monocular SLAM start up without known patterns in the scene. The search regions for matches are also reduced in size with respect to single model EKF increasing the rejection of spurious matches. We demonstrate our method with results on a complex real image sequence with varied motion.",
"In this paper, we describe a system that can carry out simultaneous localization and mapping (SLAM) in large indoor and outdoor environments using a stereo pair moving with 6 DOF as the only sensor. Unlike current visual SLAM systems that use either bearing-only monocular information or 3-D stereo information, our system accommodates both monocular and stereo. Textured point features are extracted from the images and stored as 3-D points if seen in both images with sufficient disparity, or stored as inverse depth points otherwise. This allows the system to map both near and far features: the first provide distance and orientation, and the second provide orientation information. Unlike other vision-only SLAM systems, stereo does not suffer from ldquoscale driftrdquo because of unobservability problems, and thus, no other information such as gyroscopes or accelerometers is required in our system. Our SLAM algorithm generates sequences of conditionally independent local maps that can share information related to the camera motion and common features being tracked. The system computes the full map using the novel conditionally independent divide and conquer algorithm, which allows constant time operation most of the time, with linear time updates to compute the full map. To demonstrate the robustness and scalability of our system, we show experimental results in indoor and outdoor urban environments of 210 m and 140 m loop trajectories, with the stereo camera being carried in hand by a person walking at normal walking speeds of 4--5 km h."
]
} |
1908.08972 | 2969766398 | Deep Neural Networks (DNNs) have achieved state-of-the-art accuracy performance in many tasks. However, recent works have pointed out that the outputs provided by these models are not well-calibrated, seriously limiting their use in critical decision scenarios. In this work, we propose to use a decoupled Bayesian stage, implemented with a Bayesian Neural Network (BNN), to map the uncalibrated probabilities provided by a DNN to calibrated ones, consistently improving calibration. Our results evidence that incorporating uncertainty provides more reliable probabilistic models, a critical condition for achieving good calibration. We report a generous collection of experimental results using high-accuracy DNNs in standardized image classification benchmarks, showing the good performance, flexibility and robust behavior of our approach with respect to several state-of-the-art calibration methods. Code for reproducibility is provided. | On the side of BNNs, @cite_18 connect Bernoulli dropout with BNNs, and @cite_29 formalize Gaussian dropout as a Bayesian approach. In @cite_5 , novel BNNs are proposed, using RealNVP @cite_22 to implement a normalizing flow @cite_56 , auxiliary variables and local reparameterization . None of these approaches measure calibration performance explicitly on DNNs, as we do. For instance, @cite_5 and @cite_21 evaluate uncertainty by training on one dataset and use it on another, expecting a maximum entropy output distribution. More recently, @cite_2 propose a scalable inference algorithm that is also asymptotically accurate as MCMC algorithms and @cite_34 propose a deterministic way of computing the ELBO to reduce the variance of the estimator to 0, allowing for faster convergence. They also propose a hierarchical prior on the parameters. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_56",
"@cite_2",
"@cite_5",
"@cite_34"
],
"mid": [
"2907176385",
"2949496227",
"2589209256",
"1826234144"
],
"abstract": [
"As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other multiplicative noise based variational distributions has not been investigated in depth. We evaluated Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We tested the calibration of the probabilistic predictions of Bayesian convolutional neural networks (CNNs) on MNIST and CIFAR-10. Sampling at prediction time increased the calibration of the DNNs' probabalistic predictions. Sampling weights, whether Gaussian or Bernoulli, led to more robust representation of uncertainty compared to sampling of units. However, using either Gaussian or Bernoulli dropout led to increased test set classification accuracy. Based on these findings we used both Bernoulli dropout and Gaussian dropconnect concurrently, which we show approximates the use of a spike-and-slab variational distribution without increasing the number of learned parameters. We found that spike-and-slab sampling had higher test set performance than Gaussian dropconnect and more robustly represented its uncertainty compared to Bernoulli dropout.",
"Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes equals the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors entailing rich structures, including Gaussian processes and implicit stochastic processes. Empirically, we find fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and scale to large datasets.",
"As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures.",
"We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. The basic component of it is a convolutional neural network which works in a fully-convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is a part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor of reducing the complexity of the model was the utilization of depthwise separable convolution, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Since the implementation of deep learning became practical, text detection techniques are based on neural networks. A deep learning based method @cite_2 uses fully convolutional network (FCN) to find a probability that pixels belong to a text area. After applying maximally stable extremal regions (MSER), a shortened FCN was utilized to acquire the character centroids and with the help of intensity and geometric criteria remove false candidates. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2339589954",
"2740819638",
"2952365771",
"2797933562"
],
"abstract": [
"In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.",
"Scene text detection has attracted great attention these years. Text potentially exist in a wide variety of images or videos and play an important role in understanding the scene. In this paper, we present a novel text detection algorithm which is composed of two cascaded steps: (1) a multi-scale fully convolutional neural network (FCN) is proposed to extract text block regions, (2) a novel instance (word or line) aware segmentation is designed to further remove false positives and obtain word instances. The proposed algorithm can accurately localize word or text line in arbitrary orientations, including curved text lines which cannot be handled in a lot of other frameworks. Our algorithm achieved state-of-the-art performance in ICDAR 2013 (IC13), ICDAR 2015 (IC15) and CUTE80 and Street View Text (SVT) benchmark datasets.",
"In this paper, we propose a novel approach for text detec- tion in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine pro- cedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Fi- nally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple ori- entations, languages and fonts. The proposed method con- sistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.",
"Fully convolutional neural network (FCN) has been dominating the game of face detection task for a few years with its congenital capability of sliding-window-searching with shared kernels, which boiled down all the redundant calculation, and most recent state-of-the-art methods such as Faster-RCNN, SSD, YOLO and FPN use FCN as their backbone. So here comes one question: Can we find a universal strategy to further accelerate FCN with higher accuracy, so could accelerate all the recent FCN-based methods? To analyze this, we decompose the face searching space into two orthogonal directions, scale' and spatial'. Only a few coordinates in the space expanded by the two base vectors indicate foreground. So if FCN could ignore most of the other points, the searching space and false alarm should be significantly boiled down. Based on this philosophy, a novel method named scale estimation and spatial attention proposal ( @math ) is proposed to pay attention to some specific scales and valid locations in the image pyramid. Furthermore, we adopt a masked-convolution operation based on the attention result to accelerate FCN calculation. Experiments show that FCN-based method RPN can be accelerated by about @math with the help of @math and masked-FCN and at the same time it can also achieve the state-of-the-art on FDDB, AFW and MALF face detection benchmarks as well."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. The basic component of it is a convolutional neural network which works in a fully-convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is a part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor of reducing the complexity of the model was the utilization of depthwise separable convolution, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Shi in @cite_4 proposed to find segments of words and connections between them. The whole detection process of segments and links was done in a single pass of a CNN named SegLink in a fully-convolutional manner with depth-first search (DFS) and bounding box creation postprocessing. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2605076167",
"2950143680",
"2472159136",
"2950012948"
],
"abstract": [
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line, A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.",
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.",
"We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14 , 90.26 , and 86.77 respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.",
"In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content, and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words to different semantic fragments and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results on benchmark databases of bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and Microsoft COCO databases achieve the state-of-the-art performances."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. The basic component of it is a convolutional neural network which works in a fully-convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is a part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor of reducing the complexity of the model was the utilization of depthwise separable convolution, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Zhou proposed a similar strategy in @cite_3 where a variety of postprocessing steps were eliminated by performing most of the calculations in a single U-Net-like @cite_12 FCN named EAST which outputs word box parameters by itself. Results of computations are filtered by non-maximum suppression (NMS) and thresholding. The length of the word to be detected is limited by a receptive field of output pixels. | {
"cite_N": [
"@cite_12",
"@cite_3"
],
"mid": [
"2963977642",
"2472159136",
"2395360388",
"2126053700"
],
"abstract": [
"We present a novel single-shot text detector that directly outputs word-level bounding boxes in a natural image. We propose an attention mechanism which roughly identifies text regions via an automatically learned attentional map. This substantially suppresses background interference in the convolutional features, which is the key to producing accurate inference of words, particularly at extremely small sizes. This results in a single model that essentially works in a coarse-to-fine manner. It departs from recent FCN-based text detectors which cascade multiple FCN models to achieve an accurate prediction. Furthermore, we develop a hierarchical inception module which efficiently aggregates multi-scale inception features. This enhances local details, and also encodes strong context information, allowing the detector to work reliably on multi-scale and multi-orientation text with single-scale images. Our text detector achieves an F-measure of 77 on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28]. Demo is available at: http: sstd.whuang.org .",
"We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14 , 90.26 , and 86.77 respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.",
"In this paper, we develop a novel unified framework called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the inception region proposal network (Inception-RPN) and design a set of text characteristic prior bounding boxes to achieve high word recall with only hundred level candidate proposals. Next, we present a powerful textdetection network that embeds ambiguous text category (ATC) information and multilevel region-of-interest pooling (MLRP) for text and non-text classification and accurate localization. Finally, we apply an iterative bounding box voting scheme to pursue high recall in a complementary manner and introduce a filtering algorithm to retain the most suitable bounding box, while removing redundant inner and outer boxes for each text instance. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"The text of this paper has passed across many Internet routers on its way to the reader, but some routers will not pass it along unfettered because of censored words it contains. We present two sets of results: 1) Internet measurements of keyword filtering by the Great “Firewall” of China (GFC); and 2) initial results of using latent semantic analysis as an efficient way to reproduce a blacklist of censored words via probing. Our Internet measurements suggest that the GFC’s keyword filtering is more a panopticon than a firewall, i.e., it need not block every illicit word, but only enough to promote self-censorship. China’s largest ISP, ChinaNET, performed 83.3 of all filtering of our probes, and 99.1 of all filtering that occurred at the first hop past the Chinese border. Filtering occurred beyond the third hop for 11.8 of our probes, and there were sometimes as many as 13 hops past the border to a filtering router. Approximately 28.3 of the Chinese hosts we sent probes to were reachable along paths that were not filtered at all. While more tests are needed to provide a definitive picture of the GFC’s implementation, our results disprove the notion that GFC keyword filtering is a firewall strictly at the border of China’s Internet. While evading a firewall a single time defeats its purpose, it would be necessary to evade a panopticon almost every time. Thus, in lieu of evasion, we propose ConceptDoppler, an architecture for maintaining a censorship “weather report” about what keywords are filtered over time. Probing with potentially filtered keywords is arduous due to the GFC’s complexity and can be invasive if not done efficiently. Just as an understanding of the mixing of gases preceded effective weather reporting, understanding of the relationship between keywords and concepts is essential for tracking Internet censorship. 
We show that LSA can effectively pare down a corpus of text and cluster filtered keywords for efficient probing, present 122 keywords we discovered by probing, and underscore the need for tracking and studying censorship blacklists by discovering some surprising blacklisted keywords such as l�‡ (conversion rate), �„K— (Mein Kampf), and ýE0(NfT�� (International geological scientific federation (Beijing))."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. The basic component of it is a convolutional neural network which works in a fully-convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is a part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor of reducing the complexity of the model was the utilization of depthwise separable convolution, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | An ArbiText network @cite_8 based on the Single Shot Detector (SSD) applies the circle anchors to replace bounding boxes which should be more robust to orientation variations. Authors also applied pyramid pooling to preserve low-level features in deeper layers. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2807940804",
"2895911141",
"2750317406",
"2771082502"
],
"abstract": [
"We present a small object sensitive method for object detection. Our method is built based on SSD (Single Shot MultiBox Detector ( 2016)), a simple but effective deep neural network for image object detection. The discrete nature of anchor mechanism used in SSD, however, may cause misdetection for the small objects located at gaps between the anchor boxes. SSD performs better for small object detection after circular shifts of the input image. Therefore, auxiliary feature maps are generated by conducting circular shifts over lower extra feature maps in SSD for small-object detection, which is equivalent to shifting the objects in order to fit the locations of anchor boxes. We call our proposed system Shifted SSD. Moreover, pinpoint accuracy of localization is of vital importance to small objects detection. Hence, two novel methods called Smooth NMS and IoU-Prediction module are proposed to obtain more precise locations. Then for video sequences, we generate trajectory hypothesis to obtain predicted locations in a new frame for further improved performance. Experiments conducted on PASCAL VOC 2007, along with MS COCO, KITTI and our small object video datasets, validate that both mAP and recall are improved with different degrees and the speed is almost the same as SSD.",
"In this paper, we propose a novel face detection network with three novel contributions that address three key aspects of face detection, including better feature learning, progressive loss design and anchor assign based data augmentation, respectively. First, we propose a Feature Enhance Module (FEM) for enhancing the original feature maps to extend the single shot detector to dual shot detector. Second, we adopt Progressive Anchor Loss (PAL) computed by two different sets of anchors to effectively facilitate the features. Third, we use an Improved Anchor Matching (IAM) by integrating novel anchor assign strategy into data augmentation to provide better initialization for the regressor. Since these techniques are all related to the two-stream design, we name the proposed network as Dual Shot Face Detector (DSFD). Extensive experiments on popular benchmarks, WIDER FACE and FDDB, demonstrate the superiority of DSFD over the state-of-the-art face detectors.",
"This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"Arbitrary-oriented text detection in the wild is a very challenging task, due to the aspect ratio, scale, orientation, and illumination variations. In this paper, we propose a novel method, namely Arbitrary-oriented Text (or ArbText for short) detector, for efficient text detection in unconstrained natural scene images. Specifically, we first adopt the circle anchors rather than the rectangular ones to represent bounding boxes, which is more robust to orientation variations. Subsequently, we incorporate a pyramid pooling module into the Single Shot MultiBox Detector framework, in order to simultaneously explore the local and global visual information, which can, therefore, generate more confidential detection results. Experiments on established scene-text datasets, such as the ICDAR 2015 and MSRA-TD500 datasets, have demonstrated the supe rior performance of the proposed method, compared to the state-of-the-art approaches."
]
} |
1908.08994 | 2969915736 | Text detection in natural images is a challenging but necessary task for many applications. Existing approaches utilize large deep convolutional neural networks making it difficult to use them in real-world tasks. We propose a small yet relatively precise text extraction method. The basic component of it is a convolutional neural network which works in a fully-convolutional manner and produces results at multiple scales. Each scale output predicts whether a pixel is a part of some word, its geometry, and its relation to neighbors at the same scale and between scales. The key factor of reducing the complexity of the model was the utilization of depthwise separable convolution, linear bottlenecks, and inverted residuals. Experiments on public datasets show that the proposed network can effectively detect text while keeping the number of parameters in the range of 1.58 to 10.59 million in different configurations. | Liu in @cite_13 combined text detection and recognition parts in one end-to-end CNN. The backbone of the network is the Feature Pyramid Network which incorporates residual operations from ResNet-50 @cite_15 . The network, in the text detection part, outputs text probability, bounding box distances in four directions, and a rotation angle of the bounding box. The smallest real-time version contains 29 million parameters. | {
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"2962781062",
"2725486421",
"2462457117",
"2565639579"
],
"abstract": [
"In this paper, we present a new Mask R-CNN based text detection approach which can robustly detect multi-oriented and curved text from natural scene images in a unified manner. To enhance the feature representation ability of Mask R-CNN for text detection tasks, we propose to use the Pyramid Attention Network (PAN) as a new backbone network of Mask R-CNN. Experiments demonstrate that PAN can suppress false alarms caused by text-like backgrounds more effectively. Our proposed approach has achieved superior performance on both multi-oriented (ICDAR-2015, ICDAR-2017 MLT) and curved (SCUT-CTW1500) text detection benchmark tasks by only using single-scale and single-model testing.",
"In this paper, we propose a novel method called Rotational Region CNN (R2CNN) for detecting arbitrary-oriented texts in natural scene images. The framework is based on Faster R-CNN [1] architecture. First, we use the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose the texts with different orientations. Second, for each axis-aligned text box proposed by RPN, we extract its pooled features with different pooled sizes and the concatenated features are used to simultaneously predict the text non-text score, axis-aligned box and inclined minimum area box. At last, we use an inclined non-maximum suppression to get the detection results. Our approach achieves competitive results on text detection benchmarks: ICDAR 2015 and ICDAR 2013.",
"Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40 vs the best previous precision of 74.00 for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14 accuracy on CUB-2011.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available."
]
} |
1908.08979 | 2969556441 | Various psychological factors affect how individuals express emotions. Yet, when we collect data intended for use in building emotion recognition systems, we often try to do so by creating paradigms that are designed just with a focus on eliciting emotional behavior. Algorithms trained with these types of data are unlikely to function outside of controlled environments because our emotions naturally change as a function of these other factors. In this work, we study how the multimodal expressions of emotion change when an individual is under varying levels of stress. We hypothesize that stress produces modulations that can hide the true underlying emotions of individuals and that we can make emotion recognition algorithms more generalizable by controlling for variations in stress. To this end, we use adversarial networks to decorrelate stress modulations from emotion representations. We study how stress alters acoustic and lexical emotional predictions, paying special attention to how modulations due to stress affect the transferability of learned emotion recognition models across domains. Our results show that stress is indeed encoded in trained emotion classifiers and that this encoding varies across levels of emotions and across the lexical and acoustic modalities. Our results also show that emotion recognition models that control for stress during training have better generalizability when applied to new domains, compared to models that do not control for stress during training. We conclude that it is necessary to consider the effect of extraneous psychological factors when building and testing emotion recognition models. | One group of methods has considered confounding factors that are either singularly labeled or cannot be labeled. Ben-David et al. @cite_49 showed that a classifier trained to predict the sentiment of reviews can implicitly learn to predict the category of the products.
The authors used an adversarial multi-task classifier to learn domain-invariant sentiment representations. Shinohara @cite_38 used an adversarial approach to train noise-robust networks for automatic speech recognition. They used domain (i.e., background noise) as the adversarial task while training the model to obtain representations that are both senone-discriminative and domain-invariant. In emotion recognition applications, @cite_44 used domain adversarial networks to improve cross-corpus generalization for emotion recognition tasks. | {
"cite_N": [
"@cite_44",
"@cite_38",
"@cite_49"
],
"mid": [
"2767382337",
"2964139811",
"2962687275",
"2767019926"
],
"abstract": [
"We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https: github.com mil-tokyo MCD_DA",
"It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like are relatively rare comparing to other labels like happy or sad . In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework with a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN's performance. Empirical results show that we can obtain 5 10 increase in the classification accuracy after employing the GAN-based data augmentation techniques."
]
} |
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | The task of reconstructing a full classical description -- the density matrix @math -- of a @math -dimensional quantum system from experimental data is one of the most fundamental problems in quantum statistics, see e.g. @cite_52 @cite_13 @cite_12 @cite_31 and references therein. Sample-optimal protocols, i.e. estimation techniques that get by with a minimal number of measurement repetitions, have only been developed recently. Information-theoretic bounds assert that an order of @math state copies are necessary to fully reconstruct @math @cite_26 . Constructive protocols @cite_29 @cite_26 saturate this bound, but require entangled circuits and measurements that act on all state copies simultaneously. More tractable measurement procedures, where each copy of the state is measured independently, require an order of @math measurements @cite_26 . This more stringent bound is saturated by low rank matrix recovery @cite_4 @cite_44 @cite_40 and projected least squares estimation @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_52",
"@cite_44",
"@cite_40",
"@cite_31",
"@cite_13",
"@cite_12"
],
"mid": [
"2649051464",
"2963583445",
"1529624360",
"2797355014"
],
"abstract": [
"It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. Previously, it was known only that estimating states to error @math in trace distance required @math copies for a @math -dimensional density matrix of rank @math . Here, we give a theoretical measurement scheme (POVM) that requires @math copies to estimate @math to error @math in infidelity, and a matching lower bound up to logarithmic factors. This implies @math copies suffice to achieve error @math in trace distance. We also prove that for independent (product) measurements, @math copies are necessary in order to achieve error @math in infidelity. For fixed @math , our measurement can be implemented on a quantum computer in time polynomial in @math .",
"Abstract We study the recovery of Hermitian low rank matrices X ∈ C n × n from undersampled measurements via nuclear norm minimization. We consider the particular scenario where the measurements are Frobenius inner products with random rank-one matrices of the form a j a j ⁎ for some measurement vectors a 1 , … , a m , i.e., the measurements are given by b j = tr ( X a j a j ⁎ ) . The case where the matrix X = x x ⁎ to be recovered is of rank one reduces to the problem of phaseless estimation (from measurements b j = | 〈 x , a j 〉 | 2 ) via the PhaseLift approach, which has been introduced recently. We derive bounds for the number m of measurements that guarantee successful uniform recovery of Hermitian rank r matrices, either for the vectors a j , j = 1 , … , m , being chosen independently at random according to a standard Gaussian distribution, or a j being sampled independently from an (approximate) complex projective t-design with t = 4 . In the Gaussian case, we require m ≥ C r n measurements, while in the case of 4-designs we need m ≥ Cr n log ( n ) . Our results are uniform in the sense that one random choice of the measurement vectors a j guarantees recovery of all rank r-matrices simultaneously with high probability. Moreover, we prove robustness of recovery under perturbation of the measurements by noise. The result for approximate 4-designs generalizes and improves a recent bound on phase retrieval due to Gross, Krahmer and Kueng. In addition, it has applications in quantum state tomography. Our proofs employ the so-called bowling scheme which is based on recent ideas by Mendelson and Koltchinskii.",
"We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rdlog^2d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed.",
"We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with @math constraint matrices, each of dimension @math , rank @math , and sparsity @math . The first algorithm assumes an input model where one is given access to entries of the matrices at unit cost. We show that it has run time @math , where @math is the error. This gives an optimal dependence in terms of @math and quadratic improvement over previous quantum algorithms when @math . The second algorithm assumes a fully quantum input model in which the matrices are given as quantum states. We show that its run time is @math , with @math an upper bound on the trace-norm of all input matrices. In particular the complexity depends only poly-logarithmically in @math and polynomially in @math . We apply the second SDP solver to the problem of learning a good description of a quantum state with respect to a set of measurements: Given @math measurements and copies of an unknown state @math , we show we can find in time @math a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as @math on the @math measurements, up to error @math . The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by \"quantizing\" classical SDP solvers based on the matrix multiplicative weight method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension, which could be of independent interest."
]
} |
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Restricting attention to highly structured subsets of quantum states sometimes allows for overcoming the exponential bottleneck that plagues general tomography. Matrix product state (MPS) tomography @cite_41 is the most prominent example of such an approach. It only requires a polynomial number of samples, provided that the underlying quantum state is well approximated by an MPS with low bond dimension. In quantum many-body physics this assumption is often justifiable @cite_54 . However, MPS representations of general states have exponentially large bond dimension. In this case, MPS tomography offers no advantage over general tomography. | {
"cite_N": [
"@cite_41",
"@cite_54"
],
"mid": [
"1988304269",
"2171253836",
"1529624360",
"2565722603"
],
"abstract": [
"We construct a practically implementable classical processing for the Bennett-Brassard 1984 (BB84) protocol and the six-state protocol that fully utilizes the accurate channel estimation method, which is also known as the quantum tomography. Our proposed processing yields at least as high a key rate as the standard processing by Shor and Preskill. We show two examples of quantum channels over which the key rate of our proposed processing is strictly higher than the standard processing. In the second example, the BB84 protocol with our proposed processing yields a positive key rate even though the so-called error rate is higher than the 25 limit.",
"Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n . But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n . This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.",
"We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rdlog^2d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed.",
"Traditionally quantum state tomography is used to characterize a quantum state, but it becomes exponentially hard with the system size. An alternative technique, matrix product state tomography, is shown to work well in practical situations."
]
} |
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Direct fidelity estimation is a procedure that allows for predicting a single pure target fidelity @math up to accuracy @math . The best-known technique is based on a few Pauli measurements that are selected randomly using importance sampling @cite_49 . The required number of samples depends on the target: it can range from a dimension-independent order of @math (if @math is a stabilizer state) to roughly @math in the worst case. | {
"cite_N": [
"@cite_49"
],
"mid": [
"2090368878",
"2024405371",
"2303223340",
"1966394605"
],
"abstract": [
"We describe a simple method for certifying that an experimental device prepares a desired quantum state ρ. Our method is applicable to any pure state ρ, and it provides an estimate of the fidelity between ρ and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels.",
"Abstract Modifications of Prony's classical technique for estimating rate constants in exponential fitting problems have many contemporary applications. In this article the consistency of Prony's method and of related algorithms based on maximum likelihood is discussed as the number of observations n → ∞ by considering the simplest possible models for fitting sums of exponentials to observed data. Two sampling regimes are relevant, corresponding to transient problems and problems of frequency estimation, each of which is associated with rather different kinds of behavior. The general pattern is that the stronger results are obtained for the frequency estimation problem. However, the algorithms considered are all scaling dependent and consistency is not automatic. A new feature that emerges is the importance of an appropriate choice of scale in order to ensure consistency of the estimates in certain cases. The tentative conclusion is that algorithms referred to as Objective function Reweighting Algorithms ...",
"We propose a new method for pointwise estimation of monotone, uni- modal and U-shaped failure rates, under a right-censoring mechanism, using non- parametric likelihood ratios. The asymptotic distribution of the likelihood ratio is pivotal, though non-standard, and can therefore be used to construct asymptotic confidence intervals for the failure rate at a point of interest, via inversion. Major advantages of the new method lie in the facts that it completely avoids estima- tion of nuisance parameters, or the choice of a bandwidth tuning parameter, and is extremely easy to implement. The new method is shown to perform competi- tively in simulations, and is illustrated on a data set involving time to diagnosis of schizophrenia in the Jerusalem Perinatal Cohort Schizophrenia Study.",
"Direct imaging technology is an effective and convenient method for the estimation of bubble size distribution (BSD). However, overlapping bubble has an influence on BSD when gas holdup is more than 1 . In this paper, we present a new method of overlapping elliptical bubble recognition to determine bubble size. The method mainly includes two steps: contour segmentation and segment grouping. Contour segmentation is on the assumption that the concave points in the dominant point sequence are always the connecting points, and segment grouping is mainly based on the average distance deviation criterion. Both simulated images and real bubble images are used to evaluate this new method. The results show that it is effective in the recognition of overlapping elliptical bubbles and have a potential in other elliptical object recognition. In the last, two methods are used for BSD estimation. It is found that the bubble size (such d10 or d32) estimated by the ignore method is slightly smaller than that estimated by the recognition method."
]
} |
1908.08909 | 2969984239 | Predicting features of complex, large-scale quantum systems is essential to the characterization and engineering of quantum architectures. We present an efficient approach for predicting a large number of linear features using classical shadows obtained from very few quantum measurements. This approach is guaranteed to accurately predict @math linear functions with bounded Hilbert-Schmidt norm from only @math measurement repetitions. This sampling rate is completely independent of the system size and saturates fundamental lower bounds from information theory. We support our theoretical findings with numerical experiments over a wide range of problem sizes (2 to 162 qubits). These highlight advantages compared to existing machine learning approaches. | Shadow tomography aims at simultaneously estimating the probability associated with @math 2-outcome measurements up to accuracy @math : @math , where each @math is a positive semidefinite matrix with operator norm at most one @cite_15 @cite_5 @cite_47 . This may be viewed as a generalization of direct fidelity estimation. The best existing result is due to Aaronson @cite_47 who showed that copies of the unknown state suffice to achieve this task (the scaling symbol @math suppresses logarithmic expressions in other problem-specific parameters). In a nutshell, his protocol is based on gently measuring the 2-outcome measurements one-by-one and subsequently (partially) reverting the perturbative effects a measurement exerts on quantum states. This task is achieved by explicit quantum circuits of exponential size that act on all copies of the unknown state simultaneously. This rather intricate procedure bypasses the no-go result advertised in Theorem and results in a sampling rate that is independent of the measurements in question -- only their cardinality @math matters. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_47"
],
"mid": [
"2963956175",
"2171253836",
"1988304269",
"1971384536"
],
"abstract": [
"We introduce the problem of *shadow tomography*: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E1,…,EM, estimate the probability that Ei accepts ρ, to within additive error e, for each of the M measurements. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only O( e−5·log4 M·logD) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on *all* accepting rejecting circuits of some fixed polynomial size, by measuring only nO( 1) copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.",
"Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n . But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n . This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.",
"We construct a practically implementable classical processing for the Bennett-Brassard 1984 (BB84) protocol and the six-state protocol that fully utilizes the accurate channel estimation method, which is also known as the quantum tomography. Our proposed processing yields at least as high a key rate as the standard processing by Shor and Preskill. We show two examples of quantum channels over which the key rate of our proposed processing is strictly higher than the standard processing. In the second example, the BB84 protocol with our proposed processing yields a positive key rate even though the so-called error rate is higher than the 25 limit.",
"Direct quantum state tomography—deducing the state of a system from measurements—is mostly unfeasible due to the exponential scaling of measurement number with system size. The authors present two new schemes, which scale linearly in this respect, and can be applied to a wide range of quantum states."
]
} |
1908.08474 | 2969551072 | The Shapley value has become a popular method to attribute the prediction of a machine-learning model on an input to its base features. The Shapley value [1] is known to be the unique method that satisfies certain desirable properties, and this motivates its use. Unfortunately, despite this uniqueness result, there are a multiplicity of Shapley values used in explaining a model's prediction. This is because there are many ways to apply the Shapley value that differ in how they reference the model, the training data, and the explanation context. In this paper, we study an approach that applies the Shapley value to conditional expectations (CES) of sets of features (cf. [2]) that subsumes several prior approaches within a common framework. We provide the first algorithm for the general version of CES. We show that CES can result in counterintuitive attributions in theory and in practice (we study a diabetes prediction task); for instance, CES can assign non-zero attributions to features that are not referenced by the model. In contrast, we show that an approach called the Baseline Shapley (BS) does not exhibit counterintuitive attributions; we support this claim with a uniqueness (axiomatic) result. We show that BS is a special case of CES, and CES with an independent feature distribution coincides with a randomized version of BS. Thus, BS fits into the CES framework, but does not suffer from many of CES's deficiencies. | The first and second approaches solve a different problem (of feature importance across all the training data), and we will ignore them for the most part. Notice that the rest are solving the attribution problem (@cite_13 unifies several of these methods under a common framework based on conditional expectations), and they all apply the Shapley value, but they differ in how they 'switch a feature off', and consequently give different results.
In this paper, we attempt to pick between these methods using the lens of axiomatization. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2662684858",
"2120708938",
"2153929442",
"2137901802"
],
"abstract": [
"Note that a newer expanded version of this paper is now available at: arXiv:1802.03888 It is critical in many applications to understand what features are important for a model, and why individual predictions were made. For tree ensemble methods these questions are usually answered by attributing importance values to input features, either globally or for a single prediction. Here we show that current feature attribution methods are inconsistent, which means changing the model to rely more on a given feature can actually decrease the importance assigned to that feature. To address this problem we develop fast exact solutions for SHAP (SHapley Additive exPlanation) values, which were recently shown to be the unique additive feature attribution method based on conditional expectations that is both consistent and locally accurate. We integrate these improvements into the latest version of XGBoost, demonstrate the inconsistencies of current methods, and show how using SHAP values results in significantly improved supervised clustering performance. Feature importance values are a key part of understanding widely used models such as gradient boosting trees and random forests, so improvements to them have broad practical implications.",
"The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the \"in-domain\" test data is drawn from a distribution that is related, but not identical, to the \"out-of-domain\" distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain.",
"Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal PY changes, while the conditional PX Y stays the same (target shift), (2) the marginal PY is fixed, while the conditional PX Y changes with certain constraints (conditional shift), and (3) the marginal PY changes, and the conditional PX Y changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.",
"Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often times we have very few or no labeled data from the test or target distribution but may have plenty of labeled data from multiple related sources with different distributions. The difference in distributions may be both in marginal and conditional probabilities. Most of the existing domain adaptation work focuses on the marginal probability distribution difference between the domains, assuming that the conditional probabilities are similar. However in many real world applications, conditional probability distribution differences are as commonplace as marginal probability differences. In this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage), with the target domain data. The weights for minimizing the marginal probability differences are estimated independently, while the weights for minimizing conditional probability differences are computed simultaneously by exploiting the potential interaction among multiple sources. We also provide a theoretical analysis on the generalization performance of the proposed multi-source domain adaptation formulation using the weighted Rademacher complexity measure. Empirical comparisons with existing state-of-the-art domain adaptation methods using three real-world datasets demonstrate the effectiveness of the proposed approach."
]
} |
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5 error reduction on Shanghaitech dataset and 24.9 on UCF-QNRF dataset against the state-of-the-art methods. | Crowd Counting: Numerous deep learning based methods @cite_33 @cite_13 @cite_20 @cite_40 @cite_46 @cite_6 @cite_10 have been proposed for crowd counting. These methods have various network structures and the mainstream is a multiscale architecture, which extracts multiple features from different columns/branches of networks to handle the scale variation of people. For instance, @cite_21 combined a deep network and a shallow network to learn scale-robust features. @cite_0 developed a multi-column CNN to generate density maps.
HydraCNN @cite_45 fed a pyramid of image patches into networks to estimate the count. CP-CNN @cite_3 proposed a Contextual Pyramid CNN to incorporate the global and local contextual information for crowd counting. @cite_4 built an encoder-decoder network with multiple scale aggregation modules. However, the issue of the huge variation of people's scales is still far from being fully solved. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_10",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_40",
"@cite_45",
"@cite_46",
"@cite_13",
"@cite_20"
],
"mid": [
"2741077351",
"2913127348",
"2514654788",
"2963035940"
],
"abstract": [
"We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd.",
"Gatherings of thousands to millions of people frequently occur for an enormous variety of events, and automated counting of these high-density crowds is useful for safety, management, and measuring significance of an event. In this work, we show that the regularly accepted labeling scheme of crowd density maps for training deep neural networks is less effective than our alternative inverse k-nearest neighbor (i @math NN) maps, even when used directly in existing state-of-the-art network structures. We also provide a new network architecture MUD-i @math NN, which uses multi-scale upsampling via transposed convolutions to take full advantage of the provided i @math NN labeling. This upsampling combined with the i @math NN maps further improves crowd counting accuracy. Our new network architecture performs favorably in comparison with the state-of-the-art. However, our labeling and upsampling techniques are generally applicable to existing crowd counting architectures.",
"Crowd counting is a very challenging task in crowded scenes due to heavy occlusions, appearance variations and perspective distortions. Current crowd counting methods typically operate on an image patch level with overlaps, then sum over the patches to get the final count. In this paper, we propose an end-to-end convolutional neural network (CNN) architecture that takes a whole image as its input and directly outputs the counting result. While making use of sharing computations over overlapping regions, our method takes advantages of contextual information when predicting both local and global count. In particular, we first feed the image to a pre-trained CNN to get a set of high level features. Then the features are mapped to local counting numbers using recurrent network layers with memory cells. We perform the experiments on several challenging crowd counting datasets, which achieve the state-of-the-art results and demonstrate the effectiveness of our method.",
"We present a novel method called Contextual Pyramid CNN (CP-CNN) for generating high-quality crowd density and count estimation by explicitly incorporating global and local contextual information of crowd images. The proposed CP-CNN consists of four modules: Global Context Estimator (GCE), Local Context Estimator (LCE), Density Map Estimator (DME) and a Fusion-CNN (F-CNN). GCE is a VGG-16 based CNN that encodes global context and it is trained to classify input images into different density classes, whereas LCE is another CNN that encodes local context information and it is trained to perform patch-wise classification of input images into different density classes. DME is a multi-column architecture-based CNN that aims to generate high-dimensional feature maps from the input image which are fused with the contextual information estimated by GCE and LCE using F-CNN. To generate high resolution and high-quality density maps, F-CNN uses a set of convolutional and fractionally-strided convolutional layers and it is trained along with the DME in an end-to-end fashion using a combination of adversarial loss and pixellevel Euclidean loss. Extensive experiments on highly challenging datasets show that the proposed method achieves significant improvements over the state-of-the-art methods."
]
} |
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5 error reduction on Shanghaitech dataset and 24.9 on UCF-QNRF dataset against the state-of-the-art methods. | Conditional Random Fields: In the field of computer vision, CRFs have been exploited to refine the features and outputs of convolutional neural networks (CNN) with a message passing mechanism @cite_25 . For instance, @cite_29 used CRFs to refine the semantic segmentation maps of CNN by modeling the relationship among pixels. @cite_44 fused multiple features with Attention-Gated CRFs to produce richer representations for contour prediction. @cite_28 introduced an inter-view message passing module based on CRFs to enhance the view-specific features for action recognition. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_28",
"@cite_25"
],
"mid": [
"1909515874",
"2254177447",
"2962872526",
"2114930007"
],
"abstract": [
"Large amounts of available training data and increasing computing power have led to the recent success of deep convolutional neural networks (CNN) on a large number of applications. In this paper, we propose an effective semantic pixel labelling using CNN features, hand-crafted features and Conditional Random Fields (CRFs). Both CNN and hand-crafted features are applied to dense image patches to produce per-pixel class probabilities. The CRF infers a labelling that smooths regions while respecting the edges present in the imagery. The method is applied to the ISPRS 2D semantic labelling challenge dataset with competitive classification accuracy.",
"Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.",
"Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.",
"Conditional Random Fields (CRFs) are an effective tool for a variety of different data segmentation and labeling tasks including visual scene interpretation, which seeks to partition images into their constituent semantic-level regions and assign appropriate class labels to each region. For accurate labeling it is important to capture the global context of the image as well as local information. We introduce a CRF based scene labeling model that incorporates both local features and features aggregated over the whole image or large sections of it. Secondly, traditional CRF learning requires fully labeled datasets which can be costly and troublesome to produce. We introduce a method for learning CRFs from datasets with many unlabeled nodes by marginalizing out the unknown labels so that the log-likelihood of the known ones can be maximized by gradient ascent. Loopy Belief Propagation is used to approximate the marginals needed for the gradient and log-likelihood calculations and the Bethe free-energy approximation to the log-likelihood is monitored to control the step size. Our experimental results show that effective models can be learned from fragmentary labelings and that incorporating top-down aggregate features significantly improves the segmentations. The resulting segmentations are compared to the state-of-the-art on three different image datasets."
]
} |
1908.08692 | 2969835258 | Automatic estimation of the number of people in unconstrained crowded scenes is a challenging task and one major difficulty stems from the huge scale variation of people. In this paper, we propose a novel Deep Structured Scale Integration Network (DSSINet) for crowd counting, which addresses the scale variation of people by using structured feature representation learning and hierarchically structured loss function optimization. Unlike conventional methods which directly fuse multiple features with weighted average or concatenation, we first introduce a Structured Feature Enhancement Module based on conditional random fields (CRFs) to refine multiscale features mutually with a message passing mechanism. In this module, each scale-specific feature is considered as a continuous random variable and passes complementary information to refine the features at other scales. Second, we utilize a Dilated Multiscale Structural Similarity loss to enforce our DSSINet to learn the local correlation of people's scales within regions of various size, thus yielding high-quality density maps. Extensive experiments on four challenging benchmarks well demonstrate the effectiveness of our method. Specifically, our DSSINet achieves improvements of 9.5 error reduction on Shanghaitech dataset and 24.9 on UCF-QNRF dataset against the state-of-the-art methods. | Multiscale Structural Similarity: MS-SSIM @cite_37 is a widely used metric for image quality assessment. Its formula is based on the luminance, contrast and structure comparisons between the multiscale regions of two images. In @cite_22 , MS-SSIM loss has been successfully applied in image restoration tasks (e.g., image denoising and super-resolution), but its effectiveness has not been verified in high-level tasks (e.g., crowd counting).
Recently, @cite_4 combined Euclidean loss and SSIM loss @cite_30 to optimize their network for crowd counting, but they can only capture the local correlation in regions with a fixed size. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_22"
],
"mid": [
"2174358748",
"2963713691",
"2064076387",
"2028790650"
],
"abstract": [
"Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM). Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training deterministic and stochastic autoencoders. For three different architectures, we collected human judgments of the quality of image reconstructions. Observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures ( @math and @math distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. Just as computer vision has advanced through the use of convolutional architectures that mimic the structure of the mammalian visual system, we argue that significant additional advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.",
"Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM) [1]. Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training autoencoders. Human observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures (L 1 and L 2 distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. We argue that significant advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.",
"In this paper, we analyse two well-known objective image quality metrics, the peak-signal-to-noise ratio (PSNR) as well as the structural similarity index measure (SSIM), and we derive a simple mathematical relationship between them which works for various kinds of image degradations such as Gaussian blur, additive Gaussian white noise, jpeg and jpeg2000 compression. A series of tests realized on images extracted from the Kodak database gives a better understanding of the similarity and difference between the SSIM and the PSNR.",
"A super-resolution (SR) method based on compressive sensing (CS), structural self-similarity (SSSIM), and dictionary learning is proposed for reconstructing remote sensing images. This method aims to identify a dictionary that represents high resolution (HR) image patches in a sparse manner. Extra information from similar structures which often exist in remote sensing images can be introduced into the dictionary, thereby enabling an HR image to be reconstructed using the dictionary in the CS framework. We use the K-Singular Value Decomposition method to obtain the dictionary and the orthogonal matching pursuit method to derive sparse representation coefficients. To evaluate the effectiveness of the proposed method, we also define a new SSSIM index, which reflects the extent of SSSIM in an image. The most significant difference between the proposed method and traditional sample-based SR methods is that the proposed method uses only a low-resolution image and its own interpolated image instead of other HR images in a database. We simulate the degradation mechanism of a uniform 2 × 2 blur kernel plus a downsampling by a factor of 2 in our experiments. Comparative experimental results with several image-quality-assessment indexes show that the proposed method performs better in terms of the SR effectivity and time efficiency. In addition, the SSSIM index is strongly positively correlated with the SR quality."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | The whole concept of adversarial attacks is quite simple: let us slightly change the input to a classifying neural net so that the recognized class will change from correct to some other class (first adversarial attacks were made only on classifiers). The pioneering work @cite_27 formulates the task as follows: | {
"cite_N": [
"@cite_27"
],
"mid": [
"2949103145",
"2604147826",
"2938470389",
"2513314332"
],
"abstract": [
"Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at this http URL.",
"Multiple different approaches of generating adversarial examples have been proposed to attack deep neural networks. These approaches involve either directly computing gradients with respect to the image pixels, or directly solving an optimization on the image pixels. In this work, we present a fundamentally new method for generating adversarial examples that is fast to execute and provides exceptional diversity of output. We efficiently train feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks. We call such a network an Adversarial Transformation Network (ATN). ATNs are trained to generate adversarial examples that minimally modify the classifier's outputs given the original input, while constraining the new classification to match an adversarial target class. We present methods to train ATNs and analyze their effectiveness targeting a variety of MNIST classifiers as well as the latest state-of-the-art ImageNet classifier Inception ResNet v2.",
"Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn \"patches\" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able successfully hide a person from a person detector. An attack that could for instance be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.",
"Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being \"too linear\" (, 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing, linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation."
]
} |
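The attack formulation in the row above (find a small perturbation that flips the predicted class) can be sketched on a toy linear classifier; the weights and input below are illustrative assumptions, not the cited L-BFGS-B setup:

```python
import numpy as np

# Toy linear binary classifier: class 1 if w.x + b > 0, else class 0.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([2.0, 1.0])          # clean input, score = w.x + b = 0.5 > 0

def predict(v):
    return int(w @ v + b > 0)

# For a linear model the smallest L2 perturbation crossing the decision
# boundary points along w; step slightly past the boundary.
score = w @ x + b
delta = -(score + 1e-3) * w / (w @ w)
x_adv = x + delta

print(predict(x), predict(x_adv))     # classes differ
print(np.linalg.norm(delta))          # perturbation stays small
```

For a linear model the closest boundary point is available in closed form; deep networks have no such formula, which is why iterative gradient-based methods are needed.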
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | In @cite_27 the authors propose to use a quasi-Newton L-BFGS-B method to solve the task formulated above. A simpler and more efficient method, the Fast Gradient Sign Method (FGSM), is proposed in @cite_39. This method uses the gradients with respect to the input and constructs an adversarial image with the following formula: @math (or @math in the case of a targeted attack). Here @math is a loss function (e.g. cross-entropy) which depends on the weights of the model @math, the input @math, and the label @math. Note that usually one step is not enough, and the step above must be iterated, each time projecting back onto the initial input space (e.g. @math). This is called projected gradient descent (PGD) @cite_20. | {
"cite_N": [
"@cite_27",
"@cite_20",
"@cite_39"
],
"mid": [
"2157711174",
"2051669046",
"1491622225",
"2964303576"
],
"abstract": [
"We extend the well-known BFGS quasi-Newton method and its memory-limited variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the resulting subBFGS algorithm is globally convergent in objective function value. We apply its memory-limited variant (subLBFGS) to L2-regularized risk minimization with the binary hinge loss. To extend our algorithm to the multiclass and multilabel settings, we develop a new, efficient, exact line search algorithm. We prove its worst-case time complexity bounds, and show that our line search can also be used to extend a recently developed bundle method to the multiclass and multilabel settings. We also apply the direction-finding component of our algorithm to L1-regularized risk minimization with logistic loss. In all these contexts our methods perform comparable to or better than specialized state-of-the-art solvers on a number of publicly available data sets. An open source implementation of our algorithms is freely available.",
"We study how to use the BFGS quasi-Newton matrices to precondition minimization methods for problems where the storage is critical. We give an update formula which generates matrices using information from the last m iterations, where m is any number supplied by the user. The quasi-Newton matrix is updated at every iteration by dropping the oldest information and replacing it by the newest information. It is shown that the matrices generated have some desirable properties. The resulting algorithms are tested numerically and compared with several well-known methods. 1. Introduction. For the problem of minimizing an unconstrained function of n variables, quasi-Newton methods are widely employed (4). They construct a sequence of matrices which in some way approximate the hessian of (or its inverse). These matrices are symmetric; therefore, it is necessary to have n(n + 1)/2 storage locations for each one. For large dimensional problems it will not be possible to retain the matrices in the high speed storage of a computer, and one has to resort to other kinds of algorithms. For example, one could use the methods (Toint (15), Shanno (12)) which preserve the sparsity structure of the hessian, or conjugate gradient methods (CG) which only have to store 3 or 4 vectors. Recently, some CG algorithms have been developed which use a variable amount of storage and which do not require knowledge about the sparsity structure of the problem (2), (7), (8). A disadvantage of these methods is that after a certain number of iterations the quasi-Newton matrix is discarded, and the algorithm is restarted using an initial matrix (usually a diagonal matrix). We describe an algorithm which uses a limited amount of storage and where the quasi-Newton matrix is updated continuously. At every step the oldest information contained in the matrix is discarded and replaced by a new one. In this way we hope to have a more up to date model of our function. We will concentrate on the BFGS method since it is considered to be the most efficient. We believe that similar algorithms cannot be developed for the other members of the Broyden 0-class (1). Let be the function to be minimized, g its gradient and h its hessian. We define",
"We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.",
"In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ( @math ). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case, the @math -calls complexity is @math to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance @math . We also propose a specific algorithm, namely, a stochastic damped limited-memory BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the stochastic variance reduced gradient variance reduction technique into the proposed SdLBFGS method and analyze its @math -calls compl..."
]
} |
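The FGSM step and its PGD iteration described in the row above can be sketched in NumPy on a toy logistic model; the weights, input, and hyperparameters are made-up assumptions, not values from the cited papers:

```python
import numpy as np

# Toy logistic "network": sigmoid(w . x), true label y = 1.
w = np.array([0.8, -1.2, 0.5])
x = np.array([1.0, 0.5, -0.3])
y = 1.0

def loss_grad(v):
    # Gradient of the cross-entropy loss with respect to the input v.
    p = 1.0 / (1.0 + np.exp(-w @ v))
    return (p - y) * w

eps, alpha, steps = 0.25, 0.07, 10

# FGSM: a single signed-gradient step of size eps.
x_fgsm = x + eps * np.sign(loss_grad(x))

# PGD: many small signed steps, each followed by a projection back
# onto the L_inf ball of radius eps around the clean input.
x_pgd = x.copy()
for _ in range(steps):
    x_pgd = x_pgd + alpha * np.sign(loss_grad(x_pgd))
    x_pgd = np.clip(x_pgd, x - eps, x + eps)

print(np.max(np.abs(x_fgsm - x)), np.max(np.abs(x_pgd - x)))
```

Both perturbed inputs stay within the eps-ball; PGD typically finds a stronger attack because it follows the (changing) gradient over several steps instead of committing to one.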
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | It turns out that using momentum in the iterative construction of an adversarial example is a good way to increase the robustness of the adversarial attack @cite_17. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2950906520",
"2774644650",
"2799031185",
"2738841453"
],
"abstract": [
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because of the coupling of the attack ability and the transferability. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. We won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.",
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.",
"In recent years, defending adversarial perturbations to natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field in the conjunction of deep learning and security. In particular, MagNet consisting of an adversary detector and a data reformer is by far one of the strongest defenses in the black-box oblivious attack setting, where the attacker aims to craft transferable adversarial examples from an undefended DNN model to bypass an unknown defense module deployed on the same DNN model. Under this setting, MagNet can successfully defend a variety of attacks in DNNs, including the high-confidence adversarial examples generated by the Carlini and Wagner's attack based on the @math distortion metric. However, in this paper, under the same attack setting we show that adversarial examples crafted based on the @math distortion metric can easily bypass MagNet and mislead the target DNN image classifiers on MNIST and CIFAR-10. We also provide explanations on why the considered approach can yield adversarial examples with superior attack performance and conduct extensive experiments on variants of MagNet to verify its lack of robustness to @math distortion based attacks. Notably, our results substantially weaken the assumption of effective threat models on MagNet that require knowing the deployed defense technique when attacking DNNs (i.e., the gray-box attack setting).",
"Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve higher success rate than those based on the traditional surrogates used to train the models while using a less perceptible adversarial perturbation."
]
} |
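A minimal sketch of the momentum-based iterative attack described in the row above (in the spirit of MI-FGSM from @cite_17), reusing the same toy logistic model; all numeric values are illustrative assumptions:

```python
import numpy as np

# Toy logistic model; weights and input are illustrative assumptions.
w = np.array([0.8, -1.2, 0.5])
x0 = np.array([1.0, 0.5, -0.3])
y = 1.0

def loss_grad(v):
    p = 1.0 / (1.0 + np.exp(-w @ v))
    return (p - y) * w

eps, steps, mu = 0.25, 10, 1.0
alpha = eps / steps

g = np.zeros_like(x0)            # accumulated "velocity" of gradients
x = x0.copy()
for _ in range(steps):
    grad = loss_grad(x)
    # Accumulate L1-normalized gradients with decay mu, then step by sign(g);
    # the momentum term stabilizes the update direction across iterations.
    g = mu * g + grad / np.sum(np.abs(grad))
    x = x + alpha * np.sign(g)
    x = np.clip(x, x0 - eps, x0 + eps)   # stay inside the eps-ball

print(np.max(np.abs(x - x0)))
```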
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | All the aforementioned adversarial attacks restrict the maximum per-pixel perturbation (when the input is an image), i.e. they use the @math norm. Another interesting case is when we do not bound the maximum perturbation but instead strive to attack the fewest possible pixels (the @math norm). One of the first examples of such an attack is the Jacobian-based Saliency Map Attack (JSMA) @cite_6, where saliency maps are built from the pixels most likely to cause misclassification. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2951954400",
"2795995837",
"2964077693",
"2962759300"
],
"abstract": [
"When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images aiming for extracting key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp norm distortion as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results onMNIST, CIFAR-10, and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions)through adversarial saliency map (, 2016b) and class activation map(, 2016).",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%.",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%.",
"Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63 to 84 for Fashion MNIST and from 32 to 70 for CIFAR-10."
]
} |
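An L0-style greedy attack in the spirit of JSMA can be sketched as follows; the toy linear model stands in for the paper's Jacobian-based saliency map, so everything below is an assumption-laden simplification:

```python
import numpy as np

# Toy linear score w.x; for a linear model the gradient of the score
# with respect to the input is just w, so saliency reduces to |w_i|.
w = np.array([0.3, -2.0, 0.1, 1.5, -0.2])
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
budget = 2        # L0 budget: number of coordinates allowed to change
theta = 1.0       # per-coordinate perturbation magnitude

order = np.argsort(-np.abs(w))    # most salient coordinates first
x_adv = x.copy()
for i in order[:budget]:
    x_adv[i] -= theta * np.sign(w[i])   # move against the class score

changed = int(np.sum(x_adv != x))
print(changed, w @ x_adv, w @ x)  # few pixels changed, score decreased
```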
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | Another extreme case of an attack under the @math norm is the one-pixel attack @cite_0. For this specific case the authors use differential evolution, an algorithm from the class of evolutionary methods. It should be mentioned that not only classification networks are prone to adversarial attacks; there are also attacks on detection and segmentation models @cite_11. | {
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2765424254",
"2964006983",
"2805380858",
"2906208681"
],
"abstract": [
"Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 70.97% of the natural images can be perturbed to at least one target class by modifying just one pixel with 97.47% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.",
"Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 68.36% of the natural images in CIFAR-10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. Besides, we also illustrate an important application of DE (or broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness.",
"Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation. We introduce GenAttack, a gradient-free optimization technique that uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches. Against MNIST and CIFAR-10 models, GenAttack required roughly 2,126 and 2,568 times fewer queries respectively, than ZOO, the prior state-of-the-art black-box attack. In order to scale up the attack to large-scale high-dimensional ImageNet models, we perform a series of optimizations that further improve the query efficiency of our attack leading to 237 times fewer queries against the Inception-v3 model than ZOO. Furthermore, we show that GenAttack can successfully attack some state-of-the-art ImageNet defenses, including ensemble adversarial training and non-differentiable or randomized input transformations. Our results suggest that evolutionary algorithms open up a promising area of research into effective black-box attacks.",
"We consider adversarial examples in the black-box decision-based scenario. Here, an attacker has access to the final classification of a model, but not its parameters or softmax outputs. Most attacks for this scenario are based either on transferability, which is unreliable, or random sampling, which is often slow. Focusing on the latter, we propose to improve the efficiency of sampling-based attacks with prior beliefs about the target domain. We identify two such priors, image frequency and surrogate gradients, and discuss how to integrate them into a unified sampling procedure. We then formulate the Biased Boundary Attack, which achieves a drastic speedup over the original Boundary Attack. We demonstrate the effectiveness of our approach against an ImageNet classifier. We also showcase a targeted attack for the Google Cloud Vision API, where we craft convincing examples with just a few hundred queries. Finally, we demonstrate that our approach outperforms the state of the art when facing strong defenses: Our attack scored second place in the targeted attack track of the NeurIPS 2018 Adversarial Vision Challenge."
]
} |
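A one-pixel attack driven by differential evolution can be sketched as below; the 16-pixel "image", the linear score, and the DE/rand/1 scheme are toy assumptions standing in for the cited setup. Each candidate encodes (pixel index, new value) and fitness is the true-class score to be minimized:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # toy model: class score is w . image
x = np.ones(16)                  # toy 16-pixel "image"

def fitness(cand):
    # Apply a candidate (pixel_index, new_value) and score the result;
    # lower score = more adversarial for this toy model.
    idx, val = int(cand[0]) % 16, cand[1]
    x_mod = x.copy()
    x_mod[idx] = val
    return w @ x_mod

# Initial population of 20 candidates.
pop = np.column_stack([rng.integers(0, 16, 20).astype(float),
                       rng.uniform(-3, 3, 20)])
F = 0.5
for _ in range(30):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + F * (b - c)               # DE/rand/1 mutation
        if fitness(trial) < fitness(pop[i]):  # greedy selection
            pop[i] = trial

best = min(pop, key=fitness)
print(fitness(best), w @ x)
```

Note that DE only queries the model's score, never its gradients, which is why it fits black-box settings like the one-pixel attack.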
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image, which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | Another interesting property of adversarial attacks is that they are transferable between different neural networks @cite_27: an attack prepared with one model can successfully confuse another model with a different architecture and training dataset. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2570685808",
"2787970001",
"2950864148",
"2774018344"
],
"abstract": [
"An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.",
"State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can : adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack deployed systems without any query, which severely hinder the application of deep learning, especially in the areas where security is crucial. In this work, we systematically study how two classes of factors that might influence the transferability of adversarial examples. One is about model-specific factors, including network architecture, model capacity and test accuracy. The other is the local smoothness of loss function for constructing adversarial examples. Based on these understanding, a simple but effective strategy is proposed to enhance transferability. We call it variance-reduced attack, since it utilizes the variance-reduced gradient to generate adversarial example. The effectiveness is confirmed by a variety of experiments on both CIFAR-10 and ImageNet datasets.",
"An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack this http URL, which is a black-box image classification system.",
"Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security sensitive systems. We propose high-level representation guided denoiser (HGD) as a defense for image classification. Standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and denoised image. Compared with ensemble adversarial training which is the state-of-the-art defending method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In NIPS competition on defense against adversarial attacks, our HGD solution won the first place and outperformed other models by a large margin.1"
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Usually, adversarial attacks that are constructed using the specific architecture and even the weights of the attacked model are called white-box attacks. If the attacker has no access to the model weights, the attack is called a black-box attack @cite_36 . | {
"cite_N": [
"@cite_36"
],
"mid": [
"2902364018",
"2783555701",
"2903785932",
"2805380858"
],
"abstract": [
"Depending on how much information an adversary can access to, adversarial attacks can be classified as white-box attack and black-box attack. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an @math convergence rate. The empirical results of attacking Inception V3 model and ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specific, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.",
"Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed the first with 92.76 accuracy on a public MNIST black-box attack challenge.",
"Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9 accuracy, our method achieves 55.7 ; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6 accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6 classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10 . Code is available at this https URL.",
"Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation. We introduce GenAttack, a gradient-free optimization technique that uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches. Against MNIST and CIFAR-10 models, GenAttack required roughly 2,126 and 2,568 times fewer queries respectively, than ZOO, the prior state-of-the-art black-box attack. In order to scale up the attack to large-scale high-dimensional ImageNet models, we perform a series of optimizations that further improve the query efficiency of our attack leading to 237 times fewer queries against the Inception-v3 model than ZOO. Furthermore, we show that GenAttack can successfully attack some state-of-the-art ImageNet defenses, including ensemble adversarial training and non-differentiable or randomized input transformations. Our results suggest that evolutionary algorithms open up a promising area of research into effective black-box attacks."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Usually, attacks are constructed for a specific input (e.g., a photo of some object); this is called an input-aware attack. Adversarial attacks are called universal when a single adversarial perturbation can be successfully applied to any image @cite_23 . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2903272733",
"2951954400",
"2738229973",
"2797328537"
],
"abstract": [
"Standard adversarial attacks change the predicted class label of an image by adding specially tailored small perturbations to its pixels. In contrast, a universal perturbation is an update that can be added to any image in a broad class of images, while still changing the predicted class label. We study the efficient generation of universal adversarial perturbations, and also efficient methods for hardening networks to these attacks. We propose a simple optimization-based universal attack that reduces the top-1 accuracy of various network architectures on ImageNet to less than 20 , while learning the universal perturbation 13X faster than the standard method. To defend against these perturbations, we propose universal adversarial training, which models the problem of robust classifier generation as a two-player min-max game. This method is much faster and more scalable than conventional adversarial training with a strong adversary (PGD), and yet yields models that are extremely resistant to universal attacks, and comparably resistant to standard (per-instance) black box attacks. We also discover a rather fascinating side-effect of universal adversarial training: attacks built for universally robust models transfer better to other (black box) models than those built with conventional adversarial training.",
"When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbations by sliding a mask through images aiming for extracting key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp norm distortion as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results onMNIST, CIFAR-10, and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions)through adversarial saliency map (, 2016b) and class activation map(, 2016).",
"State-of-the-art object recognition Convolutional Neural Networks (CNNs) are shown to be fooled by image agnostic perturbations, called universal adversarial perturbations. It is also observed that these perturbations generalize across multiple networks trained on the same target data. However, these algorithms require training data on which the CNNs were trained and compute adversarial perturbations via complex optimization. The fooling performance of these approaches is directly proportional to the amount of available training data. This makes them unsuitable for practical attacks since its unreasonable for an attacker to have access to the training data. In this paper, for the first time, we propose a novel data independent approach to generate image agnostic perturbations for a range of CNNs trained for object recognition. We further show that these perturbations are transferable across multiple network architectures trained either on same or different data. In the absence of data, our method generates universal adversarial perturbations efficiently via fooling the features learned at multiple layers thereby causing CNNs to misclassify. Experiments demonstrate impressive fooling rates and surprising transferability for the proposed universal perturbations generated without any training data.",
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Although adversarial attacks are quite successful in the digital domain (where we can change the image at the pixel level before feeding it to a classifier), in the physical (i.e., real) world the effectiveness of adversarial attacks is still questionable. Kurakin et al. demonstrate the potential for further research in this domain @cite_5 . They discovered that if an adversarial image is printed on paper and then captured by a phone camera, it can still successfully fool a classification network. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2797328537",
"2903785932",
"2906208681",
"2890883923"
],
"abstract": [
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9 accuracy, our method achieves 55.7 ; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6 accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6 classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10 . Code is available at this https URL.",
"We consider adversarial examples in the black-box decision-based scenario. Here, an attacker has access to the final classification of a model, but not its parameters or softmax outputs. Most attacks for this scenario are based either on transferability, which is unreliable, or random sampling, which is often slow. Focusing on the latter, we propose to improve the efficiency of sampling-based attacks with prior beliefs about the target domain. We identify two such priors, image frequency and surrogate gradients, and discuss how to integrate them into a unified sampling procedure. We then formulate the Biased Boundary Attack, which achieves a drastic speedup over the original Boundary Attack. We demonstrate the effectiveness of our approach against an ImageNet classifier. We also showcase a targeted attack for the Google Cloud Vision API, where we craft convincing examples with just a few hundred queries. Finally, we demonstrate that our approach outperforms the state of the art when facing strong defenses: Our attack scored second place in the targeted attack track of the NeurIPS 2018 Adversarial Vision Challenge.",
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems. Code related to this paper is available at: https: github.com shangtse robust-physical-attack."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | It turns out that the most successful paradigm for constructing real-world adversarial examples is the Expectation Over Transformation (EOT) algorithm @cite_28 . This approach takes into account that in the real world the object usually undergoes a set of transformations (scaling, jittering, brightness and contrast changes, etc.). The task is to find an adversarial example that is robust under this set of transformations @math and can be formulated as follows: | {
"cite_N": [
"@cite_28"
],
"mid": [
"2906586812",
"2736899637",
"2795995837",
"2964077693"
],
"abstract": [
"In this paper, we proposed the first practical adversarial attacks against object detectors in realistic situations: the adversarial examples are placed in different angles and distances, especially in the long distance (over 20m) and wide angles 120 degree. To improve the robustness of adversarial examples, we proposed the nested adversarial examples and introduced the image transformation techniques. Transformation methods aim to simulate the variance factors such as distances, angles, illuminations, etc., in the physical world. Two kinds of attacks were implemented on YOLO V3, a state-of-the-art real-time object detector: hiding attack that fools the detector unable to recognize the object, and appearing attack that fools the detector to recognize the non-existent object. The adversarial examples are evaluated in three environments: indoor lab, outdoor environment, and the real road, and demonstrated to achieve the success rate up to 92.4 based on the distance range from 1m to 25m. In particular, the real road testing of hiding attack on a straight road and a crossing road produced the success rate of 75 and 64 respectively, and the appearing attack obtained the success rates of 63 and 81 respectively, which we believe, should catch the attention of the autonomous driving community.",
"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7 .",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7 ."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel easily reproducible technique to attack the best public Face ID system ArcFace in different shooting conditions. To create an attack, we print the rectangular paper sticker on a common color printer and put it on the hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates sticker location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2 and is transferable to other Face ID models. | Another work using @math -limited attacks proposes to attack facial recognition neural nets with adversarial eyeglasses @cite_21 . The authors propose a method to print an adversarial perturbation on the eyeglasses frame with the help of a Total Variation (TV) loss and a non-printability score (NPS). The TV loss is designed to make the image smoother, which makes the attack more stable under the different image interpolation methods used on devices and more inconspicuous to humans. The NPS is designed to deal with the difference between digital RGB values and the ability of real printers to reproduce these values. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2797328537",
"2479644247",
"2884519271",
"2964006983"
],
"abstract": [
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80 corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering thesign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85 of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5 and 63.5 of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9 of the video frames in a controlled lab environment, and 40.2 of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, where in innocuous physical stickers fool a model into detecting nonexistent objects.",
"Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that, we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 68.36% of the natural images in the CIFAR-10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel, with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low-dimension attacks. Besides, we also illustrate an important application of DE (or, broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | In general, most of the subsequent works on real-world attacks use the concepts of @math -limited perturbation, EOT, TV loss, and NPS. Let us briefly list them. In @cite_40, the authors construct a physical attack on a traffic sign recognition model using EOT and NPS, producing either adversarial posters (attacking the whole traffic sign area) or adversarial stickers (black-and-white stickers on the real traffic sign). The works @cite_1 @cite_15 use some form of EOT to attack traffic sign recognition models as well. | {
"cite_N": [
"@cite_40",
"@cite_1",
"@cite_15"
],
"mid": [
"2884519271",
"2759471388",
"2798302089",
"2783882201"
],
"abstract": [
"Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering the sign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85% of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5% and 63.5% of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9% of the video frames in a controlled lab environment, and 40.2% of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, wherein innocuous physical stickers fool a model into detecting nonexistent objects.",
"Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.",
"Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.",
"We propose a new real-world attack against the computer vision based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs since adversaries are no longer restricted to just modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests simulating the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples for different distances, lighting conditions, and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95% in the physical as well as virtual settings."
]
} |
1908.08705 | 2969664989 | In this paper we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions. To create an attack, we print a rectangular paper sticker on a common color printer and put it on a hat. The adversarial sticker is prepared with a novel algorithm for off-plane transformations of the image which imitates the sticker's location on the hat. Such an approach confuses the state-of-the-art public Face ID model LResNet100E-IR, ArcFace@ms1m-refine-v2, and is transferable to other Face ID models. | A number of works are devoted to adversarial attacks on traffic sign detectors in the real world. One of the first, @cite_22, proposes an adversarial attack on a Faster R-CNN @cite_35 stop sign detector using a form of EOT (a handcrafted estimation of a viewing map). Several works used EOT, NPS, and TV loss to attack Faster R-CNN- and YOLOv2 @cite_42 -based traffic sign recognition models @cite_19 @cite_45 @cite_8. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_8",
"@cite_42",
"@cite_19",
"@cite_45"
],
"mid": [
"2884519271",
"2906586812",
"2741933435",
"2797328537"
],
"abstract": [
"Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering the sign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85% of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5% and 63.5% of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9% of the video frames in a controlled lab environment, and 40.2% of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, wherein innocuous physical stickers fool a model into detecting nonexistent objects.",
"In this paper, we proposed the first practical adversarial attacks against object detectors in realistic situations: the adversarial examples are placed at different angles and distances, especially at long distances (over 20m) and wide angles (120 degrees). To improve the robustness of adversarial examples, we proposed the nested adversarial examples and introduced image transformation techniques. Transformation methods aim to simulate variance factors such as distances, angles, illuminations, etc., in the physical world. Two kinds of attacks were implemented on YOLO V3, a state-of-the-art real-time object detector: a hiding attack that fools the detector so that it is unable to recognize the object, and an appearing attack that fools the detector into recognizing a non-existent object. The adversarial examples are evaluated in three environments: an indoor lab, an outdoor environment, and the real road, and are demonstrated to achieve a success rate of up to 92.4% over the distance range from 1m to 25m. In particular, real-road testing of the hiding attack on a straight road and a crossing road produced success rates of 75% and 64% respectively, and the appearing attack obtained success rates of 63% and 81% respectively, which, we believe, should catch the attention of the autonomous driving community.",
"Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world--they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm--Robust Physical Perturbations (RP2)--that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.",
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems."
]
} |