aid: string (9 to 15 chars)
mid: string (7 to 10 chars)
abstract: string (78 to 2.56k chars)
related_work: string (92 to 1.77k chars)
ref_abstract: dict
1907.10599
2963790895
Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the *Conjugate Kernel*, CK, (also called the *Neural Network-Gaussian Process Kernel*), and the *Neural Tangent Kernel*, NTK. Roughly, the CK and the NTK tell us respectively "what a network looks like at initialization" and "what a network looks like during and after training." Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at this http URL.
@cite_2 presents a common framework unifying the GP, NTK, signal propagation, and random matrix perspectives, as well as extending them to new scenarios, such as recurrent neural networks. It proves the existence of, and allows the computation of, a large number of infinite-width limits (including ones relevant to the above perspectives) by expressing the quantity of interest as the output of a computation graph and then manipulating the graph mechanically.
{ "cite_N": [ "@cite_2" ], "mid": [ "2149273154", "2950743785", "2963187627", "1757513105" ], "abstract": [ "Gaussian process (GP) models are very popular for machine learning and regression and they are widely used to account for spatial or temporal relationships between multivariate random variables. In this paper, we propose a general formulation of underdetermined source separation as a problem involving GP regression. The advantage of the proposed unified view is first to describe the different underdetermined source separation problems as particular cases of a more general framework. Second, it provides a flexible means to include a variety of prior information concerning the sources such as smoothness, local stationarity or periodicity through the use of adequate covariance functions. Third, given the model, it provides an optimal solution in the minimum mean squared error (MMSE) sense to the source separation problem. In order to make the GP models tractable for very large signals, we introduce framing as a GP approximation and we show that computations for regularly sampled and locally stationary GPs can be done very efficiently in the frequency domain. These findings establish a deep connection between GP and nonnegative tensor factorizations (NTF) with the Itakura-Saito distance and lead to effective methods to learn GP hyperparameters for very large and regularly sampled signals.", "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function @math (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). 
This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function @math follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.", "Abstract: Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. 
We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.", "This paper is motivated by two applications, namely i) generalizations of cuckoo hashing, a computationally simple approach to assigning keys to objects, and ii) load balancing in content distribution networks, where one is interested in determining the impact of content replication on performance. These two problems admit a common abstraction: in both scenarios, performance is characterized by the maximum weight of a generalization of a matching in a bipartite graph, featuring node and edge capacities. Our main result is a law of large numbers characterizing the asymptotic maximum weight matching in the limit of large bipartite random graphs, when the graphs admit a local weak limit that is a tree. This result specializes to the two application scenarios, yielding new results in both contexts. In contrast with previous results, the key novelty is the ability to handle edge capacities with arbitrary integer values. An analysis of belief propagation algorithms (BP) with multivariate belief vectors underlies the proof. In particular, we show convergence of the corresponding BP by exploiting monotonicity of the belief vectors with respect to the so-called upshifted likelihood ratio stochastic order. This auxiliary result can be of independent interest, providing a new set of structural conditions which ensure convergence of BP." ] }
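To make the spectral perspective above concrete, here is a minimal sketch (an illustration, not the paper's released code) that computes the Conjugate Kernel of an infinitely wide one-hidden-layer ReLU network via the order-1 arc-cosine kernel closed form, then inspects its eigenvalues:

```python
import numpy as np

def relu_ck(X):
    """Conjugate (NNGP) kernel of a one-hidden-layer ReLU network.

    Closed form (arc-cosine kernel of order 1):
        K(x, x') = ||x|| ||x'|| (sin t + (pi - t) cos t) / (2 pi),
    where t is the angle between x and x'.
    """
    norms = np.linalg.norm(X, axis=1)
    cos_t = (X @ X.T) / np.outer(norms, norms)
    t = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return np.outer(norms, norms) * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))   # 50 inputs in 10 dimensions
K = relu_ck(X)
eigvals = np.linalg.eigvalsh(K)     # the CK spectrum the paper studies
```

The decay of `eigvals` is exactly the kind of spectral information the abstract links to simplicity bias and trainability.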
1907.10594
2964263576
The health effects of air pollution have been subject to intense study in recent decades. Exposure to pollutants such as airborne particulate matter and ozone has been associated with increases in morbidity and mortality, especially with regards to respiratory and cardiovascular diseases. Unfortunately, individuals do not have readily accessible methods by which to track their exposure to pollution. This paper proposes how pollution parameters like CO, NO2, O3, PM2.5, PM10 and SO2 can be monitored for respiratory and cardiovascular personalized health during outdoor exercise events. Using location tracked activities, we synchronize them to public data sets of pollution sensors. For improved accuracy in estimation, we use heart rate data to understand breathing volume mapped with the local air quality sensors via constant GPS tracking.
Breathing rate and tidal volume are estimated from heart rate during exercise. The breathing rate and tidal volume vary in response to metabolic demand and increase in physical activity @cite_5 . First, the tidal volume of the individual is calculated: the lung volume representing the normal volume of air displaced between inhalation and exhalation. Tidal volume is calculated from the ideal body weight, which requires the height, sex and age of the individual @cite_1 . An increase in ventilation can be achieved by an increase in both the depth and frequency of breathing. Thus, with the respiratory frequency increasing and the tidal volume at its peak, the total intake of the pollutants is calculated @cite_4 .
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_4" ], "mid": [ "2597070792", "2069615521", "2531425913", "2767492110" ], "abstract": [ "Abstract Traditional approaches to mechanical ventilation use tidal volumes of 10 to 15 ml per kilogram of body weight and may cause stretch-induced lung injury in patients with acute lung injury and the acute respiratory distress syndrome. We therefore conducted a trial to determine whether ventilation with lower tidal volumes would improve the clinical outcomes in these patients. Patients with acute lung injury and the acute respiratory distress syndrome were enrolled in a multicenter, randomized trial. The trial compared traditional ventilation treatment, which involved an initial tidal volume of 12 ml per kilogram of predicted body weight and an airway pressure measured after a 0.5-second pause at the end of inspiration (plateau pressure) of 50 cm of water or less, with ventilation with a lower tidal volume, which involved an initial tidal volume of 6 ml per kilogram of predicted body weight and a plateau pressure of 30 cm of water or less. The primary outcomes were death before a patient was discharged home and was breathing without assistance and the number of days without ventilator use from day 1 to day 28. The trial was stopped after the enrollment of 861 patients because mortality was lower in the group treated with lower tidal volumes than in the group treated with traditional tidal volumes (31.0 percent vs. 39.8 percent, P=0.007), and the number of days without ventilator use during the first 28 days after randomization was greater in this group (mean [+ -SD], 12+ -11 vs. 10+ -11; P=0.007). The mean tidal volumes on days 1 to 3 were 6.2+ -0.8 and 11.8+ -0.8 ml per kilogram of predicted body weight (P", "Background: The power of breathing (PoB) is used to estimate the mechanical workload of the respiratory system. 
Aim of this study was to investigate the effect of different tidal volume-respiratory rate combinations on the PoB when the elastic load is constant. In order to assure strict control of the experimental conditions, the PoB was calculated on an airway pressure-volume curve in mechanically ventilated patients. Methods: Ten patients received three different tidal volume-respiratory rate combinations while minute ventilation was constant. Respiratory mechanics, PoB and its elastic and resistive components were calculated. Alternative methods to estimate the elastic workload were assessed: elastic work of breathing per litre per minute, elastic workload index (the square root of elastic work of breathing multiplied by respiratory rate) and elastic double product of the respiratory system (the elastic pressure multiplied by respiratory rate). Results: Despite constant elastance and minute ventilation, the elastic PoB showed an increment greater than 200% from the lower to the greater tidal volume, accounting for approximately 80% of the whole PoB increment. On the contrary, elastic work of breathing per litre per minute, elastic workload index and elastic double product did not change. Conclusion: Changes in breathing pattern markedly affect the PoB despite constant mechanical load. Other indexes could assess the elastic workload without tidal volume dependence. Power of breathing use should be avoided to compare different mechanical loads or efficiencies of the respiratory muscles when tidal volume is variable.", "The aim was to determine whether the midpoint between ventilatory thresholds (MPVT) corresponds to maximal lactate steady state (MLSS). Twelve amateur cyclists (21.0 ± 2.6 years old; 72.2 ± 9.0 kg; 179.8 ± 7.5 cm) performed an incremental test (25 W·min-1) until exhaustion and several constant load tests of 30 minutes to determine MLSS, on different occasions. 
Using MLSS determination as the reference method, the agreement with five other parameters (MPVT; first and second ventilatory thresholds: VT1 and VT2; respiratory exchange ratio equal to 1: RER = 1.00; and Maximum) was analysed by the Bland-Altman method. The difference between workload at MLSS and VT1, VT2, RER=1.00 and Maximum was 31.1 ± 20.0, -86.0 ± 18.3, -63.6 ± 26.3 and -192.3 ± 48.6 W, respectively. MLSS was underestimated from VT1 and overestimated from VT2, RER = 1.00 and Maximum. The smallest difference (-27.5 ± 15.1 W) between workload at MLSS and MPVT was in better agreement than other analysed parameters of intensity in cycling. The main finding is that MPVT approached the workload at MLSS in amateur cyclists, and can be used to estimate maximal steady state.", "Abstract With the introduction of the large number of fitness devices on the market, there are numerous possibilities for their use in managing chronic diseases in older adults. For example, monitoring people with dementia using commercially available devices that measure heart rate, breathing rate, lung volume, step count, and activity level could be used to predict episodic behavioral and psychological symptoms before they become distressing or disruptive. However, since these devices are designed primarily for fitness assessment, validation of the sensors in a controlled environment with the target cohort population is needed. In this study, we present validation results using a commercial fitness tracker, the Hexoskin sensor vest, with thirty-one participants aged 65 and older. Estimated physiological measures investigated in this study are heart rate, breathing rate, lung volume, step count, and activity level of the participants. Findings indicate that while the processed step count, heart rate, and breathing rate show strong correlations to the clinically accepted gold standard values, lung volume and activity level do not. 
This indicates the need to proceed cautiously when making clinical decisions using such sensors, and suggests that users should focus on the three strongly correlated parameters for further analysis, at least in the older population. The use of physiological measurement devices such as the Hexoskin may eventually become a non-intrusive way to continuously assess physiological measures in older adults with dementia who are at risk for distressing behavioral and psychological symptoms." ] }
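The estimation pipeline described in this row (ideal body weight → tidal volume → minute ventilation → pollutant intake) can be sketched as below. The Devine ideal-body-weight formula and the 7 ml per kg tidal-volume factor are assumed conventions chosen for illustration; the paper's exact constants may differ, and age is omitted for brevity:

```python
def ideal_body_weight_kg(height_cm, sex):
    """Devine formula (an assumed convention): base + 2.3 kg per inch over 5 ft."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    base = 50.0 if sex == "male" else 45.5
    return base + 2.3 * inches_over_5ft

def tidal_volume_ml(height_cm, sex, ml_per_kg=7.0):
    """Resting tidal volume, assumed here to be ~7 ml per kg of ideal body weight."""
    return ml_per_kg * ideal_body_weight_kg(height_cm, sex)

def pollutant_intake_ug(conc_ug_m3, breaths_per_min, tidal_ml, minutes):
    """Inhaled dose = concentration x minute ventilation x duration."""
    minute_ventilation_m3 = breaths_per_min * tidal_ml * 1e-6  # ml -> m^3
    return conc_ug_m3 * minute_ventilation_m3 * minutes

vt = tidal_volume_ml(180, "male")             # roughly 525 ml
dose = pollutant_intake_ug(35.0, 30, vt, 60)  # PM2.5 dose over a 1 h run
```

During exercise the breathing rate (here 30 breaths/min) would itself be estimated from heart rate, as the row describes.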
1907.10594
2964263576
The health effects of air pollution have been subject to intense study in recent decades. Exposure to pollutants such as airborne particulate matter and ozone has been associated with increases in morbidity and mortality, especially with regards to respiratory and cardiovascular diseases. Unfortunately, individuals do not have readily accessible methods by which to track their exposure to pollution. This paper proposes how pollution parameters like CO, NO2, O3, PM2.5, PM10 and SO2 can be monitored for respiratory and cardiovascular personalized health during outdoor exercise events. Using location tracked activities, we synchronize them to public data sets of pollution sensors. For improved accuracy in estimation, we use heart rate data to understand breathing volume mapped with the local air quality sensors via constant GPS tracking.
The cigarette equivalent is derived from both the pollution intake and the tidal volume: the contribution of each pollutant, weighted by its share of the total intake, is summed to give the result. The cigarette equivalent is used to avoid confusion, converting real-time air quality data from raw pollution levels into the equivalent number of cigarettes smoked over time, a unit the user can understand at a glance. Inhaling cigarette smoke has been shown to produce acute changes in the lung, including alterations in resistance to airflow, cough, and irritation of the airway, so even early-stage smoking can affect respiratory function; converting air quality to this standard therefore gives better insight into how the pollution affects an individual @cite_11 @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_11" ], "mid": [ "2343449564", "2556269267", "2135548528", "2597070792" ], "abstract": [ "Abstract Background Although the health effects of long term exposure to air pollution are well established, it is difficult to effectively communicate the health risks of this (largely invisible) risk factor to the public and policy makers. The purpose of this study is to develop a method that expresses the health effects of air pollution in an equivalent number of daily passively smoked cigarettes. Methods Defined changes in PM2.5, nitrogen dioxide (NO 2 ) and Black Carbon (BC) concentration were expressed into number of passively smoked cigarettes, based on equivalent health risks for four outcome measures: Low Birth Weight ( 1 ), cardiovascular mortality and lung cancer. To describe the strength of the relationship with ETS and air pollutants, we summarized the epidemiological literature using published or new meta-analyses. Results Realistic increments of 10 µg m 3 in PM2.5 and NO 2 concentration and a 1 µg m 3 increment in BC concentration correspond to on average (standard error in parentheses) 5.5 (1.6), 2.5 (0.6) and 4.0 (1.2) passively smoked cigarettes per day across the four health endpoints, respectively. The uncertainty reflects differences in equivalence between the health endpoints and uncertainty in the concentration response functions. The health risk of living along a major freeway in Amsterdam is, compared to a counterfactual situation with ‘clean’ air, equivalent to 10 daily passively smoked cigarettes.. Conclusions We developed a method that expresses the health risks of air pollution and the health benefits of better air quality in a simple, appealing manner. The method can be used both at the national regional and the local level. Evaluation of the usefulness of the method as a communication tool is needed.", "Smoking is known to be one of the main causes for premature deaths. 
A reliable smoking detection method can enable applications for an insight into a user's smoking behaviour and for use in smoking cessation programs. However, it is difficult to accurately detect smoking because it can be performed in various postures or in combination with other activities, it is less-repetitive, and it may be confused with other similar activities, such as drinking and eating. In this paper, we propose to use a two-layer hierarchical smoking detection algorithm (HLSDA) that uses a classifier at the first layer, followed by a lazy context-rule-based correction method that utilizes neighbouring segments to improve the detection. We evaluated our algorithm on a dataset of 45 hours collected over a three month period where 11 participants performed 17 hours (230 cigarettes) of smoking while sitting, standing, walking, and in a group conversation. The rest of 28 hours consists of other similar activities, such as eating, and drinking. We show that our algorithm improves recall as well as precision for smoking compared to a single layer classification approach. For smoking activity, we achieve an F-measure of 90-97% in person-dependent evaluations and 83-94% in person-independent evaluations. In most cases, our algorithm corrects up to 50% of the misclassified smoking segments. Our algorithm also improves the detection of eating and drinking in a similar way. We make our dataset and data logger publicly available for the reproducibility of our work.", "Combustion emissions adversely impact air quality and human health. A multiscale air quality model is applied to assess the health impacts of major emissions sectors in United States. Emissions are classified according to six different sources: electric power generation, industry, commercial and residential sources, road transportation, marine transportation and rail transportation. 
Epidemiological evidence is used to relate long-term population exposure to sector-induced changes in the concentrations of PM2.5 and ozone to incidences of premature death. Total combustion emissions in the U.S. account for about 200,000 (90% CI: 90,000-362,000) premature deaths per year in the U.S. due to changes in PM2.5 concentrations, and about 10,000 (90% CI: -1,000 to 21,000) deaths due to changes in ozone concentrations. The largest contributors for both pollutant-related mortalities are road transportation, causing ~53,000 (90% CI: 24,000-95,000) PM2.5-related deaths and ~5,000 (90% CI: -900 to 11,000) ozone-related early deaths per year, and power generation, causing ~52,000 (90% CI: 23,000-94,000) PM2.5-related and ~2,000 (90% CI: -300 to 4,000) ozone-related premature mortalities per year. Industrial emissions contribute to ~41,000 (90% CI: 18,000-74,000) early deaths from PM2.5 and ~2,000 (90% CI: 0-4,000) early deaths from ozone. The results are indicative of the extent to which policy measures could be undertaken in order to mitigate the impact of specific emissions from different sectors, in particular black carbon emissions from road transportation and sulfur dioxide emissions from power generation.", "Abstract Traditional approaches to mechanical ventilation use tidal volumes of 10 to 15 ml per kilogram of body weight and may cause stretch-induced lung injury in patients with acute lung injury and the acute respiratory distress syndrome. We therefore conducted a trial to determine whether ventilation with lower tidal volumes would improve the clinical outcomes in these patients. Patients with acute lung injury and the acute respiratory distress syndrome were enrolled in a multicenter, randomized trial. 
The trial compared traditional ventilation treatment, which involved an initial tidal volume of 12 ml per kilogram of predicted body weight and an airway pressure measured after a 0.5-second pause at the end of inspiration (plateau pressure) of 50 cm of water or less, with ventilation with a lower tidal volume, which involved an initial tidal volume of 6 ml per kilogram of predicted body weight and a plateau pressure of 30 cm of water or less. The primary outcomes were death before a patient was discharged home and was breathing without assistance and the number of days without ventilator use from day 1 to day 28. The trial was stopped after the enrollment of 861 patients because mortality was lower in the group treated with lower tidal volumes than in the group treated with traditional tidal volumes (31.0 percent vs. 39.8 percent, P=0.007), and the number of days without ventilator use during the first 28 days after randomization was greater in this group (mean [±SD], 12±11 vs. 10±11; P=0.007). The mean tidal volumes on days 1 to 3 were 6.2±0.8 and 11.8±0.8 ml per kilogram of predicted body weight (P" ] }
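A minimal sketch of the cigarette-equivalent conversion described in this row. The figure of 5.5 passively smoked cigarettes per day per 10 µg/m3 of PM2.5 comes from the cited equivalence study; the linear scaling over exposure duration and the single-pollutant focus are simplifying assumptions for illustration:

```python
# Passive-smoking equivalence per 10 ug/m^3 of PM2.5, from the cited study.
CIGS_PER_10UG_PM25_PER_DAY = 5.5

def cigarette_equivalent(pm25_ug_m3, hours):
    """Convert a PM2.5 exposure into passively smoked cigarettes, scaled linearly in time."""
    cigs_per_day = (pm25_ug_m3 / 10.0) * CIGS_PER_10UG_PM25_PER_DAY
    return cigs_per_day * hours / 24.0

# One hour of outdoor exercise at 40 ug/m^3 of PM2.5:
cigs = cigarette_equivalent(40.0, 1.0)
```

A full implementation would compute one such term per pollutant and sum them, as the row describes.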
1907.10416
2963631953
This paper proposes a method to guide tensor factorization using class labels. Furthermore, it shows the advantages of using the proposed method in identifying nodes that play a special role in multi-relational networks, e.g. spammers. Most complex systems involve multiple types of relationships and interactions among entities. Combining information from different relationships may be crucial for various prediction tasks. Instead of creating distinct prediction models for each type of relationship, in this paper we present a tensor factorization approach based on RESCAL, which collectively exploits all existing relations. We extend RESCAL to produce a semi-supervised factorization method that combines a classification error term with the standard factor optimization process. The coupled optimization approach models the tensorial data, assimilating observed information from all the relations while also taking into account classification performance. Our evaluation on real-world social network data shows that incorporating supervision, when available, leads to models that are more accurate.
A recent approach to tensor factorization is RESCAL @cite_16 , which achieves high predictive performance in the task of link prediction. RESCAL, which we will describe in more detail in Section , uses a unique latent representation for entities.
{ "cite_N": [ "@cite_16" ], "mid": [ "2158781217", "2962850650", "2147512299", "2520495422" ], "abstract": [ "Tensor factorization has become a popular method for learning from multi-relational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. To identify conditions under which factorization is an efficient approach for learning from relational data, we derive upper and lower bounds on the rank required to recover adjacency tensors. Based on our findings, we propose a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and present a scalable algorithm for computing the factorization. We show experimentally both that the proposed additive model does improve the predictive performance over pure latent variable methods and that it also reduces the required rank — and therefore runtime and memory complexity — significantly.", "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links between the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement (which we call SimplE) of CP to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. 
The embeddings learned through SimplE are interpretable, and certain types of background knowledge in terms of logical rules can be incorporated into these embeddings through weight tying. We prove SimplE is fully-expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques.", "CANDECOMP PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified, however, the determination of tensor rank remains a challenging problem especially for CP rank . In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth of CP rank and prevent the overfitting problem, even when a large amount of entries are missing. 
Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.", "Given a high-order large-scale tensor, how can we decompose it into latent factors? Can we process it on commodity computers with limited memory? These questions are closely related to recommender systems, which have modeled rating data not as a matrix but as a tensor to utilize contextual information such as time and location. This increase in the order requires tensor factorization methods scalable with both the order and size of a tensor. In this paper, we propose two distributed tensor factorization methods, CDTF and SALS . Both methods are scalable with all aspects of data and show a trade-off between convergence speed and memory requirements. CDTF , based on coordinate descent, updates one parameter at a time, while SALS generalizes on the number of parameters updated at a time. In our experiments, only our methods factorized a five-order tensor with 1 billion observable entries, 10 M mode length, and 1 K rank, while all other state-of-the-art methods failed. Moreover, our methods required several orders of magnitude less memory than their competitors. We implemented our methods on MapReduce with two widely-applicable optimization techniques: local disk caching and greedy row assignment. They speeded up our methods up to 98.2 @math and also the competitors up to 5.9 @math ." ] }
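RESCAL factorizes each relation slice as X_k ≈ A R_k Aᵀ, with one latent matrix A shared across all relations. A minimal alternating-least-squares sketch in the spirit of the RESCAL updates (a toy illustration, not the authors' implementation and without the paper's classification term):

```python
import numpy as np

def rescal_als(X, rank, iters=50, seed=0):
    """Alternating least squares for X_k ~ A @ R_k @ A.T (shared A, one R_k per relation)."""
    rng = np.random.default_rng(seed)
    n = X[0].shape[0]
    A = rng.standard_normal((n, rank))
    for _ in range(iters):
        Ap = np.linalg.pinv(A)
        R = [Ap @ Xk @ Ap.T for Xk in X]   # exact least-squares update of each core R_k
        AtA = A.T @ A
        num = sum(Xk @ A @ Rk.T + Xk.T @ A @ Rk for Xk, Rk in zip(X, R))
        den = sum(Rk @ AtA @ Rk.T + Rk.T @ AtA @ Rk for Rk in R)
        A = np.linalg.solve(den, num.T).T  # RESCAL-style update of the shared A
    return A, R

# Toy data: two relation slices generated from one shared rank-2 embedding.
rng = np.random.default_rng(1)
A_true = rng.standard_normal((8, 2))
X = [A_true @ rng.standard_normal((2, 2)) @ A_true.T for _ in range(2)]
A, R = rescal_als(X, rank=2)
err = sum(np.linalg.norm(A @ Rk @ A.T - Xk) for Rk, Xk in zip(R, X))
```

The shared A is what lets the model exploit all relations collectively, which is the property the semi-supervised extension builds on.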
1907.10416
2963631953
This paper proposes a method to guide tensor factorization using class labels. Furthermore, it shows the advantages of using the proposed method in identifying nodes that play a special role in multi-relational networks, e.g. spammers. Most complex systems involve multiple types of relationships and interactions among entities. Combining information from different relationships may be crucial for various prediction tasks. Instead of creating distinct prediction models for each type of relationship, in this paper we present a tensor factorization approach based on RESCAL, which collectively exploits all existing relations. We extend RESCAL to produce a semi-supervised factorization method that combines a classification error term with the standard factor optimization process. The coupled optimization approach models the tensorial data, assimilating observed information from all the relations while also taking into account classification performance. Our evaluation on real-world social network data shows that incorporating supervision, when available, leads to models that are more accurate.
As multi-relational data can be efficiently represented by tensors, TripleRank @cite_8 employs tensor factorization in order to rank entities in the context of linked data. TripleRank applies a common tensor factorization, CANDECOMP/PARAFAC @cite_14 , to obtain two factor matrices which correspond to hub and authority scores. Another approach @cite_3 utilizes tensor factorization for ranking tags in order to provide tag recommendations, using multi-target networks which involve more than one entity type.
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_8" ], "mid": [ "2147512299", "2158781217", "1982725637", "2082600181" ], "abstract": [ "CANDECOMP PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified, however, the determination of tensor rank remains a challenging problem especially for CP rank . In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth of CP rank and prevent the overfitting problem, even when a large amount of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.", "Tensor factorization has become a popular method for learning from multi-relational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. 
To identify conditions under which factorization is an efficient approach for learning from relational data, we derive upper and lower bounds on the rank required to recover adjacency tensors. Based on our findings, we propose a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and present a scalable algorithm for computing the factorization. We show experimentally both that the proposed additive model does improve the predictive performance over pure latent variable methods and that it also reduces the required rank — and therefore runtime and memory complexity — significantly.", "Tensor (multiway array) factorization and decomposition has become an important tool for data mining. Fueled by the computational power of modern computer researchers can now analyze large-scale tensorial structured data that only a few years ago would have been impossible. Tensor factorizations have several advantages over two-way matrix factorizations including uniqueness of the optimal solution and component identification even when most of the data is missing. Furthermore, multiway decomposition techniques explicitly exploit the multiway structure that is lost when collapsing some of the modes of the tensor in order to analyze the data by regular matrix factorization approaches. Multiway decomposition is being applied to new fields every year and there is no doubt that the future will bring many exciting new applications. The aim of this overview is to introduce the basic concepts of tensor decompositions and demonstrate some of the many benefits and challenges of modeling data multiway for a wide variety of data and problem domains. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 24-40 DOI: 10.1002 widm.1", "A novel regularizer of the PARAFAC decomposition factors capturing the tensor's rank is proposed in this paper, as the key enabler for completion of three-way data arrays with missing entries. 
Set in a Bayesian framework, the tensor completion method incorporates prior information to enhance its smoothing and prediction capabilities. This probabilistic approach can naturally accommodate general models for the data distribution, lending itself to various fitting criteria that yield optimum estimates in the maximum-a-posteriori sense. In particular, two algorithms are devised for Gaussian- and Poisson-distributed data, that minimize the rank-regularized least-squares error and Kullback-Leibler divergence, respectively. The proposed technique is able to recover the “ground-truth” tensor rank when tested on synthetic data, and to complete brain imaging and yeast gene expression datasets with 50 and 15 of missing entries respectively, resulting in recovery errors at -11 dB and -15 dB." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
The particular problem of transport task allocation is called the PDP @cite_10 in operations research; see @cite_16 for a survey article. It is defined by a set of agents and a number of transport requests, each specifying an amount to be moved from one location to another. The problem is often studied with time windows @cite_14 or with vehicle capacity constraints @cite_5 , especially in the AGV domain @cite_12 . We currently do not consider time windows because they are usually not defined in industrial scenarios. Also, we are concerned with the special case of the PDP where every agent has a capacity of one unit, since this models the scenario best. This may also be referred to as the dial-a-ride problem @cite_10 .
{ "cite_N": [ "@cite_14", "@cite_5", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2051358678", "2212025445", "2136340918", "1998816621" ], "abstract": [ "In the dial-a-ride problem, users formulate requests for transportation from a specific origin to a specific destination. Transportation is carried out by vehicles providing a shared service. The problem consists of designing a set of minimum-cost vehicle routes satisfying capacity, duration, time window, pairing, precedence, and ride-time constraints. This paper introduces a mixed-integer programming formulation of the problem and a branch-and-cut algorithm. The algorithm uses new valid inequalities for the dial-a-ride problem as well as known valid inequalities for the traveling salesman, the vehicle routing, and the pick-up and delivery problems. Computational experiments performed on randomly generated instances show that the proposed approach can be used to solve small to medium-size instances.", "In the pickup and delivery problem with time windows (PDPTW), vehicles have to transport loads from origins to destinations respecting capacity and time constraints. In this paper, we present a two-phase method to solve the PDPTW. In the first phase, we apply a novel construction heuristics to generate an initial solution. In the second phase, a tabu search method is proposed to improve the solution. Another contribution of this paper is a strategy to generate good problem instances and benchmarking solutions for PDPTW, based on Solomon's benchmark test cases for VRPTW. Experimental results show that our approach yields very good solutions when compared with the benchmarking solutions.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. 
In addition, the traffic is bursty in general with unknown arrival. The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users.", "The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. 
More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15 . Moreover, improvements of up to 41 can be achieved for some test instances." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
MAPF is another intensively studied problem in multi-agent systems @cite_33 . MAPF asks how a number of agents can travel to their goal poses without colliding, which is also NP-hard @cite_15 . The colored pebble motion problem is comparable, since the color of a pebble makes the pebbles non-interchangeable @cite_31 . Solving the problem with collision avoidance at runtime can lead to deadlocks, especially in narrow environments, as discussed by @cite_29 and more recently by @cite_32 . Available sub-optimal solutions to the problem include Local Repair A* @cite_7 , WHCA* @cite_29 , and sampling-based approaches like multi-agent RRT* @cite_19 and ARMO @cite_11 . Optimal solvers are ICTS @cite_13 and CBS @cite_30 . Our solution is based on the latter, because it can be extended with task assignment and can then solve the introduced problem optimally. Previously, we extended CBS with non-uniform costs in an industrial AGV scenario @cite_22 .
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_33", "@cite_7", "@cite_22", "@cite_29", "@cite_32", "@cite_19", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2167979940", "2962969317", "2739829010", "2100695938" ], "abstract": [ "Multi-agent Pathfinding is a relevant problem in a wide range of domains, for example in robotics and video games research. Formally, the problem considers a graph consisting of vertices and edges, and a set of agents occupying vertices. An agent can only move to an unoccupied, neighbouring vertex, and the problem of finding the minimal sequence of moves to transfer each agent from its start location to its destination is an NP-hard problem. We present Push and Rotate, a new algorithm that is complete for Multi-agent Pathfinding problems in which there are at least two empty vertices. Push and Rotate first divides the graph into subgraphs within which it is possible for agents to reach any position of the subgraph, and then uses the simple push, swap, and rotate operations to find a solution; a post-processing algorithm is also presented that eliminates redundant moves. Push and Rotate can be seen as extending Luna and Bekris's Push and Swap algorithm, which we showed to be incomplete in a previous publication. In our experiments we compare our approach with the Push and Swap, MAPP, and Bibox algorithms. The latter algorithm is restricted to a smaller class of instances as it requires biconnected graphs, but can nevertheless be considered state of the art due to its strong performance. Our experiments show that Push and Swap suffers from incompleteness, MAPP is generally not competitive with Push and Rotate, and Bibox is better than Push and Rotate on randomly generated biconnected instances, while Push and Rotate performs better on grids.", "The multi-agent path-finding (MAPF) problem has recently received a lot of attention. 
However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multi-agent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm.", "This paper deals with solving cooperative path finding (CPF) problems in a makespan-optimal way. A feasible solution to the CPF problem lies in the moving of mobile agents where each agent has unique initial and goal positions. The abstraction adopted in CPF assumes that agents are discrete units that move over an undirected graph by traversing its edges. We focus specifically on makespan-optimal solutions to the CPF problem where the task is to generate solutions that are as short as possible in terms of the total number of time steps required for all agents to reach their goal positions. We demonstrate that reducing CPF to propositional satisfiability (SAT) represents a viable way to obtain makespan-optimal solutions. 
Several ways of encoding CPFs into propositional formulae are proposed and evaluated both theoretically and experimentally. Encodings based on the log and direct representations of decision variables are compared. The evaluation indicates that SAT-based solutions to CPF outperform the makespan-optimal versions of such search-based CPF solvers such as OD+ID, CBS, and ICTS in highly constrained scenarios (i.e., environments that are densely occupied by agents and where interactions among the agents are frequent). Moreover, the experiments clearly show that CPF encodings based on the direct representation of variables can be solved faster, although they are less space-efficient than log encodings.", "Multi-agent path planning is a challenging problem with numerous real-life applications. Running a centralized search such as A* in the combined state space of all units is complete and cost-optimal, but scales poorly, as the state space size is exponential in the number of mobile units. Traditional decentralized approaches, such as FAR and WHCA*, are faster and more scalable, being based on problem decomposition. However, such methods are incomplete and provide no guarantees with respect to the running time or the solution quality. They are not necessarily able to tell in a reasonable time whether they would succeed in finding a solution to a given instance. We introduce MAPP, a tractable algorithm for multi-agent path planning on undirected graphs. We present a basic version and several extensions. They have low-polynomial worst-case upper bounds for the running time, the memory requirements, and the length of solutions. Even though all algorithmic versions are incomplete in the general case, each provides formal guarantees on problems it can solve. For each version, we discuss the algorithm's completeness with respect to clearly defined subclasses of instances. Experiments were run on realistic game grid maps. 
MAPP solved 99.86 of all mobile units, which is 18-22 better than the percentage of FAR and WHCA*. MAPP marked 98.82 of all units as provably solvable during the first stage of plan computation. Parts of MAPP's computation can be re-used across instances on the same map. Speed-wise, MAPP is competitive or significantly faster than WHCA*, depending on whether MAPP performs all computations from scratch. When data that MAPP can re-use are preprocessed offline and readily available, MAPP is slower than the very fast FAR algorithm by a factor of 2.18 on average. MAPP's solutions are on average 20 longer than FAR's solutions and 7-31 longer than WHCA*'s solutions." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
One problem formulation that is more closely related to transport systems is TAPF, introduced by Ma and Koenig @cite_23 . It solves the assignment problem and the MAPF problem, but not concurrently, so the costs used for task allocation are not the true costs. Instead of single goals, we consider the allocation of whole transport tasks.
{ "cite_N": [ "@cite_23" ], "mid": [ "2068957801", "2212025445", "1998816621", "2962969317" ], "abstract": [ "The classic optimal transportation problem consists in finding the most cost-effective way of moving masses from one set of locations to another, minimizing its transportation cost. The formulation of this problem and its solution have been useful to understand various mathematical, economical, and control theory phenomena, such as, e.g., Witsenhausen's counterexample in stochastic control theory, the principal-agent problem in microeconomic theory, location and planning problems, etc. In this work, we incorporate the effect of network congestion to the optimal transportation problem and we are able to find a closed form expression for its solution. As an application of our work, we focus on the mobile association problem in cellular networks (the determination of the cells corresponding to each base station). In the continuum setting, this problem corresponds to the determination of the locations at which mobile terminals prefer to connect (by also considering the congestion they generate) to a given base station rather than to other base stations. Two types of problems have been addressed: a global optimization problem for minimizing the total power needed by the mobile terminals over the whole network (global optimum), and a user optimization problem, in which each mobile terminal chooses to which base station to connect in order to minimize its own cost (user equilibrium). This work combines optimal transportation with strategic decision making to characterize both solutions.", "In the pickup and delivery problem with time windows (PDPTW), vehicles have to transport loads from origins to destinations respecting capacity and time constraints. In this paper, we present a two-phase method to solve the PDPTW. In the first phase, we apply a novel construction heuristics to generate an initial solution. 
In the second phase, a tabu search method is proposed to improve the solution. Another contribution of this paper is a strategy to generate good problem instances and benchmarking solutions for PDPTW, based on Solomon's benchmark test cases for VRPTW. Experimental results show that our approach yields very good solutions when compared with the benchmarking solutions.", "The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). 
Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15 . Moreover, improvements of up to 41 can be achieved for some test instances.", "The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multi-agent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
A different type of problem is studied in vehicle routing with capacities @cite_4 , which focuses on deliveries from one central depot based on given demands. This is a different problem in the sense that it considers only one origin and additionally imposes capacity constraints.
{ "cite_N": [ "@cite_4" ], "mid": [ "2137208273", "1529079830", "1965882376", "2090424605" ], "abstract": [ "In this paper we deal with a vehicle routing problem on a tree-shaped network with a single depot. Customers are located on vertices of the tree, and each customer has a positive demand. Demands of customers are served by a fleet of identical vehicles with limited capacity. It is assumed that the demand of a customer is splittable, i.e., it can be served by more than one vehicle. The problem we are concerned with in this paper asks to find a set of tours of the vehicles with minimum total lengths. Each tour begins at the depot, visits a subset of the customers and returns to the depot without violating the capacity constraint. We show that the problem is NP-complete and propose a 1.5-approximation algorithm for the problem. We also give some computational results.", "This paper presents a new approximation algorithm for a vehicle routing problem on a tree-shaped network with a single depot. Customers are located on vertices of the tree, and each customer has a positive demand. Demands of customers are served by a fleet of identical vehicles with limited capacity. It is assumed that the demand of a customer is splittable, i.e., it can be served by more than one vehicle. The problem we are concerned with in this paper asks to find a set of tours of the vehicles with minimum total lengths. Each tour begins at the depot, visits a subset of the customers and returns to the depot without violating the capacity constraint We propose a 1.35078-approximation algorithm for the problem (exactly, (√41 - 1) 4), which is an improvement over the existing 1.5-approximation.", "The vehicle routing problem with multiple routes consists in determining the routing of a fleet of vehicles when each vehicle can perform multiple routes during its operations day. 
This problem is relevant in applications where the duration of each route is limited, for example when perishable goods are transported. In this work, we assume that a fixed-size fleet of vehicles is available and that it might not be possible to serve all customer requests, due to time constraints. Accordingly, the objective is first to maximize the number of served customers and then, to minimize the total distance traveled by the vehicles. An adaptive large neighborhood search, exploiting the ruin-and-recreate principle, is proposed for solving this problem. The various destruction and reconstruction operators take advantage of the hierarchical nature of the problem by working either at the customer, route or workday level. Computational results on Euclidean instances, derived from well-known benchmark instances, demonstrate the benefits of this multi-level approach.", "In the capacitated vehicle routing problem, introduced by Dantzig and Ramser in 1959, we are given the locations of n customers and a depot, along with a vehicle of capacity k, and wish to find a minimum length collection of tours, each starting from the depot and visiting at most k customers, whose union covers all the customers. We give a quasi-polynomial time approximation scheme for the setting where the customers and the depot are on the plane, and distances are given by the Euclidean metric." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
The joint solution of MATA and MAPF that we are proposing was previously studied in @cite_9 , where the problem is formulated as a MILP; however, due to collisions at the single-agent path level, we think it should be considered a MINLP problem. Therefore, the solver proposed in @cite_9 cannot solve the problem optimally, since it ignores agent-agent collisions.
{ "cite_N": [ "@cite_9" ], "mid": [ "2739829010", "2962969317", "2044113147", "2482025661" ], "abstract": [ "This paper deals with solving cooperative path finding (CPF) problems in a makespan-optimal way. A feasible solution to the CPF problem lies in the moving of mobile agents where each agent has unique initial and goal positions. The abstraction adopted in CPF assumes that agents are discrete units that move over an undirected graph by traversing its edges. We focus specifically on makespan-optimal solutions to the CPF problem where the task is to generate solutions that are as short as possible in terms of the total number of time steps required for all agents to reach their goal positions. We demonstrate that reducing CPF to propositional satisfiability (SAT) represents a viable way to obtain makespan-optimal solutions. Several ways of encoding CPFs into propositional formulae are proposed and evaluated both theoretically and experimentally. Encodings based on the log and direct representations of decision variables are compared. The evaluation indicates that SAT-based solutions to CPF outperform the makespan-optimal versions of such search-based CPF solvers such as OD+ID, CBS, and ICTS in highly constrained scenarios (i.e., environments that are densely occupied by agents and where interactions among the agents are frequent). Moreover, the experiments clearly show that CPF encodings based on the direct representation of variables can be solved faster, although they are less space-efficient than log encodings.", "The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multi-agent pickup and delivery (MAPD) problem. 
In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm.", "This paper addresses make span optimal solving of cooperative path-finding problem (CPF) by translating it to propositional satisfiability (SAT). The task is to relocate set of agents to given goal positions so that they do not collide with each other. A novel SAT encoding of CPF is suggested. The novel encoding uses the concept of matching in a bipartite graph to separate spatial constraint of CPF from consideration of individual agents. The separation allowed reducing the size of encoding significantly. The conducted experimental evaluation shown that novel encoding can be solved faster than existing encodings for CPF and also that the SAT based methods dominates over A based methods in environment densely occupied by agents.", "We study the TAPF (combined target-assignment and path-finding) problem for teams of agents in known terrain, which generalizes both the anonymous and non-anonymous multi-agent path-finding problems. Each of the teams is given the same number of targets as there are agents in the team. 
Each agent has to move to exactly one target given to its team such that all targets are visited. The TAPF problem is to first assign agents to targets and then plan collision-free paths for the agents to their targets in a way such that the makespan is minimized. We present the CBM (Conflict-Based Min-Cost-Flow) algorithm, a hierarchical algorithm that solves TAPF instances optimally by combining ideas from anonymous and non-anonymous multi-agent path-finding algorithms. On the low level, CBM uses a min-cost max-flow algorithm on a time-expanded network to assign all agents in a single team to targets and plan their paths. On the high level, CBM uses conflict-based search to resolve collisions among agents in different teams. Theoretically, we prove that CBM is correct, complete and optimal. Experimentally, we show the scalability of CBM to TAPF instances with dozens of teams and hundreds of agents and adapt it to a simulated warehouse system." ] }
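The agent-agent collisions that make the joint problem nonlinear come in two standard MAPF flavors: vertex conflicts (two agents in the same cell at the same time step) and edge conflicts (two agents swapping cells). A minimal sketch of detecting both, assuming paths are represented as one grid cell per time step (an illustrative representation, not the formulation used in the cited papers):

```python
# Hypothetical sketch: detecting agent-agent conflicts between two
# single-agent paths -- the coupling the MILP relaxation above ignores.
# A path is a list of grid cells, one per time step (illustrative only).

def vertex_conflicts(path_a, path_b):
    """Time steps at which both agents occupy the same cell."""
    return [t for t, (a, b) in enumerate(zip(path_a, path_b)) if a == b]

def edge_conflicts(path_a, path_b):
    """Time steps t at which the agents swap cells between t and t+1."""
    return [t for t in range(min(len(path_a), len(path_b)) - 1)
            if path_a[t] == path_b[t + 1] and path_a[t + 1] == path_b[t]]

# Two agents crossing through the same cell at t=1:
print(vertex_conflicts([(0, 0), (0, 1), (0, 2)],
                       [(0, 2), (0, 1), (0, 0)]))  # [1]
# Two agents swapping cells between t=0 and t=1 (head-on collision):
print(edge_conflicts([(0, 0), (0, 1)], [(0, 1), (0, 0)]))  # [0]
```

Individually optimal single-agent paths can be conflict-free in the relaxed model yet collide under these checks, which is why the assignment and path-planning sub-problems cannot be decoupled without losing optimality.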
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
A similar problem that takes uncertainties into account targets applications in highly dynamic environments @cite_2 . That solution is also sub-optimal because agent-agent collisions are not considered at planning time.
{ "cite_N": [ "@cite_2" ], "mid": [ "2082585576", "2339343364", "2524264252", "2141256287" ], "abstract": [ "In this paper, we study the safe navigation of a mobile robot through crowds of dynamic agents with uncertain trajectories. Existing algorithms suffer from the “freezing robot” problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing the predictive uncertainty for individual agents by employing more informed models or heuristically limiting the predictive covariance to prevent this overcautious behavior. In this work, we demonstrate that both the individual prediction and the predictive uncertainty have little to do with the frozen robot problem. Our key insight is that dynamic agents solve the frozen robot problem by engaging in “joint collision avoidance”: They cooperatively make room to create feasible trajectories. We develop IGP, a nonparametric statistical model based on dependent output Gaussian processes that can estimate crowd interaction from data. Our model naturally captures the non-Markov nature of agent trajectories, as well as their goal-driven navigation. We then show how planning in this model can be efficiently implemented using particle based inference. Lastly, we evaluate our model on a dataset of pedestrians entering and leaving a building, first comparing the model with actual pedestrians, and find that the algorithm either outperforms human pedestrians or performs very similarly to the pedestrians. 
We also present an experiment where a covariance reduction method results in highly overcautious behavior, while our model performs desirably.", "In sequential decision-making problems under uncertainty, an agent makes decisions, one after another, considering the current state of the environment where she evolves. In most work, the environment the agent evolves in is assumed to be stationary, i.e., its dynamics do not change over time. However, the stationarity hypothesis can be invalid if, for instance, exogenous events can occur. In this document, we are interested in sequential decision-making in non-stationary environments. We propose a new model named HS3MDP, allowing us to represent non-stationary problems whose dynamics evolve among a finite set of contexts. In order to efficiently solve those problems, we adapt the POMCP algorithm to HS3MDPs. We also present RLCD with SCD, a new method to learn the dynamics of the environments, without knowing a priori the number of contexts. We then explore the field of argumentation problems, where few works consider sequential decision-making. We address two types of problems: stochastic debates (APS ) and mediation problems with non-stationary agents (DMP). In this work, we present a model formalizing APS and allowing us to transform them into an MOMDP in order to optimize the sequence of arguments of one agent in the debate. We then extend this model to DMPs to allow a mediator to strategically organize speak-turns in a debate.", "Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g. goal) is unobservable to the others. In particular, finding time efficient paths often requires anticipating interaction with neighboring agents, the process of which can be computationally prohibitive. 
This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which effectively offloads the online computation (for predicting interaction patterns) to an offline learning procedure. Specifically, the proposed approach develops a value network that encodes the estimated time to the goal given an agent's joint configuration (positions and velocities) with its neighbors. Use of the value network not only admits efficient (i.e., real-time implementable) queries for finding a collision-free velocity vector, but also considers the uncertainty in the other agents' motion. Simulation results show more than 26 percent improvement in paths quality (i.e., time to reach the goal) when compared with optimal reciprocal collision avoidance (ORCA), a state-of-the-art collision avoidance strategy.", "In many real-life optimization problems involving multiple agents, the rewards are not necessarily known exactly in advance, but rather depend on sources of exogenous uncertainty. For instance, delivery companies might have to coordinate to choose who should serve which foreseen customer, under uncertainty in the locations of the customers. The framework of Distributed Constraint Optimization under Stochastic Uncertainty was proposed to model such problems; in this paper, we generalize this formalism by introducing the concept of evaluation functions that model various optimization criteria. We take the example of three such evaluation functions, expectation, consensus, and robustness, and we adapt and generalize two previous algorithms accordingly. Our experimental results on a class of Vehicle Routing Problems show that incomplete algorithms are not only cheaper than complete ones (in terms of simulated time, Non-Concurrent Constraint Checks, and information exchange), but they are also often able to find the optimal solution. 
We also show that exchanging more information about the dependencies of their respective cost functions on the sources of uncertainty can help the agents discover higher-quality solutions." ] }
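The abstract above evaluates sub-optimal solvers "in terms of regret", i.e., the cost gap to the optimal baseline. A minimal sketch of that metric, with illustrative per-instance costs that are not taken from the paper:

```python
# Illustrative sketch (not the papers' code): regret of a sub-optimal
# solver, measured per instance against an optimal baseline such as TCBS.

def regret(suboptimal_cost, optimal_cost):
    """Absolute cost gap; assumes both solvers solved the same instance."""
    return suboptimal_cost - optimal_cost

costs_optimal = [10, 14, 9]   # hypothetical optimal-baseline costs
costs_greedy = [12, 14, 13]   # hypothetical sub-optimal-solver costs
print([regret(s, o) for s, o in zip(costs_greedy, costs_optimal)])  # [2, 0, 4]
```

A regret of zero means the sub-optimal solver happened to find the optimal solution on that instance; the optimal solver thus serves purely as a yardstick where it still scales.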
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
Similar problems have recently also been formulated by @cite_20 , @cite_18 and @cite_8 for domains that do not involve transport tasks as we consider them. These works find interesting sub-optimal solutions via TPTS @cite_18 , DCOP @cite_20 , and answer set programming (ASP) @cite_8 . All of these approaches may solve the sub-problems optimally, but they do not solve the combined problem optimally: doing so would require propagating the implications of the task assignment into the path-finding problem and vice versa.
{ "cite_N": [ "@cite_18", "@cite_20", "@cite_8" ], "mid": [ "2212025445", "1507209274", "2555632090", "1977133702" ], "abstract": [ "In the pickup and delivery problem with time windows (PDPTW), vehicles have to transport loads from origins to destinations respecting capacity and time constraints. In this paper, we present a two-phase method to solve the PDPTW. In the first phase, we apply a novel construction heuristics to generate an initial solution. In the second phase, a tabu search method is proposed to improve the solution. Another contribution of this paper is a strategy to generate good problem instances and benchmarking solutions for PDPTW, based on Solomon's benchmark test cases for VRPTW. Experimental results show that our approach yields very good solutions when compared with the benchmarking solutions.", "We study the unsplittable flow on a path problem (UFP), which arises naturally in many applications such as bandwidth allocation, job scheduling, and caching. Here we are given a path with nonnegative edge capacities and a set of tasks, which are characterized by a subpath, a demand, and a profit. The goal is to find the most profitable subset of tasks whose total demand does not violate the edge capacities. Not surprisingly, this problem has received a lot of attention in the research community. If the demand of each task is at most a small enough fraction δ of the capacity along its subpath (δ-small tasks), then it has been known for a long time [, ICALP 2003] how to compute a solution of value arbitrarily close to the optimum via LP rounding. However, much remains unknown for the complementary case, that is, when the demand of each task is at least some fraction δ > 0 of the smallest capacity of its subpath (δ-large tasks). For this setting a constant factor approximation, improving on an earlier logarithmic approximation, was found only recently [, FOCS 2011]. In this paper we present a PTAS for δ-large tasks, for any constant δ > 0. 
Key to this result is a complex geometrically inspired dynamic program. Each task is represented as a segment underneath the capacity curve, and we identify a proper maze-like structure so that each corridor of the maze is crossed by only O(1) tasks in the optimal solution. The maze has a tree topology, which guides our dynamic program. Our result implies a 2 + e approximation for UFP, for any constant e > 0, improving on the previously best 7 + e approximation by We remark that our improved approximation algorithm matches the best known approximation ratio for the considerably easier special case of uniform edge capacities.", "We are interested in the computation of the transport map of an Optimal Transport problem. Most of the computational approaches of Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling @math but do not address the problem of learning the underlying transport map @math linked to the original Monge problem. Consequently, it lowers the potential usage of such methods in contexts where out-of-samples computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows to smooth the result of the Optimal Transport and generalize it to out-of-samples examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing.", "In this paper we review the exact algorithms proposed in the last three decades for the solution of the vehicle routing problem with time windows (VRPTW). The exact algorithms for the VRPTW are in many aspects inherited from work on the traveling salesman problem (TSP). In recognition of this fact this paper is structured relative to four seminal papers concerning the formulation and exact solution of the TSP, i.e. 
the arc formulation, the arc-node formulation, the spanning tree formulation, and the path formulation. We give a detailed analysis of the formulations of the VRPTW and a review of the literature related to the different formulations. There are two main lines of development in relation to the exact algorithms for the VRPTW. One is concerned with the general decomposition approach and the solution to certain dual problems associated with the VRPTW. Another more recent direction is concerned with the analysis of the polyhedral structure of the VRPTW. We conclude by examining possible future lines of research in the area of the VRPTW." ] }
1907.10360
2963382051
We consider multi-agent transport task problems where, e.g. in a factory setting, items have to be delivered from a given start to a goal pose while the delivering robots need to avoid collisions with each other on the floor. We introduce a Task Conflict-Based Search (TCBS) Algorithm to solve the combined delivery task allocation and multi-agent path planning problem optimally. The problem is known to be NP-hard and the optimal solver cannot scale. However, we introduce it as a baseline to evaluate the sub-optimality of other approaches. We show experimental results that compare our solver with different sub-optimal ones in terms of regret.
The field of @cite_27 also combines two planning domains, but for mobile manipulators. Our algorithm borrows the concept of hierarchical planners and the reaction to planning errors, i.e., replanning when no path is found.
{ "cite_N": [ "@cite_27" ], "mid": [ "2144349668", "2968156685", "2611332761", "2141088850" ], "abstract": [ "The planning problem is considered for a mobile manipulator system which must perform a sequence of tasks defined by position, orientation, force, and moment vectors at the end effector. Each task can be performed in multiple configurations due to the redundancy introduced by mobility. The planning problem is formulated as an optimization problem in which the decision variables for mobility (base position) are separated from the manipulator joint angles in the cost function. The resulting numerical problem is nonlinear with nonconvex, unconnected feasible regions in the decision space. Simulated annealing is proposed as a general solution method for obtaining near-optimal results. The problem formulation and numerical solution by simulated annealing are illustrated for a positioning system with five degrees of freedom. These results are compared with results obtained by conventional nonlinear programming techniques customized for the particular example system. >", "We present an algorithm that produces a plan for relocating obstacles in order to grasp a target in clutter by a robotic manipulator without collisions. We consider configurations where objects are densely populated in a constrained and confined space. Thus, there exists no collision-free path for the manipulator without relocating obstacles. Since the problem of planning for object rearrangement has shown to be NP-hard, it is difficult to perform manipulation tasks efficiently which could frequently happen in service domains (e.g., taking out a target from a shelf or a fridge).Our proposed planner employs a collision avoidance scheme which has been widely used in mobile robot navigation. The planner determines an obstacle to be removed quickly in real time. It also can deal with dynamic changes in the configuration (e.g., changes in object poses). 
Our method is shown to be complete and runs in polynomial time. Experimental results in a realistic simulated environment show that our method improves up to 31 of the execution time compared to other competitors.", "Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose LatPlan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), and a pair of images representing the initial and the goal states (planning inputs), LatPlan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. The contribution of this paper is twofold: (1) State Autoencoder, which finds a propositional state representation of the environment using a Variational Autoencoder. It generates a discrete latent vector from the images, based on which a PDDL model can be constructed and then solved by an off-the-shelf planner. (2) Action Autoencoder Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions effects), and provides a successor function for the implicit graph search. We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi and LightsOut.", "Over the years increasingly sophisticated planning algorithms have been developed. These have made for more efficient planners, but unfortunately these planners still suffer from combinatorial complexity even in simple domains. Theoretical results demonstrate that planning is in the worst case intractable. 
Nevertheless, planning in particular domains can often be made tractable by utilizing additional domain structure. In fact, it has long been acknowledged that domain independent planners need domain dependent information to help them plan effectively. In this work we present an approach for representing and utilizing domain specific control knowledge. In particular, we show how domain dependent search control knowledge can be represented in a temporal logic, and then utilized to effectively control a forward-chaining planner. There are a number of advantages to our approach, including a declarative semantics for the search control knowledge; a high degree of modularity (new search control knowledge can be added without affecting previous control knowledge); and an independence of this knowledge from the details of the planning algorithm. We have implemented our ideas in the TLPLAN system, and have been able to demonstrate its remarkable effectiveness in a wide range of planning domains." ] }
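The borrowed hierarchical idea can be sketched as a two-level loop: a high-level planner proposes a task assignment, and if the low-level path planner reports a planning error (no collision-free path), the high level reacts by falling back to the next candidate. All names below are illustrative, not from the cited work:

```python
# Hypothetical sketch of a hierarchical planner that reacts to low-level
# planning errors, as described in the text above.

def hierarchical_plan(assignments, plan_paths):
    """Try candidate task assignments until low-level planning succeeds.

    assignments: iterable of candidate assignments, ordered by estimated cost.
    plan_paths:  callable returning agent paths, or None on a planning error.
    """
    for assignment in assignments:
        paths = plan_paths(assignment)
        if paths is not None:   # low-level planning succeeded
            return assignment, paths
    return None                 # no feasible assignment found

# Toy usage: only the second candidate assignment is path-feasible.
feasible = {"a2": [["s2", "g2"]]}
result = hierarchical_plan(["a1", "a2"], lambda a: feasible.get(a))
print(result)  # ('a2', [['s2', 'g2']])
```

The key design point is that a low-level failure is an ordinary return value, not a dead end: it simply drives the high level to revise its decision.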
1907.10371
2963170514
Comments on social media are very diverse, in terms of content, style and vocabulary, which make generating comments much more challenging than other existing natural language generation (NLG) tasks. Besides, since different user has different expression habits, it is necessary to take the user's profile into consideration when generating comments. In this paper, we introduce the task of automatic generation of personalized comment (AGPC) for social media. Based on tens of thousands of users' real comments and corresponding user profiles on weibo, we propose Personalized Comment Generation Network (PCGN) for AGPC. The model utilizes user feature embedding with a gated memory and attends to user description to model personality of users. In addition, external user representation is taken into consideration during the decoding to enhance the comments generation. Experimental results show that our model can generate natural, human-like and personalized comments.
This paper focuses on the comment generation task, which can be further divided into generating a comment from structured data @cite_1 , text @cite_7 , images @cite_0 , and videos @cite_4 , respectively.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_1", "@cite_7" ], "mid": [ "1985710361", "2890638727", "1843891098", "2341349540" ], "abstract": [ "Comments left by readers on Web documents contain valuable information that can be utilized in different information retrieval tasks including document search, visualization, and summarization. In this paper, we study the problem of comments-oriented document summarization and aim to summarize a Web document (e.g., a blog post) by considering not only its content, but also the comments left by its readers. We identify three relations (namely, topic, quotation, and mention) by which comments can be linked to one another, and model the relations in three graphs. The importance of each comment is then scored by: (i) graph-based method, where the three graphs are merged into a multi-relation graph; (ii) tensor-based method, where the three graphs are used to construct a 3rd-order tensor. To generate a comments-oriented summary, we extract sentences from the given Web document using either feature-biased approach or uniform-document approach. The former scores sentences to bias keywords derived from comments; while the latter scores sentences uniformly with comments. In our experiments using a set of blog posts with manually labeled sentences, our proposed summarization methods utilizing comments showed significant improvement over those not using comments. The methods using feature-biased sentence extraction approach were observed to outperform that using uniform-document approach.", "We introduce the task of automatic live commenting. Live commenting, which is also called video barrage', is an emerging feature on online video sites that allows real-time comments from viewers to fly across the screen like bullets or roll at the right side of the screen. The live comments are a mixture of opinions for the video and the chit chats with other comments. 
Automatic live commenting requires AI agents to comprehend the videos and interact with human viewers who also make the comments, so it is a good testbed of an AI agent's ability of dealing with both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. Then, we introduce two neural models to generate live comments based on the visual and textual contexts, which achieve better performance than previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting where the model is asked to sort a set of candidate comments based on the log-likelihood score, and evaluated on metrics such as mean-reciprocal-rank. Putting it all together, we demonstrate the first LiveBot'.", "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.", "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. 
The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines." ] }
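The PCGN abstract above mentions modeling user personality via "user feature embedding with a gated memory". The general mechanism is a learned sigmoid gate that blends a running memory with the user feature per dimension; the scalar-per-dimension gating below is an illustrative assumption, not the paper's exact formulation:

```python
# Minimal sketch of a sigmoid-gated memory update, an assumption-based
# illustration of the "gated memory" idea, not PCGN's actual equations.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_update(memory, user_feat, gate_logits):
    """Blend the running memory with the user feature, per dimension."""
    gates = [sigmoid(g) for g in gate_logits]
    return [g * u + (1.0 - g) * m
            for g, u, m in zip(gates, user_feat, memory)]

# A saturated positive gate copies the user feature; a saturated
# negative gate keeps the old memory.
mem = gated_update([0.0, 1.0], [1.0, 0.0], [100.0, -100.0])
print([round(v, 3) for v in mem])  # [1.0, 1.0]
```

In a trained model the gate logits would themselves be functions of the decoder state, letting the network decide at each step how much user information to inject.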
1907.10451
2963188742
How to perform effective information fusion of different modalities is a core factor in boosting the performance of RGBT tracking. This paper presents a novel deep fusion algorithm based on the representations from an end-to-end trained convolutional neural network. To deploy the complementarity of features of all layers, we propose a recursive strategy to densely aggregate these features that yield robust representations of target objects in each modality. In different modalities, we propose to prune the densely aggregated features of all modalities in a collaborative way. In a specific, we employ the operations of global average pooling and weighted random selection to perform channel scoring and selection, which could remove redundant and noisy features to achieve more robust feature representation. Experimental results on two RGBT tracking benchmark datasets suggest that our tracker achieves clear state-of-the-art against other RGB and RGBT tracking methods.
RGBT tracking has received increasing attention in the computer vision community with the growing popularity of thermal infrared sensors. Recent RGBT tracking methods mainly focus on sparse representation because of its capability of suppressing noise and errors @cite_35 , @cite_15 . Wu @cite_35 concatenate the image patches from the RGB and thermal sources into a one-dimensional vector that is then sparsely represented in the target template space. A collaborative sparse representation based tracker is proposed by Li @cite_15 to jointly optimize the sparse codes and modality weights online for more reliable tracking. Li @cite_37 further consider the heterogeneous properties of the different modalities and the noise effects of initial seeds in a cross-modal ranking model. These methods rely on handcrafted features to track objects, and thus struggle to handle significant appearance changes caused by background clutter, occlusion, and deformation within each modality.
{ "cite_N": [ "@cite_35", "@cite_15", "@cite_37" ], "mid": [ "2896228140", "2577056945", "2901716381", "2775609985" ], "abstract": [ "Due to the complementary benefits of visible (RGB) and thermal infrared (T) data, RGB-T object tracking attracts more and more attention recently for boosting the performance under adverse illumination conditions. Existing RGB-T tracking methods usually localize a target object with a bounding box, in which the trackers or detectors is often affected by the inclusion of background clutter. To address this problem, this paper presents a novel approach to suppress background effects for RGB-T tracking. Our approach relies on a novel cross-modal manifold ranking algorithm. First, we integrate the soft cross-modality consistency into the ranking model which allows the sparse inconsistency to account for the different properties between these two modalities. Second, we propose an optimal query learning method to handle label noises of queries. In particular, we introduce an intermediate variable to represent the optimal labels, and formulate it as a (l_1 )-optimization based sparse learning problem. Moreover, we propose a single unified optimization algorithm to solve the proposed model with stable and efficient convergence behavior. Finally, the ranking results are incorporated into the patch-based object features to address the background effects, and the structured SVM is then adopted to perform RGB-T tracking. Extensive experiments suggest that the proposed approach performs well against the state-of-the-art methods on large-scale benchmark datasets.", "This paper studies the problem of object tracking in challenging scenarios by leveraging multimodal visual data. We propose a grayscale-thermal object tracking method in Bayesian filtering framework based on multitask Laplacian sparse representation. 
Given one bounding box, we extract a set of overlapping local patches within it, and pursue the multitask joint sparse representation for grayscale and thermal modalities. Then, the representation coefficients of the two modalities are concatenated into a vector to represent the feature of the bounding box. Moreover, the similarity between each patch pair is deployed to refine their representation coefficients in the sparse representation, which can be formulated as the Laplacian sparse representation. We also incorporate the modal reliability into the Laplacian sparse representation to achieve an adaptive fusion of different source data. Experiments on two grayscale-thermal datasets suggest that the proposed approach outperforms both grayscale and grayscale-thermal tracking approaches.", "This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGB-T tracking). We propose a novel deep network architecture \"quality-aware Feature Aggregation Network (FANet)\" to achieve quality-aware aggregations of both hierarchical features and multimodal information for robust online RGB-T tracking. Unlike existing works that directly concatenate hierarchical deep features, our FANet learns the layer weights to adaptively aggregate them to handle the challenge of significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion within each modality. Moreover, we employ the operations of max pooling, interpolation upsampling and convolution to transform these hierarchical and multi-resolution features into a uniform space at the same resolution for more effective feature aggregation. In different modalities, we elaborately design a multimodal aggregation sub-network to integrate all modalities collaboratively based on the predicted reliability degrees. 
Extensive experiments on large-scale benchmark datasets demonstrate that our FANet significantly outperforms other state-of-the-art RGB-T tracking methods.", "This paper investigates how to integrate the complementary information from RGB and thermal (RGB-T) sources for object tracking. We propose a novel Convolutional Neural Network (ConvNet) architecture, including a two-stream ConvNet and a FusionNet, to achieve adaptive fusion of different source data for robust RGB-T tracking. Both RGB and thermal streams extract generic semantic information of the target object. In particular, the thermal stream is pre-trained on the ImageNet dataset to encode rich semantic information, and then fine-tuned using thermal images to capture the specific properties of thermal information. For adaptive fusion of different modalities while avoiding redundant noises, the FusionNet is employed to select most discriminative feature maps from the outputs of the two-stream ConvNet, and updated online to adapt to appearance variations of the target object. Finally, the object locations are efficiently predicted by applying the multi-channel correlation filter on the fused feature maps. Extensive experiments on the recently public benchmark GTOT verify the effectiveness of the proposed approach against other state-of-the-art RGB-T trackers." ] }
1907.10451
2963188742
How to perform effective information fusion of different modalities is a core factor in boosting the performance of RGBT tracking. This paper presents a novel deep fusion algorithm based on the representations from an end-to-end trained convolutional neural network. To exploit the complementarity of features across all layers, we propose a recursive strategy to densely aggregate these features, yielding robust representations of target objects in each modality. We then propose to prune the densely aggregated features of all modalities in a collaborative way. Specifically, we employ global average pooling and weighted random selection to perform channel scoring and selection, which removes redundant and noisy features to achieve more robust feature representations. Experimental results on two RGBT tracking benchmark datasets suggest that our tracker achieves clear state-of-the-art performance against other RGB and RGBT tracking methods.
Feature aggregation @cite_30 @cite_23 is becoming more and more popular for improving network performance by enhancing feature representations. The field of visual tracking @cite_16 @cite_33 @cite_2 @cite_50 is no exception: many methods improve tracking performance through feature aggregation. Li et al. @cite_16 design a FusionNet to directly aggregate RGB and thermal feature maps from the outputs of a two-stream ConvNet. Danelljan et al. @cite_50 @cite_2 propose an aggregation of handcrafted low-level and hierarchical deep features, employing an implicit interpolation model to pose the learning problem in the continuous spatial domain, which enables efficient integration of multi-resolution feature maps. Qi et al. @cite_33 take full advantage of features from different CNN layers and use an adaptive Hedge method to combine several CNN trackers into a stronger one. Li et al. @cite_26 propose a new architecture to aggregate middle-to-deep layer features, which not only improves accuracy but also reduces model size. Different from these methods, we propose a novel feature aggregation and pruning framework for RGBT tracking, which recursively aggregates deep features from all layers while compressing feature channels.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_23", "@cite_2", "@cite_50", "@cite_16" ], "mid": [ "2901716381", "2258484932", "2798365843", "2775609985" ], "abstract": [ "This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGB-T tracking). We propose a novel deep network architecture \"quality-aware Feature Aggregation Network (FANet)\" to achieve quality-aware aggregations of both hierarchical features and multimodal information for robust online RGB-T tracking. Unlike existing works that directly concatenate hierarchical deep features, our FANet learns the layer weights to adaptively aggregate them to handle the challenge of significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion within each modality. Moreover, we employ the operations of max pooling, interpolation upsampling and convolution to transform these hierarchical and multi-resolution features into a uniform space at the same resolution for more effective feature aggregation. In different modalities, we elaborately design a multimodal aggregation sub-network to integrate all modalities collaboratively based on the predicted reliability degrees. Extensive experiments on large-scale benchmark datasets demonstrate that our FANet significantly outperforms other state-of-the-art RGB-T tracking methods.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. 
To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly.", "Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher order feature statistics. 
We show that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations. Such a filter bank is well structured, properly initialized and discriminatively learned through a novel asymmetric multi-stream architecture with convolutional filter supervision and a non-random layer initialization. Experimental results show that our approach achieves state-of-the-art on three publicly available fine-grained recognition datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and visualizations are provided to understand our approach.", "This paper investigates how to integrate the complementary information from RGB and thermal (RGB-T) sources for object tracking. We propose a novel Convolutional Neural Network (ConvNet) architecture, including a two-stream ConvNet and a FusionNet, to achieve adaptive fusion of different source data for robust RGB-T tracking. Both RGB and thermal streams extract generic semantic information of the target object. In particular, the thermal stream is pre-trained on the ImageNet dataset to encode rich semantic information, and then fine-tuned using thermal images to capture the specific properties of thermal information. For adaptive fusion of different modalities while avoiding redundant noises, the FusionNet is employed to select most discriminative feature maps from the outputs of the two-stream ConvNet, and updated online to adapt to appearance variations of the target object. Finally, the object locations are efficiently predicted by applying the multi-channel correlation filter on the fused feature maps. Extensive experiments on the recently public benchmark GTOT verify the effectiveness of the proposed approach against other state-of-the-art RGB-T trackers." ] }
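The channel scoring and selection step described above (global average pooling followed by weighted random selection) can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the paper's implementation; all names are ours, and the softmax over pooled scores is one plausible way to turn scores into selection probabilities.

```python
import numpy as np

def prune_channels(features, keep, rng=None):
    """Score channels by global average pooling, then keep a subset via
    weighted random selection. A rough sketch of the scoring/selection
    step described in the abstract above; names and details are ours."""
    rng = np.random.default_rng() if rng is None else rng
    C = features.shape[0]
    scores = features.mean(axis=(1, 2))       # global average pooling -> (C,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax over channel scores
    idx = rng.choice(C, size=keep, replace=False, p=probs)
    return features[np.sort(idx)]             # pruned feature stack

feats = np.random.rand(64, 7, 7)              # toy (channels, H, W) tensor
print(prune_channels(feats, keep=16).shape)   # (16, 7, 7)
```

Sampling without replacement, weighted by pooled activation strength, is what makes the selection both score-driven and stochastic, as the abstract's "weighted random selection" suggests.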
1907.10484
2966674341
Abstract Audit logs serve as a critical component in enterprise business systems and are used for auditing, storing, and tracking changes made to the data. However, audit logs are vulnerable to a series of attacks enabling adversaries to tamper data and corresponding audit logs without getting detected. Among them, two well-known attacks are “the physical access attack,” which exploits root privileges, and “the remote vulnerability attack,” which compromises known vulnerabilities in database systems. In this paper, we present BlockAudit: a scalable and tamper-proof system that leverages the design properties of audit logs and security guarantees of blockchain to enable secure and trustworthy audit logs. Towards that, we construct the design schema of BlockAudit and outline its functional and operational procedures. We implement our design on a custom-built Practical Byzantine Fault Tolerance (PBFT) blockchain system and evaluate the performance in terms of latency, network size, payload size, and transaction rate. Our results show that conventional audit logs can seamlessly transition into BlockAudit to achieve higher security and defend against the known attacks on audit logs.
Blockchain and Audit Logs. Combining blockchain and audit logs, Sutton and Samvi @cite_52 proposed a blockchain-based approach that stores an integrity-proof digest in the Bitcoin blockchain. Bitcoin uses a proof-of-work (PoW) consensus protocol. As we show later in tab:ca , PoW suffers from low throughput and high confirmation time. In particular, Bitcoin has a maximum throughput of 3--7 transactions per second. Therefore, for audit log applications with a high transaction generation rate, the approach of @cite_52 can be insufficient. Castaldo et al. @cite_56 proposed a logging system to facilitate the exchange of electronic health data across multiple countries in Europe. They created a centralized logging system that provides traceability through unforgeable log management using blockchain. Cucrull et al. @cite_7 proposed a system that uses blockchain to enhance the security of immutable logs. Log integrity proofs are published in the blockchain, providing non-repudiation security properties.
{ "cite_N": [ "@cite_52", "@cite_7", "@cite_56" ], "mid": [ "2520906649", "2671457145", "2763836263", "2536325433" ], "abstract": [ "Several applications require robust and tamper-proof logging systems, e.g. electronic voting or bank information systems. At Scytl we use a technology, called immutable logs, that we deploy in our electronic voting solutions. This technology ensures the integrity, authenticity and non-repudiation of the generated logs, thus in case of any event the auditors can use them to investigate the issue. As a security recommendation it is advisable to store and or replicate the information logged in a location where the logger has no writing or modification permissions. Otherwise, if the logger gets compromised, the data previously generated could be truncated or altered using the same private keys. This approach is costly and does not protect against collusion between the logger and the entities that hold the replicated data. In order to tackle these issues, in this article we present a proposal and implementation to immutabilize integrity proofs of the secure logs within the Bitcoin’s blockchain. Due to the properties of the proposal, the integrity of the immutabilized logs is guaranteed without performing log data replication and even in case the logger gets latterly compromised.", "We present Catena, an efficiently-verifiable Bitcoinwitnessing scheme. Catena enables any number of thin clients, such as mobile phones, to efficiently agree on a log of application-specific statements managed by an adversarial server. Catenaimplements a log as an OP_RETURN transaction chain andprevents forks in the log by leveraging Bitcoin's security againstdouble spends. Specifically, if a log server wants to equivocate ithas to double spend a Bitcoin transaction output. 
Thus, Catenalogs are as hard to fork as the Bitcoin blockchain: an adversarywithout a large fraction of the network's computational powercannot fork Bitcoin and thus cannot fork a Catena log either. However, different from previous Bitcoin-based work, Catenadecreases the bandwidth requirements of log auditors from 90GB to only tens of megabytes. More precisely, our clients onlyneed to download all Bitcoin block headers (currently less than35 MB) and a small, 600-byte proof for each statement in a block. We implement Catena in Java using the bitcoinj library and use itto extend CONIKS, a recent key transparency scheme, to witnessits public-key directory in the Bitcoin blockchain where it can beefficiently verified by auditors. We show that Catena can securemany systems today, such as public-key directories, Tor directoryservers and software transparency schemes.", "Privacy audit logs are used to capture the actions of participants in a data sharing environment in order for auditors to check compliance with privacy policies. However, collusion may occur between the auditors and participants to obfuscate actions that should be recorded in the audit logs. In this paper, we propose a Linked Data based method of utilizing blockchain technology to create tamper-proof audit logs that provide proof of log manipulation and non-repudiation. We also provide experimental validation of the scalability of our solution using an existing Linked Data privacy audit log model.", "Proof of Work (PoW) powered blockchains currently account for more than 90 of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. 
This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions." ] }
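The tamper-evidence property that these blockchain-backed log systems build on can be shown with a toy hash chain: each entry commits to its predecessor's digest, so editing any stored record breaks verification from that point on. This is a minimal sketch of the underlying idea only, not the BlockAudit/PBFT design or any cited system's format.

```python
import hashlib
import json

class HashChainedLog:
    """Toy append-only audit log: each entry commits to the previous
    entry's hash, so in-place tampering is detectable. Illustrative
    only; field names and encoding are our own assumptions."""
    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64                # genesis value

    def append(self, record):
        entry = {"record": record, "prev": self.prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.prev_hash = digest

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {"record": e["record"], "prev": e["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = HashChainedLog()
log.append({"user": "alice", "action": "update"})
log.append({"user": "bob", "action": "delete"})
print(log.verify())                              # True
log.entries[0]["record"]["action"] = "read"      # tamper with a record
print(log.verify())                              # False
```

A real system additionally needs the chain head to be anchored somewhere the logger cannot rewrite (e.g. a blockchain), which is exactly the gap the cited works address.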
1907.10588
2966220704
Crowdsourcing platforms enable companies to propose tasks to a large crowd of users. The workers receive compensation for their work according to the seriousness of the tasks they managed to accomplish. The evaluation of the quality of responses obtained from the crowd remains one of the most important problems in this context. Several methods have been proposed to estimate the expertise level of crowd workers. We propose an innovative measure of expertise, assuming that we possess a dataset with an objective comparison of the items concerned. Our method is based on the definition of four factors within the theory of belief functions. We compare our method to the Fagin distance on a dataset from a real experiment, where users have to assess the quality of some audio recordings. Then, we propose to fuse both the Fagin distance and our expertise measure.
The identification of experts on crowdsourcing platforms has been the subject of several recent studies. Two different types of approach have been used: those where no prior knowledge is available, and those using questions whose correct answers are known in advance. These questions with their known values are called ``golden data'' (such data may also be called ``golden records'', ``gold data'', ``gold standard'', or ``learning data'' depending on the usage). @cite_0 worked under the ``no prior knowledge'' hypothesis and managed to calculate a degree of accuracy and precision, assuming that the majority is always right. They defined this degree using the distance of @cite_7 between a worker's response and the average answers of all the other workers.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2162815002", "2515572806", "2003497265", "2276146857" ], "abstract": [ "Crowdsourcing has recently become popular among machine learning researchers and social scientists as an effective way to collect large-scale experimental data from distributed workers. To extract useful information from the cheap but potentially unreliable answers to tasks, a key problem is to identify reliable workers as well as unambiguous tasks. Although for objective tasks that have one correct answer per task, previous works can estimate worker reliability and task clarity based on the single gold standard assumption, for tasks that are subjective and accept multiple reasonable answers that workers may be grouped into, a phenomenon called schools of thought, existing models cannot be trivially applied. In this work, we present a statistical model to estimate worker reliability and task clarity without resorting to the single gold standard assumption. This is instantiated by explicitly characterizing the grouping behavior to form schools of thought with a rank-1 factorization of a worker-task groupsize matrix. Instead of performing an intermediate inference step, which can be expensive and unstable, we present an algorithm to analytically compute the sizes of different groups. We perform extensive empirical studies on real data collected from Amazon Mechanical Turk. Our method discovers the schools of thought, shows reasonable estimation of worker reliability and task clarity, and is robust to hyperparameter changes. Furthermore, our estimated worker reliability can be used to improve the gold standard prediction for objective tasks.", "Crowdsourcing platforms enable to propose simple human intelligence tasks to a large number of participants who realise these tasks. 
The workers often receive a small amount of money or the platforms include some other incentive mechanisms, for example they can increase the workers reputation score, if they complete the tasks correctly. We address the problem of identifying experts among participants, that is, workers, who tend to answer the questions correctly. Knowing who are the reliable workers could improve the quality of knowledge one can extract from responses. As opposed to other works in the literature, we assume that participants can give partial or incomplete responses, in case they are not sure that their answers are correct. We model such partial or incomplete responses with the help of belief functions, and we derive a measure that characterizes the expertise level of each participant. This measure is based on precise and exactitude degrees that represent two parts of the expertise level. The precision degree reflects the reliability level of the participants and the exactitude degree reflects the knowledge level of the participants. We also analyze our model through simulation and demonstrate that our richer model can lead to more reliable identification of experts.", "The creation of golden standard datasets is a costly business. Optimally more than one judgment per document is obtained to ensure a high quality on annotations. In this context, we explore how much annotations from experts differ from each other, how different sets of annotations influence the ranking of systems and if these annotations can be obtained with a crowdsourcing approach. This study is applied to annotations of images with multiple concepts. A subset of the images employed in the latest ImageCLEF Photo Annotation competition was manually annotated by expert annotators and non-experts with Mechanical Turk. The inter-annotator agreement is computed at an image-based and concept-based level using majority vote, accuracy and kappa statistics. 
Further, the Kendall τ and Kolmogorov-Smirnov correlation test is used to compare the ranking of systems regarding different ground-truths and different evaluation measures in a benchmark scenario. Results show that while the agreement between experts and non-experts varies depending on the measure used, its influence on the ranked lists of the systems is rather small. To sum up, the majority vote applied to generate one annotation set out of several opinions, is able to filter noisy judgments of non-experts to some extent. The resulting annotation set is of comparable quality to the annotations of experts.", "Crowdsourcing has attracted significant attention from the database community in recent years and several crowdsourced databases have been proposed to incorporate human power into traditional database systems. One big issue in crowdsourcing is to achieve high quality because workers may return incorrect answers. A typical solution to address this problem is to assign each question to multiple workers and combine workers’ answers to generate the final result. One big challenge arising in this strategy is to infer worker’s quality. Existing methods usually assume each worker has a fixed quality and compute the quality using qualification tests or historical performance. However these methods cannot accurately estimate a worker’s quality. To address this problem, we propose a worker model and devise an incremental inference strategy to accurately compute the workers’ quality. We also propose a question model and develop two efficient strategies to combine the worker’s model to compute the question’s result. We implement our method and compare with existing inference approaches on real crowdsourcing platforms using real-world datasets, and the experiments indicate that our method achieves high accuracy and outperforms existing approaches." ] }
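The "majority is always right" scoring idea summarised above can be illustrated with a simple stand-in: rate each worker by how close their answers sit to the crowd's per-item average. This is our own simplified distance, not the one from the cited work; the normalization by the answer range is an assumption made so scores land in [0, 1].

```python
import numpy as np

def majority_expertise(answers):
    """Score each worker by closeness to the crowd average, under the
    'majority is always right' assumption described above. `answers`
    is (workers, items). A toy stand-in for the cited distance."""
    answers = np.asarray(answers, dtype=float)
    crowd_mean = answers.mean(axis=0)                 # per-item consensus
    dist = np.abs(answers - crowd_mean).mean(axis=1)  # mean deviation per worker
    span = answers.max() - answers.min()
    if span == 0:
        return np.ones(len(answers))                  # everyone agrees
    return 1.0 - dist / span

# Third worker disagrees with the majority on every item.
scores = majority_expertise([[5, 4, 5],
                             [5, 4, 4],
                             [1, 1, 2]])
print(scores.argmax(), scores.argmin())               # 1 2
```

Note the known weakness this paper sidesteps by using objective reference data: a coordinated (or uniformly biased) majority inflates its own scores, since the consensus itself is the yardstick.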
1907.10588
2966220704
Crowdsourcing platforms enable companies to propose tasks to a large crowd of users. The workers receive compensation for their work according to the seriousness of the tasks they managed to accomplish. The evaluation of the quality of responses obtained from the crowd remains one of the most important problems in this context. Several methods have been proposed to estimate the expertise level of crowd workers. We propose an innovative measure of expertise, assuming that we possess a dataset with an objective comparison of the items concerned. Our method is based on the definition of four factors within the theory of belief functions. We compare our method to the Fagin distance on a dataset from a real experiment, where users have to assess the quality of some audio recordings. Then, we propose to fuse both the Fagin distance and our expertise measure.
@cite_9 and @cite_1 also used this approach for binary classification and categorical labeling. @cite_13 generalized this technique to ordinal rankings (associating scores from 1 to 5 with the quality of an object or a service). These methods converge on calculating the ``sensitivity'' (the true positive rate) and the ``specificity'' (the true negative rate) for each label. A worker is then labeled as a spammer when his score is close to 0; a perfect expert would be assigned a score of 1. The algorithms described previously provide efficient methods to determine the quality of the workers' answers when the truth is unknown, whereas in our case the theoretically correct grades attributed to the @math reference signals are known. We therefore seek to identify the experts based on correct baseline data and to define a level of expertise proportional to the similarity between workers' answers and the answers known in advance. Thus, our work is based on ``golden data'' that are used to estimate the quality of workers in a direct way, as proposed by @cite_2 .
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_1", "@cite_2" ], "mid": [ "2122770142", "2963185791", "2126022166", "2604132367" ], "abstract": [ "Performance metrics for binary classification are designed to capture tradeoffs between four fundamental population quantities: true positives, false positives, true negatives and false negatives. Despite significant interest from theoretical and applied communities, little is known about either optimal classifiers or consistent algorithms for optimizing binary classification performance metrics beyond a few special cases. We consider a fairly large family of performance metrics given by ratios of linear combinations of the four fundamental population quantities. This family includes many well known binary classification metrics such as classification accuracy, AM measure, F-measure and the Jaccard similarity coefficient as special cases. Our analysis identifies the optimal classifiers as the sign of the thresholded conditional probability of the positive class, with a performance metric-dependent threshold. The optimal threshold can be constructed using simple plug-in estimators when the performance metric is a linear combination of the population quantities, but alternative techniques are required for the general case. We propose two algorithms for estimating the optimal classifiers, and prove their statistical consistency. Both algorithms are straightforward modifications of standard approaches to address the key challenge of optimal threshold selection, thus are simple to implement in practice. The first algorithm combines a plug-in estimate of the conditional probability of the positive class with optimal threshold selection. The second algorithm leverages recent work on calibrated asymmetric surrogate losses to construct candidate classifiers. 
We present empirical comparisons between these algorithms on benchmark datasets.", "Multiclass classification problems such as image annotation can involve a large number of classes. In this context, confusion between classes can occur, and single label classification may be misleading. We provide in the present paper a general device that, given an unlabeled dataset and a score function defined as the minimizer of some empirical and convex risk, outputs a set of class labels, instead of a single one. Interestingly, this procedure does not require that the unlabeled dataset explores the whole classes. Even more, the method is calibrated to control the expected size of the output set while minimizing the classification risk. We show the statistical optimality of the procedure and establish rates of convergence under the Tsybakov margin condition. It turns out that these rates are linear on the number of labels. We apply our methodology to convex aggregation of confidence sets based on the V-fold cross validation principle also known as the superlearning principle. We illustrate the numerical performance of the procedure on real data and demonstrate in particular that with moderate expected size, w.r.t. the number of labels, the procedure provides significant improvement of the classification risk.", "This paper studies two-class (or binary) classification of elements X in R k that allows for a reject option. Based on n independent copies of the pair of random variables (X,Y ) with X 2 R k and Y 2 0,1 , we consider classifiers f(X) that render three possible outputs: 0, 1 and R. The option R expresses doubt and is to be used for few observations that are hard to classify in an automatic way. Chow (1970) derived the optimal rule minimizing the risk P f(X) 6 Y, f(X) 6 R + dP f(X) = R . This risk function subsumes that the cost of making a wrong decision equals 1 and that of utilizing the reject option is d. 
We show that the classification problem hinges on the behavior of the regression function (x) = E(Y |X = x) near d and 1 d. (Here d 2 [0,1 2] as the other cases turn out to be trivial.) Classification rules can be categorized into plug-in estimators and empirical risk minimizers. Both types are considered here and we prove that the rates of convergence of the risk of any estimate depends on P | (X) d| + P | (X) (1 d)| and on the quality of the estimate for or an appropriate measure of the size of the class of classifiers, in case of plug-in rules and empirical risk minimizers, respectively. We extend the mathematical framework even further by dierentiating between costs associated with the two possible errors: predicting f(X) = 0 whilst Y = 1 and predicting f(X) = 1 whilst Y = 0. Such situations are common in, for instance, medical studies where misclassifying a sick patient as healthy is worse than the opposite.", "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. 
We also show that our method performs better than competing algorithms by Welinder and Perona (2010), and by Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training." ] }
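For the binary case, the sensitivity/specificity scoring sketched above reduces to a few counts once golden data are available. The sketch below uses Youden's J (sensitivity + specificity − 1) as the spammer score that is near 0 for a spammer and 1 for a perfect expert; treating J as the cited spammer score is our assumption for illustration.

```python
def worker_quality(answers, gold):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) of one worker's binary labels against known 'golden data',
    plus Youden's J = sensitivity + specificity - 1 as a spammer score:
    near 0 for a spammer, 1 for a perfect expert. Toy illustration of
    the scoring idea summarised above."""
    tp = sum(1 for a, g in zip(answers, gold) if a == 1 and g == 1)
    tn = sum(1 for a, g in zip(answers, gold) if a == 0 and g == 0)
    p = sum(gold)                 # number of golden positives
    n = len(gold) - p             # number of golden negatives
    sens = tp / p if p else 0.0
    spec = tn / n if n else 0.0
    return sens, spec, sens + spec - 1.0

gold = [1, 1, 0, 0, 1, 0]
print(worker_quality([1, 1, 0, 0, 1, 0], gold))  # perfect: (1.0, 1.0, 1.0)
print(worker_quality([1, 0, 1, 0, 1, 1], gold))  # a noisier worker
```

A random guesser scores J ≈ 0 regardless of how often it answers 1, which is exactly why raw accuracy alone cannot separate spammers from biased-but-informative workers.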
1907.10468
2962887558
We revisit the complexity of deciding, given a bimatrix game, whether it has a Nash equilibrium with certain natural properties; such decision problems were known early on to be @math -hard GZ89 . We show that @math -hardness still holds under two significant restrictions in simultaneity: the game is win-lose (that is, all utilities are @math or @math ) and symmetric . To address the former restriction, we design win-lose gadgets and a win-lose reduction; to accommodate the latter restriction, we employ and analyze the classical @math -symmetrization GHR63 in the win-lose setting. Thus, symmetric win-lose bimatrix games are as complex as general bimatrix games with respect to such decision problems. As a byproduct of our techniques, we derive hardness results for search, counting and parity problems about Nash equilibria in symmetric win-lose bimatrix games.
None of the works @cite_21 @cite_30 @cite_13 @cite_5 @cite_3 @cite_18 @cite_9 @cite_19 on the complexity of decision and counting problems about Nash equilibria in bimatrix games considered the two restrictions to win-lose bimatrix and symmetric bimatrix games in simultaneity; neither did the works @cite_0 @cite_7 @cite_27 @cite_23 on the complexity of the search problem. This work encompasses all of the decision problems, together with their counting and parity versions, in the common framework composed of the gadget games, the win-lose reduction and the win-lose @math -symmetrization. So, problem-specific reductions and techniques, such as the regular subgraphs technique from @cite_30 or the good assignments technique from @cite_13 , are not necessary.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_13" ], "mid": [ "2149624304", "2057913812", "2072113937", "2568433471" ], "abstract": [ "We investigate the complexity of finding Nash equilibria in which the strategy of each player is uniform on its support set. We show that, even for a restricted class of win-lose bimatrix games, deciding the existence of such uniform equilibria is an NP-complete problem. Our proof is graph-theoretical. Motivated by this result, we also give NP-completeness results for the problems of finding regular induced subgraphs of large size or regularity, which can be of independent interest.", "We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of [2006a] on the complexity of four-player Nash equilibria, settles a long standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: —Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. —The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: —Arrow-Debreu market equilibria are PPAD-hard to compute.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. 
We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of game, usually referred to as an atomic game, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallottino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria.
The aim is to find an exact or approximate Nash equilibrium of the game, based on these observations. It is usually assumed that the strategy profiles may be chosen in an on-line manner by the algorithm. We study a corresponding computational learning model, and the query complexity of learning equilibria for various classes of games. We give basic results for exact equilibria of bimatrix and graphical games. We then study the query complexity of approximate equilibria in bimatrix games. Finally, we study the query complexity of exact equilibria in symmetric network congestion games. For directed acyclic networks, we can learn the cost functions (and hence compute an equilibrium) while querying just a small fraction of pure-strategy profiles. For the special case of parallel links, we have the stronger result that an equilibrium can be identified while only learning a small fraction of the cost values." ] }
1907.10398
2963225856
The median of a graph @math is the set of all vertices @math of @math minimizing the sum of distances from @math to all other vertices of @math . It is known that computing the median of dense graphs in subcubic time refutes the APSP conjecture and computing the median of sparse graphs in subquadratic time refutes the HS conjecture. In this paper, we present a linear time algorithm for computing medians of median graphs, improving over the existing quadratic time algorithm. Median graphs constitute the principal class of graphs investigated in metric graph theory, due to their bijections with other discrete and geometric structures (CAT(0) cube complexes, domains of event structures, and solution sets of 2-SAT formulas). Our algorithm is based on the known majority rule characterization of medians in a median graph @math and on a fast computation of parallelism classes of edges ( @math -classes) of @math . The main technical contribution of the paper is a linear time algorithm for computing the @math -classes of a median graph @math using Lexicographic Breadth First Search (LexBFS). Namely, we show that any LexBFS ordering of the vertices of a median graph @math has the following : the fathers of any two adjacent vertices of @math are also adjacent. Using the fast computation of the @math -classes of a median graph @math , we also compute the Wiener index (total distance) of @math in linear time.
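The algorithms above are driven by Lexicographic Breadth First Search. A compact partition-refinement sketch of LexBFS (quadratic for simplicity; the paper relies on the standard linear-time implementation):

```python
def lex_bfs(adj, start):
    """LexBFS by partition refinement: pop a vertex from the first
    class, then split every remaining class into (neighbours, rest),
    keeping the neighbours in front."""
    classes = [[start], [v for v in adj if v != start]]
    order = []
    while classes:
        first = classes.pop(0)
        v = first.pop(0)
        if first:
            classes.insert(0, first)
        order.append(v)
        nbrs = set(adj[v])
        refined = []
        for c in classes:
            inside = [u for u in c if u in nbrs]
            outside = [u for u in c if u not in nbrs]
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        classes = refined
    return order

# The 4-cycle, the smallest non-trivial median graph: 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(lex_bfs(adj, 0))  # -> [0, 1, 3, 2]
```

In the output order, each vertex's father (earliest visited neighbour) can then be checked for the fellow-traveler property stated in the abstract.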
As noticed above, the @math -classes of a median graph @math correspond to coordinates of the hypercube in which @math isometrically embeds. Thus one can define @math -classes for all partial cubes. Eppstein @cite_29 performed an efficient computation of @math -classes as a main step of his @math algorithm for recognizing partial cubes. For this, he runs several Breadth First Searches (BFS) on the input graph. The @math -classes of a median graph were computed in @math time in @cite_22 , where this computation was used for a subquadratic recognition of median graphs. The fellow-traveler property (which is essential in our computation of @math -classes) is a notion coming from geometric group theory @cite_9 and is one of the principal tools used to prove the biautomaticity of a group. In a slightly stronger form, it allows one to establish the dismantlability of graphs (see, for example, @cite_49 @cite_3 and references therein for classes of graphs in which such a fellow-traveler order can be obtained by BFS or LexBFS).
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_29", "@cite_3", "@cite_49" ], "mid": [ "1994784589", "2073943982", "2092349993", "2094731111" ], "abstract": [ "A median of a family of vertices in a graph is any vertex whose distance-sum to that family is minimum. In the framework of metric spaces the problem of minimizing a distance-sum is often referred to as the Fermat problem. On the other hand, medians have been studied from a purely order-theoretic or combinatorial point of view (for instance, in statistics, or in Jordan’s work [12] on trees). The aim of this paper is to investigate the mutual relationship of the metric and the ordinal combinatorial approaches to the median problem in the class of median graphs. A connected graph is a median graph if any three vertices admit a unique median (see Avann [l]). Note that trees and the covering graphs of distributive lattices are median graphs. Very little is known about medians in arbitrary graphs (cf. Slater [20]); so far, only trees (Zelinka [22], and many others) and the covering graphs of distributive lattices (Barbut [4]) have been considered. In both cases we get that (i) the medians of any family form an interval (a path in a tree, an order-theoretic interval in a distributive lattice), and (ii) medians of odd numbered families are unique (see Slater [19] for trees, and Barbut [4] for distributive lattices). These results point to the fact that (i) and (ii) must be true for any median graph. After recalling some basic definitions and facts concerning median graphs and median semilattices (for further information, see Bandelt and Hedlikova [3]), we establish (i) and (ii) for arbitrary median graphs. Our results are based on theorems of Avann, Sholander, and Barbut. In trees medians have nice local properties (cf. [7]). Indeed, median sets are related to mass centers (Zelinka [22]) and security centers (Slater [18]). In Section 3 this is extended to median graphs. 
The study of medians applies to social choice theory (see Barbut [5], and Barthelemy and Monjardet [8]). The median procedure is strongly related to the simple majority rule: the median of a family (A_1, . . . , A_{2k+1}) of subsets of a set X may be written as a union of intersections of the A_i (Barbut's formula).", "Motivated by a dynamic location problem for graphs, Chung, Graham and Saks introduced a graph parameter called windex. Graphs of windex 2 turned out to be, in graph-theoretic language, retracts of hypercubes. These graphs are also known as median graphs and can be characterized as partial binary Hamming graphs satisfying a convexity condition. In this paper an O(n^{3/2} log n) algorithm is presented to recognize these graphs. As a by-product we are also able to isometrically embed median graphs in hypercubes in O(m log n) time.", "A decomposition of a graph G = (V, E) is a partition of the vertex set into subsets (called blocks). The diameter of a decomposition is the least d such that any two vertices belonging to the same connected component of a block are at distance < d. In this paper we prove (nearly best possible) statements of the form: any n-vertex graph has a decomposition into a small number of blocks each having small diameter. Such decompositions provide a tool for efficiently decentralizing distributed computations. In [AGLP] it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n) for a certain function s(n). Using a technique of Awerbuch [A] and Awerbuch and Peleg [AP], we improve this result by showing that every graph has a decomposition of diameter O(log n) into O(log n) blocks. In addition, we give a randomized distributed algorithm that produces such a decomposition and runs in time O(log^2 n). The construction can be parametrized to provide decompositions that trade off between the number of blocks and the diameter.
We show that this trade-off is nearly best possible for two families of graphs: the first consists of skeletons of certain triangulations of a simplex, and the second consists of grid graphs with added diagonals. The proofs in both cases rely on basic results in combinatorial topology: Sperner's lemma for the first class and Tucker's lemma for the second.", "In this note, we characterize the graphs (1-skeletons) of some piecewise Euclidean simplicial and cubical complexes having nonpositive curvature in the sense of Gromov's CAT(0) inequality. Each such cell complex K is simply connected and obeys a certain flag condition. It turns out that if, in addition, all maximal cells are either regular Euclidean cubes or right Euclidean triangles glued in a special way, then the underlying graph G(K) is either a median graph or a hereditary modular graph without two forbidden induced subgraphs. We also characterize the simplicial complexes arising from bridged graphs, a class of graphs whose metric enjoys one of the basic properties of CAT(0) spaces. Additionally, we show that the graphs of all these complexes and some more general classes of graphs have geodesic combings and bicombings verifying the 1- or 2-fellow traveler property." ] }
1907.10406
2963805462
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay strong attention to the security problem of many AI applications, and further propose corresponding defensive strategies in the future.
In this work, we aim to identify the internal DNN architecture @cite_6 @cite_7 . In real-world applications, several DNN architectures are widely used. For example, AlexNet @cite_0 is popular for its success in the 2012 ImageNet competition @cite_35 . GoogLeNet @cite_9 significantly increased the depth of DNNs. ResNet @cite_14 beats human experts in image recognition. VGGNet @cite_23 and RCNN @cite_31 are widely used for their breakthroughs in object detection. There are also networks designed specifically for mobile applications @cite_18 @cite_30 . Currently, most engineers design their AI products based on these existing architectures. Therefore, by identifying the existing popular architectures, we expect to be able to break a large portion of such AI products.
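Because products overwhelmingly reuse a handful of published architectures, identification can be cast as a nearest-match lookup of features recovered from the side channel, such as an estimated parameter count. A toy sketch; the counts below are approximate public figures, and the matching rule is our own illustration, not the paper's method:

```python
# Approximate parameter counts, in millions, of well-known networks.
KNOWN_PARAMS_M = {"AlexNet": 61, "VGG-16": 138, "GoogLeNet": 7, "ResNet-50": 26}

def identify_architecture(estimated_params_m):
    """Return the known architecture whose parameter count is closest
    to the estimate recovered from the side channel."""
    return min(KNOWN_PARAMS_M,
               key=lambda name: abs(KNOWN_PARAMS_M[name] - estimated_params_m))

print(identify_architecture(140))  # -> VGG-16
```

A real attack would match on richer features (layer counts, per-layer timing signatures) rather than a single scalar, but the lookup structure is the same.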
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_14", "@cite_7", "@cite_9", "@cite_6", "@cite_0", "@cite_23", "@cite_31" ], "mid": [ "2279098554", "2964081807", "2792156255", "2605370493" ], "abstract": [ "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". 
We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset.", "Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age, gender); whereas, unlabelled data are available for other attributes (such as attraction, hairline). 
To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods.", "Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. 
The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of eighteen DA tasks on the popular Office-31 and Office-Caltech datasets." ] }
1907.10406
2963805462
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay strong attention to the security problem of many AI applications, and further propose corresponding defensive strategies in the future.
Side-channel attack (SCA) is a very powerful tool for attacking encrypted systems. Traditionally, the encryption process is considered a perfect black box. However, in real-world applications, information can leak @cite_38 . Initially, SCA focused on differential power analysis @cite_4 and timing attacks @cite_37 . Later, more kinds of side-channel information and attacking methods were developed. @cite_33 propose a cache-based SCA to extract private encryption keys. @cite_17 successfully extract full 4096-bit RSA keys using the computer's audio emanations during the decryption process. By cloning USIM cards, @cite_15 can recover the encryption key and other information contained in 3G/4G USIM cards. Defenses against SCA are also well studied @cite_8 .
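The power-analysis attacks cited above work by correlating measured traces against a hypothesis-dependent leakage model. A toy correlation power analysis (CPA) sketch on synthetic traces; the identity "S-box" and the Hamming-weight leakage model are simplifying assumptions (a real attack targets, e.g., the AES S-box output):

```python
import numpy as np

SBOX = list(range(256))                        # identity placeholder S-box
HW = [bin(x).count("1") for x in range(256)]   # Hamming-weight leakage model

def cpa_recover_byte(plaintexts, traces):
    """Rank key-byte guesses by the (signed) correlation between the
    modelled leakage HW(SBOX[p ^ k]) and the measured traces."""
    best_key, best_corr = None, -2.0
    for k in range(256):
        model = np.array([HW[SBOX[p ^ k]] for p in plaintexts], float)
        corr = np.corrcoef(model, traces)[0, 1]
        if corr > best_corr:
            best_key, best_corr = k, corr
    return best_key

# Synthetic traces that leak HW(SBOX[p ^ 0x3C]) plus Gaussian noise.
rng = np.random.default_rng(0)
pts = rng.integers(0, 256, size=500)
leak = np.array([HW[SBOX[p ^ 0x3C]] for p in pts], float)
traces = leak + rng.normal(0.0, 0.5, size=len(pts))
print(hex(cpa_recover_byte(pts, traces)))
```

One design note: with the identity placeholder, the signed correlation matters. Ranking by |corr| would let the complementary key 0xC3 tie with the true key, since its model is perfectly anti-correlated; a nonlinear S-box removes such symmetries.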
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_4", "@cite_33", "@cite_8", "@cite_15", "@cite_17" ], "mid": [ "2900861686", "49522230", "2014227559", "2172060328" ], "abstract": [ "This paper demonstrates the improved power and electromagnetic (EM) side-channel attack (SCA) resistance of 128-bit Advanced Encryption Standard (AES) engines in 130-nm CMOS using random fast voltage dithering (RFVD) enabled by integrated voltage regulator (IVR) with the bond-wire inductors and an on-chip all-digital clock modulation (ADCM) circuit. The RFVD scheme transforms the current signatures with random variations in AES input supply while adding random shifts in the clock edges in the presence of global and local supply noises. The measured power signatures at the supply node of the AES engines show up to 37 @math reduction in peak for higher order test vector leakage assessment (TVLA) metric and up to 692 @math increase in minimum traces required to disclose (MTD) the secret encryption key with correlation power analysis (CPA). Similarly, SCA on the measured EM signatures from the chip demonstrates a reduction of up to 11.3 @math in TVLA peak and up to 37 @math increase in correlation EM analysis (CEMA) MTD.", "Side-channel attacks are usually performed by employing the "divide-and-conquer" approach, meaning that leaking information is collected in a divide step, and later on exploited in the conquer step. The idea is to extract as much information as possible during the divide step, and to exploit the gathered information as efficiently as possible within the conquer step. Focusing on both of these steps, we discuss potential enhancements of Bernstein's cache-timing attack against the Advanced Encryption Standard (AES). Concerning the divide part, we analyze the impact of attacking different key-chunk sizes, aiming at the extraction of more information from the overall encryption time. 
Furthermore, we analyze the most recent improvement of time-driven cache attacks, presented by Aly and ElGayyar, according to its applicability to ARM Cortex-A platforms. For the conquer part, we employ the optimal key-enumeration algorithm as proposed by Veyrat-Charvillon et al. to significantly reduce the complexity of the exhaustive key-search phase compared to the currently employed threshold-based approach. This in turn leads to more practical attacks. Additionally, we provide extensive experimental results of the proposed enhancements on two Android-based smartphones, namely a Google Nexus S and a Samsung Galaxy SII.", "Side-channel attacks (SCAs), such as differential power analysis or differential electromagnetic analysis, pose a serious threat to the security of embedded systems. In the literature, few articles address the problem of securing general purpose processors (GPPs) with resourceful countermeasures. However, in many low-cost applications, where security is not critical, cryptographic algorithms are typically implemented in software. Since it has been proved that GPPs are vulnerable to SCAs, it is desirable to develop efficient mechanisms to ensure a certain level of security. In this paper, we extend side-channel countermeasures to the register transfer level description. The challenge is to create a new class of processor that executes embedded software applications, which are intrinsically protected against SCAs. For that purpose, we first investigate how to integrate into the datapath two countermeasures based on masking and hiding approaches. Through an FPGA-based processor, we then evaluate the overhead and the effectiveness of the proposed solutions against time-domain first-order attacks. 
We finally show that a suitable combination of countermeasures significantly increases the side-channel resistance in a cost-effective way.", "Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as few as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process." ] }
1907.10406
2963805462
Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. The white-box attacks assume full knowledge of the models while the black-box ones assume none. In general, revealing more internal information can enable much more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable, i.e., they are black-box. Therefore, in this work, we propose a side-channel information based technique to reveal the internal information of black-box models. Specifically, we have made the following contributions: (1) we are the first to use side-channel information to reveal internal network architecture in embedded devices; (2) we are the first to construct models for internal parameter estimation; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method can achieve 96.50% accuracy on average. Such results suggest that we should pay strong attention to the security problem of many AI applications, and further propose corresponding defensive strategies in the future.
Naturally, this powerful attacking method can be applied to reveal DNN architectures or related information. @cite_51 use timing side channels to infer the depth of a network; @cite_2 show that side-channel attacks can roughly recover information about activation functions, the number of network layers, the number of neurons, the number of output categories, and the weights of a neural network. Another closely related work obtains the input image by analyzing the power trace of the first convolution layer @cite_24 . To the best of our knowledge, this work is the first attempt to reveal the internal DNN architectures of embedded devices using power SCA.
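The timing channel of @cite_51 rests on a simple observation: with roughly constant per-layer cost, inference latency grows linearly with depth. A toy model with hypothetical numbers (in practice the per-layer cost and overhead would come from profiling the target device):

```python
def infer_depth(latency_ms, per_layer_ms, overhead_ms):
    """Invert a linear timing model, latency = overhead + depth * per_layer,
    to estimate the number of layers from one measured latency."""
    return round((latency_ms - overhead_ms) / per_layer_ms)

# Hypothetical profile: ~0.8 ms per layer plus 2 ms fixed overhead.
print(infer_depth(15.0, 0.8, 2.0))  # -> 16
```

Real networks have heterogeneous layers, so such estimates are coarse; that coarseness is exactly why the richer power side channel studied in this work can recover more detail.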
{ "cite_N": [ "@cite_24", "@cite_51", "@cite_2" ], "mid": [ "2962939807", "2775079417", "2020676607", "2531448500" ], "abstract": [ "Although deep neural networks (DNNs) are a revolutionary force opening up the AI era, the notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, make the on-chip training of DNNs quite promising. Therefore there is a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first one is how to implement the back propagation when neuronal activations are discrete. The second one is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory and computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR . More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed as gated XNOR networks (GXNOR-Nets) since only the event of non-zero weight and non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. 
This promises the event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms.", "Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.", "Deep-learning neural networks such as convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. 
Spiking neural networks (SNN)-based architectures have shown great potential as a solution for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNN, then trains the tailored CNN in the same way as one would with CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show similar object recognition accuracy as the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the ones being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.", "The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. 
These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks." ] }
1907.10343
2963426884
Conventional object detection methods essentially assume that the training and testing data are collected from a restricted target domain, with expensive labeling cost. To alleviate the problems of domain dependency and cumbersome labeling, this paper proposes to detect objects in an unrestricted environment by leveraging domain knowledge trained on an auxiliary source domain with sufficient labels. Specifically, we propose a multi-adversarial Faster-RCNN (MAF) framework for unrestricted object detection, which inherently addresses domain disparity minimization for domain adaptation in feature representation. The merits of this paper are three-fold: 1) observing that object detectors often become domain-incompatible when domain disparity arises from differences in image distribution, we propose a hierarchical domain feature alignment module, in which multiple adversarial domain-classifier submodules are designed for layer-wise domain feature confusion; 2) an information-invariant scale reduction module (SRM) for hierarchical feature-map resizing is proposed to promote the training efficiency of adversarial domain adaptation; 3) to improve domain adaptability, the aggregated proposal features together with detection results are fed into a proposed weighted gradient reversal layer (WGRL) that emphasizes hard, confused domain samples. We evaluate our MAF on unrestricted tasks, including Cityscapes, KITTI, Sim10k, etc., and the experiments show state-of-the-art performance over existing detectors.
Object detection is a basic task in computer vision and has been widely studied for many years. Earlier work @cite_34 @cite_10 @cite_33 on object detection relied on sliding windows and boosted classifiers. Benefiting from the success of CNN models @cite_8 @cite_3 @cite_0 , a number of CNN-based object detection methods @cite_20 @cite_28 @cite_15 @cite_31 @cite_2 @cite_16 have emerged. Region-of-interest (ROI) based two-stage object detectors have attracted a lot of attention in recent years. R-CNN @cite_24 is the first two-stage detector, which classifies ROIs to find objects. Girshick et al. @cite_29 further proposed Fast-RCNN with an ROI pooling layer that shares convolution features, improving both detection speed and accuracy. After that, Faster-RCNN @cite_41 was introduced by Ren et al., integrating Fast-RCNN and a Region Proposal Network (RPN) in a unified structure, further improving detection speed and accuracy. In this paper, taking Faster-RCNN as the backbone, we adopt the idea of domain-transfer adaptation to explore the unrestricted object detection task across different domains.
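To make the ROI pooling idea behind Fast-RCNN concrete, here is a minimal numpy sketch. It is a toy single-channel version with max pooling only; the real layer operates on multi-channel convolutional feature maps inside the network, and the grid size and ROI below are invented for illustration.

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Max-pool the feature-map region covered by `roi` into a fixed grid.

    feature_map: (H, W) array; roi: (y0, x0, y1, x1) in feature-map
    coordinates; output_size: (rows, cols) of the pooled grid.
    """
    y0, x0, y1, x1 = roi
    region = feature_map[y0:y1, x0:x1]
    rows, cols = output_size
    out = np.zeros((rows, cols))
    # Split the region into a rows x cols grid and take the max of each cell,
    # so ROIs of any size yield a fixed-length feature for the classifier head.
    r_edges = np.linspace(0, region.shape[0], rows + 1).astype(int)
    c_edges = np.linspace(0, region.shape[1], cols + 1).astype(int)
    for i in range(rows):
        for j in range(cols):
            cell = region[r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
            out[i, j] = cell.max()
    return out

fmap = np.arange(36, dtype=float).reshape(6, 6)
pooled = roi_pool(fmap, roi=(1, 1, 5, 5))
print(pooled)  # always 2x2, regardless of the ROI's size
```

The fixed-size output is what lets a two-stage detector feed arbitrarily shaped proposals into fully connected classification layers.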
{ "cite_N": [ "@cite_31", "@cite_33", "@cite_8", "@cite_28", "@cite_41", "@cite_29", "@cite_3", "@cite_16", "@cite_0", "@cite_24", "@cite_2", "@cite_15", "@cite_34", "@cite_10", "@cite_20" ], "mid": [ "2963418361", "2339367607", "2610420510", "2589615404" ], "abstract": [ "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC.", "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework and its fast versions. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. 
We call the proposed method \"CRAFT\" (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC.", "Many modern approaches for object detection are two-staged pipelines. The first stage identifies regions of interest which are then classified in the second stage. Faster R-CNN is such an approach for object detection which combines both stages into a single pipeline. In this paper we apply Faster R-CNN to the task of company logo detection. Motivated by its weak performance on small object instances, we examine in detail both the proposal and the classification stage with respect to a wide range of object sizes. We investigate the influence of feature map resolution on the performance of those stages. Based on theoretical considerations, we introduce an improved scheme for generating anchor proposals and propose a modification to Faster R-CNN which leverages higher-resolution feature maps for small objects. We evaluate our approach on the FlickrLogos dataset improving the RPN performance from 0.52 to 0.71 (MABO) and the detection performance from 0.52 to @math (mAP).", "Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. 
However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark." ] }
1901.10760
2919401041
This paper presents a novel clustering concept based on jointly learned nonlinear transforms (NTs) with priors on the information loss and the discrimination. We introduce a clustering principle based on the evaluation of a parametric min-max measure for the discriminative prior. The decomposition of the prior measure allows us to break the assignment down into two steps. In the first step, we apply NTs to a data point in order to produce candidate NT representations. In the second step, we perform the actual assignment by evaluating the parametric measure over the candidate NT representations. Numerical experiments on an image clustering task validate the potential of the proposed approach. The evaluation shows advantages in comparison to state-of-the-art clustering methods.
Factor analysis @cite_7 and matrix factorization @cite_8 rely on decomposition into hidden features, with or without constraints. A special case with only a sparsity constraint on the hidden representation, interpreted as a "hard" assignment, is the basic k-means algorithm @cite_13 . When discrimination constraints are present, they act as regularization; they were mainly defined using labels in the discriminative dictionary learning methods @cite_36 , @cite_21 and @cite_26 . To capture the nonlinear structure of data with outliers and noise, kernel k-means algorithms @cite_22 and @cite_5 have been proposed. Many subspace clustering methods have also been proposed @cite_1 , @cite_12 , @cite_6 , @cite_41 , @cite_23 and @cite_34 . They commonly consist of (i) subspace learning via matrix factorization and (ii) grouping of the data into clusters in the learned subspace. Some authors @cite_37 even include a graph regularization term in the subspace clustering.
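The "hard assignment" view of k-means can be made concrete with a minimal numpy sketch of Lloyd's algorithm: each point's code is constrained to be one-hot, which is exactly the sparsity constraint mentioned above. The deterministic initialization and toy data are for illustration only.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means: the 'hard assignment' special case of
    matrix factorization with a one-hot sparsity constraint on codes."""
    # Deterministic init for the sketch: pick k evenly spaced data points.
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center (hard code).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs should each receive a single, distinct label.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, centers = kmeans(X, k=2)
print(labels)
```

Kernel k-means replaces the Euclidean distances above with distances computed in a kernel-induced feature space, which is what lets it capture nonlinear cluster structure.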
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_36", "@cite_41", "@cite_21", "@cite_1", "@cite_6", "@cite_23", "@cite_5", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "603167732", "2337671281", "2947175317", "2056935845" ], "abstract": [ "Subspace learning techniques have been extensively used for dimensionality reduction (DR) in many pattern classification problem domains. Recently, methods like Subclass Discriminant Analysis (SDA) and Clustering-based Discriminant Analysis (CDA), which use subclass information for the discrimination between the data classes, have attracted much attention. In parallel, important work has been accomplished on Graph Embedding (GE), which is a general framework unifying several subspace learning techniques. In this paper, GE has been extended in order to integrate subclass discriminant information resulting to the novel Subclass Graph Embedding (SGE) framework, which is the main contribution of our work. It is proven that SGE encapsulates a diversity of both supervised and unsupervised unimodal methods like Locality Preserving Projections (LPP), Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The theoretical link of SDA and CDA methods with SGE is also established. Along these lines, it is shown that SGE comprises a generalization of the typical GE framework including subclass DR methods. Moreover, it allows for an easy utilization of kernels for confronting non-linearly separable data. Employing SGE, in this paper a novel DR algorithm, which uses subclass discriminant information, called Subclass Marginal Fisher Analysis (SMFA) has been proposed. Through a series of experiments on various real-world datasets, it is shown that SMFA outperforms in most of the cases the state-of-the-art demonstrating the efficacy and power of SGE as a platform to develop new methods. 
HighlightsGraph Embedding is extended in order to integrate subclass informationThe novel Subclass Graph Embedding framework is proposed.The kernelized version of the new framework is presentedSubclass Graph Embedding encapsulates various subspace learning methods.A novel Subclass Marginal Fisher Analysis method is proposed.", "Abstract This paper mainly focuses on dimensional reduction of fused dataset of holistic and geometrical face features vectors by solving singularity problem of linear discriminant analysis and maximizing the Fisher ratio in nonlinear subspace region with the preservation of local discriminative features. The combinational feature vector space is projected into low dimensional subspace using proposed Kernel Locality Preserving Symmetrical Weighted Fisher Discriminant Analysis (KLSWFDA) method. Matching score level fusion technique has been applied on projected subspace and combinational entire Gabor subspace is framed. Euclidean distance metric (L2) and support vector machine (SVM) classifier has been implemented to recognize and classify the expressions. Performance of proposed approach is evaluated and compared with state of art approaches. Experimental results on JAFFE, YALE and FD expression database demonstrate the effectiveness of the proposed approach.", "Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a fair'' subspace. We apply this method to densest subgraph and @math -means. For densest subgraph, our approach based on fair projections allows to recover both theoretically and empirically an almost optimal, fair, dense subgraph hidden in the input data. 
We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of @math is NP-hard and we show a polynomial time algorithm with a matching approximation bound. We further apply our method to @math -means. In a previous paper, [NIPS 2017] showed that problems such as @math -means can be approximated up to a constant factor while ensuring that none of two protected class (e.g., gender, ethnicity) is disparately impacted. We show that fair projections generalize the concept of fairlet introduced by to any number of protected attributes and improve empirically the quality of the resulting clustering. We also present the first constant-factor approximation for an arbitrary number of protected attributes thus settling an open problem recently addressed in several works.", "To uncover an appropriate latent subspace for data representation, in this paper we propose a novel Robust Structured Subspace Learning (RSSL) algorithm by integrating image understanding and feature learning into a joint learning framework. The learned subspace is adopted as an intermediate space to reduce the semantic gap between the low-level visual features and the high-level semantics. To guarantee the subspace to be compact and discriminative, the intrinsic geometric structure of data, and the local and global structural consistencies over labels are exploited simultaneously in the proposed algorithm. Besides, we adopt the @math -norm for the formulations of loss function and regularization respectively to make our algorithm robust to the outliers and noise. An efficient algorithm is designed to solve the proposed optimization problem. It is noted that the proposed framework is a general one which can leverage several well-known algorithms as special cases and elucidate their intrinsic relationships. 
To validate the effectiveness of the proposed method, extensive experiments are conducted on diversity datasets for different image understanding tasks, i.e., image tagging, clustering, and classification, and the more encouraging results are achieved compared with some state-of-the-art approaches." ] }
1901.10710
2950181282
This paper proposes a novel training scheme for fast matching models in Search Ads, motivated by real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures and hence greatly limits model accuracy. The second arises from the heavy dependency on human-provided labels, which are expensive and time-consuming to collect, while how to leverage unlabeled search log data is rarely studied. The proposed training framework targets both issues by treating the stronger but undeployable models as annotators, and learning a deployable model from both human-provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learned on the scored unlabeled data, and then fine-tuned on the scored labeled data, leveraging both labels and scores by minimizing the proposed label-aware weighted loss. According to our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and substantially improves data efficiency by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled way to leverage search log data in the training phase, which could effectively alleviate the dependency on human-provided labels.
As a subproblem of information retrieval, web search has long been an active research area, and has entailed a large body of literature. Methods in this area can be roughly grouped into two categories, namely traditional approaches and deep learning based models. Representative methods falling into the former category include LSA @cite_4 , pLSA @cite_14 , topic models such as LDA @cite_5 , Bi-Lingual Topic Models @cite_12 , etc. Methods belonging to the latter category are designed to extract semantic information via deep learning architectures, including auto-encoders @cite_1 , Siamese networks @cite_17 @cite_3 @cite_26 @cite_24 @cite_28 @cite_27 @cite_7 @cite_18 , interaction-based networks @cite_28 @cite_11 @cite_23 @cite_21 @cite_19 , as well as lexical and semantic matching networks @cite_20 @cite_6 , etc. Readers may refer to @cite_8 for a tutorial on this topic.
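As a concrete illustration of the traditional approaches, LSA @cite_4 can be sketched in a few lines of numpy: a truncated SVD of the term-document matrix projects documents into a low-dimensional latent space where topical similarity is measured by cosine. The toy matrix and vocabulary below are invented for illustration.

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
# Docs 0-1 share "car/auto" vocabulary; docs 2-3 share "stock/market".
A = np.array([
    [2, 1, 0, 0],   # car
    [1, 2, 0, 0],   # auto
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# LSA: a rank-k truncated SVD gives each document a k-dim latent vector.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_vecs[0], doc_vecs[1]))  # high: same latent topic
print(cos(doc_vecs[0], doc_vecs[2]))  # near zero: different topics
```

Documents that never share a literal term can still end up close in the latent space, which is the property that made LSA attractive for web search.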
{ "cite_N": [ "@cite_3", "@cite_5", "@cite_20", "@cite_18", "@cite_4", "@cite_8", "@cite_21", "@cite_23", "@cite_17", "@cite_26", "@cite_7", "@cite_28", "@cite_6", "@cite_19", "@cite_27", "@cite_12", "@cite_14", "@cite_1", "@cite_24", "@cite_11" ], "mid": [ "2233653089", "2109154616", "2042980227", "1493108551" ], "abstract": [ "We describe a legal question answering system which combines legal information retrieval and textual entailment. We have evaluated our system using the data from the first competition on legal information extraction entailment (COLIEE) 2014. The competition focuses on two aspects of legal information processing related to answering yes no questions from Japanese legal bar exams. The shared task consists of two phases: legal ad hoc information retrieval and textual entailment. The first phase requires the identification of Japan civil law articles relevant to a legal bar exam query. We have implemented two unsupervised baseline models (tf-idf and Latent Dirichlet Allocation (LDA)-based Information Retrieval (IR)), and a supervised model, Ranking SVM, for the task. The features of the model are a set of words, and scores of an article based on the corresponding baseline models. The results show that the Ranking SVM model nearly doubles the Mean Average Precision compared with both baseline models. The second phase is to answer “Yes” or “No” to previously unseen queries, by comparing the meanings of queries with relevant articles. The features used for phase two are syntactic semantic similarities and identification of negation antonym relations. The results show that our method, combined with rule-based model and the unsupervised model, outperforms the SVM-based supervised model.", "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. 
Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.", "Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. 
We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency.", "This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. 
For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection." ] }
1901.10710
2950181282
This paper proposes a novel training scheme for fast matching models in Search Ads, motivated by real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures and hence greatly limits model accuracy. The second arises from the heavy dependency on human-provided labels, which are expensive and time-consuming to collect, while how to leverage unlabeled search log data is rarely studied. The proposed training framework targets both issues by treating the stronger but undeployable models as annotators, and learning a deployable model from both human-provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learned on the scored unlabeled data, and then fine-tuned on the scored labeled data, leveraging both labels and scores by minimizing the proposed label-aware weighted loss. According to our experiments, training with the proposed framework outperforms the baseline that directly learns from relevance labels by a large margin, and substantially improves data efficiency by dispensing with 80% of the labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled way to leverage search log data in the training phase, which could effectively alleviate the dependency on human-provided labels.
In particular, the authors of @cite_3 propose CDSSM by extending DSSM @cite_17 . Their CDSSM employs a convolutional layer as a sliding-window-based local feature extractor, and uses dimension-wise max-pooling to combine all local features into a global semantic representation. Both DSSM and CDSSM are Siamese networks. By contrast, Deep Crossing @cite_9 handles each input feature separately and introduces early crossings among all features immediately after the embedding layer. Albeit achieving superior performance, the input interactions of Deep Crossing prohibit it from being deployed as a fast sponsored-search algorithm.
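The sliding-window extraction plus dimension-wise max-pooling used by CDSSM can be sketched as follows. This is a toy numpy stand-in, not the actual CDSSM: an untrained window mean replaces the learned convolution filters, and the embedding sizes are invented for illustration.

```python
import numpy as np

def cdssm_style_pool(word_vecs, window=3):
    """Slide a window over word vectors, extract one local feature per
    position (here: the window mean, standing in for a learned conv
    filter), then take a dimension-wise max over all positions."""
    n, d = word_vecs.shape
    locals_ = np.stack([word_vecs[i:i + window].mean(axis=0)
                        for i in range(n - window + 1)])
    return locals_.max(axis=0)   # global semantic vector, shape (d,)

rng = np.random.default_rng(0)
query = rng.normal(size=(5, 4))    # 5 words, 4-dim toy embeddings
doc = rng.normal(size=(8, 4))      # 8 words, same embedding dimension

# Siamese use: encode both sides with the same pooling, then compare.
q, dvec = cdssm_style_pool(query), cdssm_style_pool(doc)
sim = q @ dvec / (np.linalg.norm(q) * np.linalg.norm(dvec))
print(sim)  # cosine relevance score in [-1, 1]
```

Because each side is encoded independently into a fixed-length vector, document vectors can be precomputed offline, which is what makes Siamese architectures fast enough for serving, unlike interaction-based models such as Deep Crossing.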
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_17" ], "mid": [ "2798791651", "2963308316", "2407905223", "2787941778" ], "abstract": [ "Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.", "In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. 
We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results showed that DFSMN can consistently outperform BLSTM with dramatic gains, especially when trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4% by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5% absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20% relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications.", "The recent surge of intelligent personal assistants motivates spoken language understanding of dialogue systems. However, the domain constraint along with the inflexible intent schema remains a big issue. This paper focuses on the task of intent expansion, which helps remove the domain limit and make an intent schema flexible. A convolutional deep structured semantic model (CDSSM) is applied to jointly learn the representations for human intents and associated utterances. Then it can flexibly generate new intent embeddings without the need of training samples and model-retraining, which bridges the semantic relation between seen and unseen intents and further yields more robust results. Experiments show that CDSSM is capable of performing zero-shot learning effectively, e.g. generating embeddings of previously unseen intents, and therefore expanding to new intents without re-training, and outperforms other semantic embeddings. 
The discussion and analysis of experiments provide a future direction for reducing the human effort of annotating data and removing the domain constraint in spoken dialogue systems.", "Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similarity-learning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. While the inherited architecture from SiamFC allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013/50/100 benchmarks." ] }
1901.10710
2950181282
This paper proposes a novel training scheme for fast matching models in Search Ads, motivated by real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures and hence greatly limits model accuracy. The second arises from the heavy dependency on human-provided labels, which are expensive and time-consuming to collect, while how to leverage unlabeled search log data is rarely studied. The proposed training framework mitigates both issues by treating the stronger but undeployable models as annotators, and learning a deployable model from both human-provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learned on the scored unlabeled data, and then fine-tuned on the scored labeled data, leveraging both labels and scores via the proposed label-aware weighted loss. According to our experiments, training with the proposed framework outperforms the baseline that learns directly from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner of leveraging search log data in the training phase, which could effectively alleviate the dependency on human-provided labels.
Attention mechanisms have gained great popularity recently @cite_2 @cite_0 , as they allow modeling of dependencies without regard to distance in the sequence. In particular, self-attention refers to an attention mechanism that relates different positions in a single sequence when extracting a sequence representation, and has found applications in a wide spectrum of tasks including reading comprehension, abstractive summarization, and sentence representation. As milestone work, the authors of @cite_10 propose the Transformer, which is built solely on attention mechanisms and dispenses entirely with recurrence and convolutions. The self-attention structure used in our ACDSSM is adapted from @cite_13 , which represents a sentence embedding as a 2-D matrix, with each row of the matrix attending to a different part of the tokens within the sentence. In ACDSSM, all the component ad fields are treated as tokens in a sequence, and the self-attention structure is adapted to attend to different parts of the ad fields.
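The 2-D-matrix self-attention adapted from @cite_13 can be sketched as follows. The weight names `Ws1`, `Ws2` and all dimensions are illustrative assumptions; each of the r rows of the attention matrix A attends to a different part of the tokens (in ACDSSM, the "tokens" would be ad fields):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, Ws1, Ws2):
    """Sketch of 2-D-matrix self-attention: H is (n, d) token states,
    A is an (r, n) attention matrix whose rows are separate 'hops' over
    tokens, and M = A @ H is the (r, d) matrix embedding. Weight names
    and shapes are illustrative, not the paper's exact notation."""
    A = softmax(Ws2 @ np.tanh(Ws1 @ H.T), axis=1)   # (r, n); rows sum to 1
    return A @ H, A

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))        # n = 6 "tokens" (e.g. ad fields), d = 8
Ws1 = rng.normal(size=(10, 8))     # hidden size d_a = 10 (assumed)
Ws2 = rng.normal(size=(4, 10))     # r = 4 attention hops (assumed)
M, A = structured_self_attention(H, Ws1, Ws2)
print(M.shape, A.shape)            # (4, 8) (4, 6)
```

Stacking the r rows of M gives a fixed-size representation regardless of how many fields the ad has.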
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_13", "@cite_2" ], "mid": [ "2724346673", "2789541106", "2951464442", "2962995178" ], "abstract": [ "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.", "Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. 
We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs.", "Recent work has shown that the encoder-decoder attention mechanisms in neural machine translation (NMT) are different from the word alignment in statistical machine translation. In this paper, we focus on analyzing encoder-decoder attention mechanisms, in the case of word sense disambiguation (WSD) in NMT models. We hypothesize that attention mechanisms pay more attention to context tokens when translating ambiguous words. We explore the attention distribution patterns when translating ambiguous nouns. Counter-intuitively, we find that attention mechanisms are likely to distribute more attention to the ambiguous noun itself rather than context tokens, in comparison to other nouns. We conclude that attention mechanism is not the main mechanism used by NMT models to incorporate contextual information for WSD. The experimental results suggest that NMT models learn to encode contextual information necessary for WSD in the encoder hidden states. For the attention mechanism in Transformer models, we reveal that the first few layers gradually learn to \"align\" source and target tokens and the last few layers learn to extract features from the related but unaligned context tokens.", "Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension. Often there exist features that are locally translation invariant and would be valuable for directing the model’s attention, but previous attentional architectures are not constructed to learn such features specifically. We introduce an attentional neural network that employs convolution on the input tokens to detect local time-invariant and long-range topical attention features in a context-dependent way. 
We apply this architecture to the problem of extreme summarization of source code snippets into short, descriptive function name-like summaries. Using those features, the model sequentially generates a summary by marginalizing over two attention mechanisms: one that predicts the next summary token based on the attention weights of the input tokens and another that is able to copy a code token as-is directly into the summary. We demonstrate our convolutional attention neural network’s performance on 10 popular Java projects showing that it achieves better performance compared to previous attentional mechanisms." ] }
1901.10710
2950181282
This paper proposes a novel training scheme for fast matching models in Search Ads, motivated by real challenges in model training. The first challenge stems from the pursuit of high throughput, which prohibits the deployment of inseparable architectures and hence greatly limits model accuracy. The second arises from the heavy dependency on human-provided labels, which are expensive and time-consuming to collect, while how to leverage unlabeled search log data is rarely studied. The proposed training framework mitigates both issues by treating the stronger but undeployable models as annotators, and learning a deployable model from both human-provided relevance labels and weakly annotated search log data. Specifically, we first construct multiple auxiliary tasks from the enumerated relevance labels, and train the annotators by jointly learning from those related tasks. The annotation models are then used to assign scores to both labeled and unlabeled training samples. The deployable model is first learned on the scored unlabeled data, and then fine-tuned on the scored labeled data, leveraging both labels and scores via the proposed label-aware weighted loss. According to our experiments, training with the proposed framework outperforms the baseline that learns directly from relevance labels by a large margin, and improves data efficiency substantially by dispensing with 80% of labeled samples. The proposed framework allows us to improve the fast matching model by learning from stronger annotators while keeping its architecture unchanged. Meanwhile, our training framework offers a principled manner of leveraging search log data in the training phase, which could effectively alleviate the dependency on human-provided labels.
Finally, the idea of augmenting training data with unsupervised or weakly supervised auxiliary information has been adopted to help many tasks across different areas. One recent study implementing this idea is reported in @cite_15 , which investigates the potential of pretraining on hashtags of social media images instead of on ImageNet. Their experiments give strong evidence of the benefits of exploiting weakly labeled data. In this paper, we attempt to leverage weakly supervised data as augmenting features, in a simple yet effective manner, to help reduce ambiguities when extracting semantic representations from scattered, short textual input. Our experiments show that such augmenting features can be conveniently leveraged to improve existing retrieval algorithms as well.
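A toy sketch of the pretrain-on-weak-labels-then-fine-tune recipe discussed above, using plain logistic regression on synthetic data. Everything here, from the data generation to the two-stage schedule, is an illustrative assumption rather than any cited paper's exact procedure:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, w, lr=0.1, epochs=200):
    """Plain logistic regression via gradient descent (illustrative)."""
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Weak-supervision recipe: pretrain on abundant noisy (weak) labels,
# then fine-tune on a small set of clean human labels.
rng = np.random.default_rng(2)
X_weak, X_gold = rng.normal(size=(500, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y_weak = (X_weak @ w_true + rng.normal(scale=1.5, size=500) > 0).astype(float)  # noisy
y_gold = (X_gold @ w_true > 0).astype(float)                                    # clean

w = train_logreg(X_weak, y_weak, np.zeros(5))    # stage 1: pretrain on weak labels
w = train_logreg(X_gold, y_gold, w, epochs=50)   # stage 2: fine-tune on gold labels
acc = np.mean((sigmoid(X_gold @ w) > 0.5) == y_gold)
print(round(acc, 2))
```

The noisy stage gives the model a useful initialization, so the small clean set only needs to correct it, which is the intuition behind dispensing with most human labels.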
{ "cite_N": [ "@cite_15" ], "mid": [ "2610935556", "2952305675", "1705056854", "2029731618" ], "abstract": [ "Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available. Hence, in this paper, we propose to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources (e.g., click data). To this aim, we use the output of an unsupervised ranking model, such as BM25, as a weak supervision signal. We further train a set of simple yet effective ranking models based on feed-forward neural networks. We study their effectiveness under various learning scenarios (point-wise and pair-wise models) and using different input representations (i.e., from encoding query-document pairs into dense/sparse vectors to using word embedding representation). We train our networks using tens of millions of training instances and evaluate it on two standard collections: a homogeneous news collection (Robust) and a heterogeneous large-scale web collection (ClueWeb). Our experiments indicate that employing proper objective functions and letting the networks learn the input representation based on weakly supervised data leads to impressive performance, with over 13% and 35% MAP improvements over the BM25 model on the Robust and the ClueWeb collections. Our findings also suggest that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models.", "Deep learning has revolutionized the performance of classification, but meanwhile demands sufficient labeled data for training. 
Given insufficient data, while many techniques have been developed to help combat overfitting, the challenge remains if one tries to train deep networks, especially in the ill-posed extremely low data regimes: only a small set of labeled data are available, and nothing -- including unlabeled data -- else. Such regimes arise from practical situations where not only data labeling but also data collection itself is expensive. We propose a deep adversarial data augmentation (DADA) technique to address the problem, in which we elaborately formulate data augmentation as a problem of training a class-conditional and supervised generative adversarial network (GAN). Specifically, a new discriminator loss is proposed to fit the goal of data augmentation, through which both real and augmented samples are enforced to contribute to and be consistent in finding the decision boundaries. Tailored training techniques are developed accordingly. To quantitatively validate its effectiveness, we first perform extensive simulations to show that DADA substantially outperforms both traditional data augmentation and a few GAN-based options. We then extend experiments to three real-world small labeled datasets where existing data augmentation and or transfer learning strategies are either less effective or infeasible. All results endorse the superior capability of DADA in enhancing the generalization ability of deep networks trained in practical extremely low data regimes. Source code is available at this https URL.", "Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? 
We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, 3) verify our findings on STL-10. We discover unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10.", "We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed." ] }
1901.10401
2914836496
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs using comparison with existing cellular networks.
The proposed network architecture is an example of a random mobile ad hoc network or a device-to-device network, in the sense that it can expand the limited coverage of infrastructure or enable high-speed, short-range communication between devices without infrastructure @cite_14 @cite_43 @cite_38 @cite_36 @cite_20 @cite_40 . The performance of these networks has been studied extensively, with some studies using stochastic geometry to model the random locations of network components @cite_4 @cite_13 @cite_2 @cite_42 . For instance, the homogeneous planar Poisson point process has been widely used for its analytical tractability @cite_29 @cite_2 . Specifically, under the Palm distribution of the Poisson point process, the distribution of the signal-to-interference-plus-noise ratio (SINR) of a typical user and the network area spectral efficiency were derived in @cite_13 @cite_15 @cite_27 .
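The Palm-view SIR distribution for a homogeneous planar PPP mentioned above can be checked numerically. The following Monte-Carlo sketch uses our own illustrative setup (nearest-point association, Rayleigh fading, interference-limited regime, finite simulation disk) to estimate the coverage probability P(SIR > theta):

```python
import numpy as np

def sir_coverage_ppp(lam, theta, alpha=4.0, R=20.0, trials=2000, seed=3):
    """Monte-Carlo sketch of the Palm-view SIR distribution in a planar PPP:
    the typical user sits at the origin, associates with the nearest point,
    and all other points interfere (Rayleigh fading, path-loss exponent
    alpha). All defaults here are illustrative choices."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * R * R)    # PPP points in the simulation disk
        if n == 0:
            continue
        r = R * np.sqrt(rng.random(n))          # radii of points uniform in the disk
        h = rng.exponential(size=n)             # Rayleigh fading powers
        p = h * r ** (-alpha)                   # received powers at the origin
        i = np.argmin(r)                        # associate with the nearest point
        sir = p[i] / (p.sum() - p[i] + 1e-12)
        hits += sir > theta
    return hits / trials

est = sir_coverage_ppp(lam=0.1, theta=1.0)
print(est)  # should land near the well-known ~0.56 for alpha=4, theta=1
```

Note the estimate does not depend on the intensity lam, reflecting the scale invariance of the interference-limited PPP model.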
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_4", "@cite_36", "@cite_29", "@cite_42", "@cite_43", "@cite_40", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "2065735671", "2121822142", "2963396489", "2080670320" ], "abstract": [ "Stochastic geometry models for wireless communication networks have recently attracted much attention. This is because the performance of such networks critically depends on the spatial configuration of wireless nodes and the irregularity of the node configuration in a real network can be captured by a spatial point process. However, most analysis of such stochastic geometry models for wireless networks assumes, owing to its tractability, that the wireless nodes are deployed according to homogeneous Poisson point processes. This means that the wireless nodes are located independently of each other and their spatial correlation is ignored. In this work we propose a stochastic geometry model of cellular networks such that the wireless base stations are deployed according to the Ginibre point process. The Ginibre point process is one of the determinantal point processes and accounts for the repulsion between the base stations. For the proposed model, we derive a computable representation for the coverage probability—the probability that the signal-to-interference-plus-noise ratio (SINR) for a mobile user achieves a target threshold. To capture its qualitative property, we further investigate the asymptotics of the coverage probability as the SINR threshold becomes large in a special case. We also present the results of some numerical experiments.", "Stochastic geometry proves to be a powerful tool for modeling dense wireless networks adopting random MAC protocols such as ALOHA and CSMA. 
The main strength of this methodology lies in its ability to account for the randomness in the nodes' location jointly with an accurate description at the physical layer, based on the SINR, that allows to consider also random fading on each link. Existing models of CSMA networks adopting the stochastic geometry approach suffer from two important weaknesses: 1) they permit to evaluate only spatial averages of the main performance measures, thus hiding possibly huge discrepancies in the performance achieved by individual nodes; 2) they are analytically tractable only when nodes are distributed over the area according to simple spatial processes (e.g., the Poisson point process). In this paper we show how the stochastic geometry approach can be extended to overcome the above limitations, allowing to obtain node throughput distributions as well as to analyze a significant class of topologies in which nodes are not independently placed.", "This paper analyzes an emerging architecture of cellular network utilizing both planar base stations uniformly distributed in the Euclidean plane and base stations located on roads. An example of this architecture is that where, in addition to conventional planar cellular base stations and users, vehicles also play the role of both base stations and users. A Poisson line process is used to model the road network and, conditionally on the lines, linear Poisson point processes are used to model the vehicles on the roads. The conventional planar base stations and users are modeled by the independent planar Poisson point processes. We use Palm calculus to investigate the statistical properties of a typical user in such a network. Specifically, this paper discusses two different Palm distributions, with respect to the user point processes depending on its type: planar or vehicular. We derive the distance to the nearest base station, the association of the typical users, and the coverage probability of the typical user. 
Furthermore, we provide a comprehensive characterization of coverage of all possible cellular transmissions in this setting, namely, vehicle-to-vehicle, vehicle-to-infrastructure, infrastructure-to-vehicle, and infrastructure-to-infrastructure.", "Broadcast in mobile ad-hoc networks is a challenging and resource demanding task, due to the effects of dynamic network topology and channel randomness. In this paper, we consider 2D wireless ad-hoc networks where nodes are randomly distributed and move following a random direction mobility model. A piece of information is broadcast from an arbitrary node. Based on an in-depth analysis into the popular Susceptible-Infectious-Recovered (SIR) epidemic routing algorithm for mobile ad-hoc networks, an energy and spectrum efficient broadcast scheme is proposed, which is able to adapt to fast-changing network topology and channel randomness. Analytical results are provided to characterize the performance of the proposed scheme, including the fraction of nodes that can receive the information and the delay of information propagation. The accuracy of analytical results is verified using simulations." ] }
1901.10401
2914836496
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs using comparison with existing cellular networks.
However, modeling the locations of vehicles as a planar Poisson point process is inaccurate, since in a planar Poisson point process almost surely no line contains more than two points @cite_21 , whereas the locations of vehicles exhibit a linear pattern when they are on the same straight road. To address these location dependencies, a Poisson-line Cox model was proposed in @cite_26 , where roads and vehicles are conditionally generated in the Euclidean plane. More recently, this model was further studied in @cite_9 @cite_33 @cite_45 to derive the signal-to-interference ratio (SIR) distribution of various links between vehicles and mobiles in the plane. These papers analyzed the typical network performance by considering an instantaneous snapshot of the network geometry under the Palm distribution of the vehicle point process. This paper uses the same approach to characterize short-term performance properties such as the distribution of the SIR and the area spectral efficiency.
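A minimal way to sample the Poisson-line Cox model described above is sketched below. The (angle, signed distance) line parametrization and its intensity normalization follow one common convention, so treat the bookkeeping as an assumption rather than the cited papers' exact setup:

```python
import numpy as np

def poisson_line_cox(lam_l, mu, R, seed=4):
    """Sketch of a Poisson-line Cox point process in a disk of radius R.
    Roads are lines parametrized by (angle, signed distance to the origin);
    conditionally on the lines, vehicles form a 1-D PPP of intensity mu on
    each line. The normalization lam_l*pi*2R is one illustrative convention."""
    rng = np.random.default_rng(seed)
    n_lines = rng.poisson(lam_l * np.pi * 2 * R)    # lines hitting the disk
    pts = []
    for _ in range(n_lines):
        th = rng.uniform(0, np.pi)                  # normal direction of the line
        d = rng.uniform(-R, R)                      # signed distance to the origin
        half = np.sqrt(R * R - d * d)               # half of the chord in the disk
        t = rng.uniform(-half, half, rng.poisson(mu * 2 * half))
        fx, fy = d * np.cos(th), d * np.sin(th)     # foot of the perpendicular
        dx, dy = -np.sin(th), np.cos(th)            # unit vector along the line
        pts.extend(zip(fx + t * dx, fy + t * dy))
    return np.array(pts).reshape(-1, 2)

V = poisson_line_cox(lam_l=0.05, mu=0.3, R=10.0)
print(V.shape)  # (number of vehicles, 2); all points lie inside the disk
```

Unlike a planar PPP, this process places many sampled points exactly on common lines, capturing the linear pattern of vehicles on straight roads.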
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_9", "@cite_21", "@cite_45" ], "mid": [ "2963396489", "2963692345", "2889098578", "2060095247" ], "abstract": [ "This paper analyzes an emerging architecture of cellular network utilizing both planar base stations uniformly distributed in the Euclidean plane and base stations located on roads. An example of this architecture is that where, in addition to conventional planar cellular base stations and users, vehicles also play the role of both base stations and users. A Poisson line process is used to model the road network and, conditionally on the lines, linear Poisson point processes are used to model the vehicles on the roads. The conventional planar base stations and users are modeled by the independent planar Poisson point processes. We use Palm calculus to investigate the statistical properties of a typical user in such a network. Specifically, this paper discusses two different Palm distributions, with respect to the user point processes depending on its type: planar or vehicular. We derive the distance to the nearest base station, the association of the typical users, and the coverage probability of the typical user. Furthermore, we provide a comprehensive characterization of coverage of all possible cellular transmissions in this setting, namely, vehicle-to-vehicle, vehicle-to-infrastructure, infrastructure-to-vehicle, and infrastructure-to-infrastructure.", "In this paper, we consider a vehicular network in which the wireless nodes are located on a system of roads. We model the roadways, which are predominantly straight and randomly oriented, by a Poisson line process (PLP) and the locations of nodes on each road as a homogeneous 1D Poisson point process. Assuming that each node transmits independently, the locations of transmitting and receiving nodes are given by two Cox processes driven by the same PLP. 
For this setup, we derive the coverage probability of a typical receiver, which is an arbitrarily chosen receiving node, assuming independent Nakagami- @math fading over all wireless channels. Assuming that the typical receiver connects to its closest transmitting node in the network, we first derive the distribution of the distance between the typical receiver and the serving node to characterize the desired signal power. We then characterize coverage probability for this setup, which involves two key technical challenges. First, we need to handle several cases as the serving node can possibly be located on any line in the network and the corresponding interference experienced at the typical receiver is different in each case. Second, conditioning on the serving node imposes constraints on the spatial configuration of lines, which requires careful analysis of the conditional distribution of the lines. We address these challenges in order to characterize the interference experienced at the typical receiver. We then derive an exact expression for coverage probability in terms of the derivative of Laplace transform of interference power distribution. We analyze the trends in coverage probability as a function of the network parameters: line density and node density. We also provide some theoretical insights by studying the asymptotic characteristics of coverage probability.", "We study signal-to-interference plus noise ratio (SINR) percolation for Cox point processes, i.e., Poisson point processes with a random intensity measure. SINR percolation was first studied by in the case of a two-dimensional Poisson point process. It is a version of continuum percolation where the connection between two points depends on the locations of all points of the point process. Continuum percolation for Cox point processes was recently studied by Hirsch, Jahnel and Cali. We study the SINR graph model for a stationary Cox point process in two or higher dimensions. 
We show that under suitable moment or boundedness conditions on the path-loss function and the intensity measure, this graph has an infinite connected component if the spatial density of points is large enough and the interferences are sufficiently reduced (without vanishing). This holds in all dimensions larger than 1 if the intensity measure is asymptotically essentially connected, and in two dimensions also if the intensity measure is only stabilizing but the connection radius is large. A prominent example of the intensity measure is the two-dimensional Poisson-Voronoi tessellation. We show that its total edge length in a given square has some exponential moments. We conclude that its SINR graph has an infinite cluster if the path-loss function is bounded and has a power-law decay of exponent at least 3.", "In this paper we propose a new method for the extraction of roads from remotely sensed images. Under the assumption that roads form a thin network in the image, we approximate such a network by connected line segments. To perform this task, we construct a point process able to simulate and detect thin networks. The segments have to be connected, in order to form a line-network. Aligned segments are favored whereas superposition is penalized. These constraints are enforced by the interaction model (called the Candy model). The specific properties of the road network in the image are described by the data term. This term is based on statistical hypothesis tests. The proposed probabilistic model can be written within a Gibbs point process framework. The estimate for the network is found by minimizing an energy function. In order to avoid local minima, we use a simulated annealing algorithm, based on a Monte Carlo dynamics (RJMCMC) for finite point processes. Results are shown on SPOT, ERS and aerial images." ] }
1901.10401
2914836496
This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. At any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius @math centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As a short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As a long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs through comparison with existing cellular networks.
On the other hand, since vehicles are assumed to cover a wide area as they move on roads, it is essential to analyze the network behavior over time. This paper uses the theory of random closed sets @cite_35 @cite_3 to derive the area fractions of the coverage disks and of the progress of coverage over time, respectively. In addition, as in the literature on delay-tolerant networks @cite_31 @cite_39 @cite_46 @cite_28 @cite_10 or on random networks with data mules @cite_22 @cite_1 @cite_6 , in the proposed network users might incur additional delay for link association when the vehicle density is low or the vehicle speed is slow. To quantify this association delay, this paper investigates the network latency by deriving the distribution of the shortest time for a typical roadside device to be covered by any vehicle, or equivalently any disk.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_22", "@cite_28", "@cite_1", "@cite_3", "@cite_39", "@cite_6", "@cite_46", "@cite_10" ], "mid": [ "2057661320", "2113040055", "2144261701", "2038940887" ], "abstract": [ "Vehicular communications are becoming an emerging technology for safety control, traffic control, urban monitoring, pollution control, and many other road safety and traffic efficiency applications. All these applications generate a lot of data which should be distributed among communication parties such as vehicles and users in an efficient manner. On the other hand, the generated data cause a significant load on a network infrastructure, which aims at providing uninterrupted services to the communication parties in an urban scenario. To balance the load on the network in such urban scenarios, frequently accessed contents should be cached at specified locations either in the vehicles or at some other sites on the infrastructure providing connectivity to the vehicles. However, due to the high mobility and sparse distribution of the vehicles on the road, sometimes it is not feasible to place the contents on the existing infrastructure, and useful information generated from the vehicles may not be sent to its final destination. To address this issue, in this paper, we propose a new peer-to-peer (P2P) cooperative caching scheme. To minimize the load on the infrastructure, traffic information among vehicles is shared in a P2P manner using a Markov chain model with three states. The replacement of existing data to accommodate newly arrived data is achieved in a probabilistic manner. The probability is calculated using the time to stay in a waiting state and the frequency of access of a particular data item in a given time interval. The performance of the proposed scheme is evaluated in comparison to those of existing schemes with respect to metrics such as network congestion, query delay, and hit ratio. 
Analysis results show that the proposed scheme has reduced the congestion and query delay by 30% with an increase in the hit ratio by 20%.", "Supporting future large-scale vehicular networks is expected to require a combination of fixed roadside infrastructure and mobile in-vehicle technologies. The need for an infrastructure, however, considerably decreases the deployment area of VANET applications. In this paper, we propose a self-organizing mechanism to emulate a geo-localized virtual infrastructure (GVI). The latter is emulated by a bounded-size subset of vehicles currently populating the geographic region where the virtual infrastructure is to be deployed. An analytical model is proposed to study this mechanism. More precisely, this model is proposed to study the GVI in the frame of its main use: data dissemination in VANETs. Despite being simple, the proposed model can accurately predict the system performance such as the probability that a vehicle is informed, and the average number of duplicate messages received by a vehicle, and allows a careful investigation of the impact of vehicular traffic properties and system parameters on performance criteria. Analytical and simulation results show that the proposed GVI mechanism can periodically disseminate the data within an intersection area, efficiently utilize the limited bandwidth and ensure high delivery ratio.
Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to the changes of the network and traffic regimes by algorithms that can run on resource-constrained devices. In this paper, a Markov chain is proposed to model these relations by simple expressions without giving up the accuracy. In contrast to previous work, the presence of limited number of retransmissions, acknowledgments, unsaturated traffic, packet size, and packet copying delay due to hardware limitations is accounted for. The model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while guaranteeing a given successful packet reception probability and delay constraints in the packet transmission. The algorithm does not require any modification of the IEEE 802.15.4 medium access control and can be easily implemented on network devices. The algorithm has been experimentally implemented and evaluated on a testbed with off-the-shelf wireless sensor devices. Experimental results show that the analysis is accurate, that the proposed algorithm satisfies reliability and delay constraints, and that the approach reduces the energy consumption of the network under both stationary and transient conditions. Specifically, even if the number of devices and traffic configuration change sharply, the proposed parallel and distributed algorithm allows the system to operate close to its optimal state by estimating the busy channel and channel access probabilities. Furthermore, results indicate that the protocol reacts promptly to errors in the estimation of the number of devices and in the traffic load that can appear due to device mobility. 
It is also shown that the effect of imperfect channel and carrier sensing on system performance heavily depends on the traffic load and limited range of the protocol parameters.", "Wireless vehicle-to-vehicle (V2V) and vehicle-toinfrastructure (V2I) communication holds great promise for significantly reducing the human and financial costs of vehicle collisions. A common characteristic of this communication is the broadcast of a device's core state information at regular intervals (e.g., vehicle speed and location or traffic signal state and timing). Unless controlled, the aggregate of these broadcasts will congest the channel under dense traffic scenarios, reducing the effectiveness of collision avoidance applications that use transmitted information. Active congestion control using distributed techniques is a topic of great interest for establishing the scalability of this technology. This paper defines a new adaptive congestion control algorithm that can be applied to the message rate of devices in this vehicular environment. While other published approaches rely on binary control, the LInear MEssage Rate Integrated Control (LIMERIC) algorithm takes advantage of full-precision control inputs that are available on the wireless channel. The result is provable convergence to fair and efficient channel utilization in the deterministic environment, under simple criteria for setting adaptive parameters. This “perfect” convergence avoids the limit cycle behavior that is inherent to binary control. We also discuss several practical aspects associated with implementing LIMERIC, including guidelines for the choice of system parameters to obtain desired utilization outcomes, a gain saturation technique that maintains robust convergence under all conditions, convergence with asynchronous updates, and using channel load to determine the aggregate message rate that is observable at a receiver. 
This paper also extends the convergence analysis for two important cases, i.e., measurement noise in the input signal and delay in the update process. This paper illustrates key analytical results using MATLAB numerical results and employs standard NS-2 simulations to demonstrate the performance of LIMERIC in several high-density scenarios." ] }
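The space-time vehicular model in the record above (roads as a Poisson line process, vehicles as a Cox point process, Nakagami-m fading as in the earlier coverage analysis) can be probed numerically. The sketch below is a Monte Carlo estimate of the SIR coverage of a typical receiver at the origin under closest-node association; the line-process normalisation and every numerical parameter are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def coverage_probability(theta_db, line_density=0.1, node_density=0.5,
                         m=2, alpha=4.0, radius=30.0, trials=2000, seed=1):
    """Monte Carlo SIR coverage of the typical receiver at the origin.

    Transmitters form a Cox process: Poisson lines through a disk of the
    given radius, with 1-D Poisson points on each line.  Fading is i.i.d.
    Nakagami-m, i.e. power gains ~ Gamma(m, 1/m)."""
    rng = np.random.default_rng(seed)
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(trials):
        # Lines hitting the disk: signed distance r uniform on [-R, R];
        # the mean count pi * line_density * R is one common normalisation,
        # treated here as an assumption.
        n_lines = rng.poisson(np.pi * line_density * radius)
        dists = []
        for _ in range(n_lines):
            r = rng.uniform(-radius, radius)
            half_chord = np.sqrt(max(radius ** 2 - r ** 2, 0.0))
            n_pts = rng.poisson(2 * half_chord * node_density)
            t = rng.uniform(-half_chord, half_chord, size=n_pts)
            dists.append(np.sqrt(r ** 2 + t ** 2))
        d = np.concatenate(dists) if dists else np.array([])
        if d.size == 0:
            continue                         # no transmitter: not covered
        g = rng.gamma(m, 1.0 / m, size=d.size)   # Nakagami-m power fading
        p = g * d ** (-alpha)
        s = np.argmin(d)                     # serve from the closest node
        sig = p[s]
        interf = p.sum() - sig
        if sig > theta * interf:
            covered += 1
    return covered / trials
```

With a fixed seed the same fading and point realisations are reused across thresholds, so the estimate is monotonically decreasing in the SIR threshold, matching the trend analysis described in the abstracts.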
1901.10265
2912727640
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
The study by @cite_22 explored the effects of bias in image search results for occupations on the perception of people in those occupations. The major aim of the study was to understand whether the biased portrayal of minorities in image search results leads to stereotypes or not. Such a phenomenon has been observed in other forms of media like television @cite_20 . @cite_12 also showed that the annotated datasets of English and German, used for various NLP tasks and tools, are age-biased. Studies like these have brought to light the problem of bias in common ML algorithms and led to a surge of research in fair algorithms. In the field of computer vision, @cite_17 showed that the existing facial analysis datasets are biased with respect to gender and skin type. Summarization algorithms using such datasets can lead to biased results and hence a feedback loop. Correspondingly, it becomes important to develop summarization algorithms that ensure ``visible diversity'' even when using biased datasets.
{ "cite_N": [ "@cite_12", "@cite_22", "@cite_20", "@cite_17" ], "mid": [ "2149252982", "2584117724", "2507358938", "2796868841" ], "abstract": [ "Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. We also discuss tensions between desires for high-quality results and broader societ al goals for equality of representation in this space.", "When analysing human activities using data mining or machine learning techniques, it can be useful to infer properties such as the gender or age of the people involved. This paper focuses on the sub-problem of gender recognition, which has been studied extensively in the literature, with two main problems remaining unsolved: how to improve the accuracy on real-world face images, and how to generalise the models to perform well on new datasets. 
We address these problems by collecting five million weakly labelled face images, and performing three different experiments, investigating: the performance difference between convolutional neural networks (CNNs) of differing depths and a support vector machine approach using local binary pattern features on the same training data, the effect of contextual information on classification accuracy, and the ability of convolutional neural networks and large amounts of training data to generalise to cross-database classification. We report record-breaking results on both the Labeled Faces in the Wild (LFW) dataset, achieving an accuracy of 98.90%, and the Images of Groups (GROUPS) dataset, achieving an accuracy of 91.34% for cross-database gender classification.", "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [, 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. 
Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community. The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. We conclude by summarizing promising paths for future research.", "We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at this http URL" ] }
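The control-set idea from the record above (select query results whose diversity visibly matches a reference set, without sensitive-attribute labels) can be sketched in feature space. The assign-then-pick heuristic below is our own illustration, not the paper's algorithm; in practice the feature vectors would come from a CNN such as VGG-16, while here they are synthetic stand-ins:

```python
import numpy as np

def diverse_summary(cand, query, control, k):
    """Pick k candidates that are relevant to the query while covering the
    control set: each candidate is assigned to its nearest control image
    (cosine similarity), and the best query match is taken from each bin.
    cand: (n, d) candidate features, query: (d,), control: (c, d)."""
    cand = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    control = control / np.linalg.norm(control, axis=1, keepdims=True)
    q_sim = cand @ query                 # query relevance per candidate
    c_sim = cand @ control.T             # (n, c) similarity to control images
    owner = c_sim.argmax(axis=1)         # nearest control image per candidate
    chosen = []
    for j in range(control.shape[0]):
        pool = np.flatnonzero(owner == j)
        if pool.size:
            chosen.append(int(pool[np.argmax(q_sim[pool])]))
    # top up with the best remaining query matches if some bins were empty
    chosen += [int(i) for i in np.argsort(-q_sim) if int(i) not in chosen]
    return np.array(chosen[:k])

# toy demo: two control "looks", four candidates, pick k=2
cand = np.array([[1.0, 0.1, 0.0, 0.0],
                 [0.9, 0.2, 0.0, 0.0],
                 [0.1, 1.0, 0.0, 0.0],
                 [0.2, 0.9, 0.0, 0.0]])
control = np.eye(4)[:2]
query = np.ones(4)
picked = diverse_summary(cand, query, control, k=2)
```

In the toy demo the two clusters of candidates correspond to the two control images, so the selection draws one image from each cluster, i.e. the output is as visibly diverse as the control set.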
1901.10265
2912727640
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
There are other works that attempt to ensure diversity in the learning algorithm without using gender or race labels. @cite_23 consider the problem of gender bias in word embeddings trained on Google News articles and provide methods to modify the embeddings to debias them. Through standard gender-related words, they identify the direction of the gender bias in given word embeddings and then attempt to remove it.
{ "cite_N": [ "@cite_23" ], "mid": [ "2950018712", "2483215953", "2921633540", "2888167352" ], "abstract": [ "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to \"debias\" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. 
We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle this problem, and propose methods for significantly reducing this gender bias in word embeddings, demonstrating convincing results. However, we argue that this removal is superficial. While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it. The gender bias information is still reflected in the distances between \"gender-neutralized\" words in the debiased embeddings, and can be recovered from them. We present a series of experiments to support this claim, for two debiasing methods. 
We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.", "Abusive language detection models tend to have a problem of being biased toward identity words of a certain group of people because of imbalanced training datasets. For example, \"You are a good woman\" was considered \"sexist\" when trained on an existing dataset. Such model bias is an obstacle for models to be robust enough for practical use. In this work, we measure gender biases on models trained with different abusive language datasets, while analyzing the effect of different pre-trained word embeddings and model architectures. We also experiment with three bias mitigation methods: (1) debiased word embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a larger corpus. These methods can effectively reduce gender bias by 90-98% and can be extended to correct model bias in other scenarios." ] }
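The debiasing recipe discussed in the record above (identify the gender direction from definitional word pairs, then remove projections onto it) is small enough to sketch directly. The toy 2-d embeddings below are made up for illustration; a real run would use trained word vectors:

```python
import numpy as np

def gender_direction(emb, pairs):
    """Top right-singular vector of the stacked definitional difference
    vectors (he-she, man-woman, ...) as an estimate of the bias direction."""
    diffs = np.array([emb[a] - emb[b] for a, b in pairs])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]                       # unit-norm direction

def neutralize(vec, direction):
    """Project out the bias direction and renormalise the vector."""
    v = vec - (vec @ direction) * direction
    return v / np.linalg.norm(v)

# toy 2-d embeddings: the first axis carries the "gender" signal
emb = {"he": np.array([1.0, 0.5]), "she": np.array([-1.0, 0.5]),
       "man": np.array([0.9, 0.4]), "woman": np.array([-0.9, 0.4]),
       "doctor": np.array([0.3, 0.8])}
g = gender_direction(emb, [("he", "she"), ("man", "woman")])
doctor_debiased = neutralize(emb["doctor"], g)
```

After neutralization the occupation word has exactly zero component along the estimated gender direction, which is the "direct bias" removal step; the follow-up critique cited above argues that indirect bias can survive this projection.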
1901.10265
2912727640
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
To identify image similarity, a number of techniques have been explored @cite_25 . Before the use of Convolutional Neural Networks, the usual techniques included the following: blob detection @cite_15 , which involves finding the parts of the image that are consistent across all images; template matching @cite_7 , where we are given a template image against which all other images are matched after pre-processing steps; and the SURF feature extractor @cite_9 , which detects local features, generates their descriptions, and then matches these features across images.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_25", "@cite_7" ], "mid": [ "1772650917", "2963502507", "2526782364", "2509155366" ], "abstract": [ "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. 
We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Automatically detecting illustrations is needed for the target system. Deep Convolutional Neural Networks have been successful in computer vision tasks. DCNN with fine-tuning outperformed the other models including handcrafted features. Systems for aggregating illustrations require a function for automatically distinguishing illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by designing basic features that were deemed useful for classification achieved an accuracy of only about 58%. 
On the other hand, deep neural networks had been successful in computer vision tasks, and convolutional neural networks (CNNs) had performed well at extracting such useful image features automatically. We evaluated alternative methods to implement this classification functionality with a focus on deep neural networks. As a result of the experiments, the method that fine-tuned a deep convolutional neural network (DCNN) achieved 96.8% accuracy, outperforming the other models including the custom CNN models that were trained from scratch. We conclude that DCNN with fine-tuning is the best method for implementing a function for automatically distinguishing illustrations from photographs." ] }
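Of the pre-CNN similarity techniques listed in the related work above, template matching is simple enough to sketch from scratch. The normalized-cross-correlation search below is a didactic illustration; a real pipeline would use an optimised routine such as OpenCV's matchTemplate:

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) top-left corner of the window in `image` with
    the highest normalized cross-correlation against `template`.
    Brute-force sketch of classic template matching."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()                # zero-mean window
            denom = np.sqrt((w ** 2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# demo: plant the template inside a blank image and recover its location
img = np.zeros((12, 12))
patch = np.arange(9.0).reshape(3, 3)
img[4:7, 5:8] = patch
loc = match_template(img, patch)
```

The exact planted window correlates perfectly (score 1 after mean removal), so the search recovers the insertion point; the mean/norm normalisation is what makes the score robust to brightness and contrast shifts, the usual motivation for NCC over raw correlation.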
1901.10265
2912727640
Case studies, such as , 2015 have shown that in image summarization, such as with Google Image Search, the people in the results presented for occupations are more imbalanced with respect to sensitive attributes such as gender and ethnicity than the ground truth. Most of the existing approaches to correct for this problem in image summarization assume that the images are labelled and use the labels for training the model and correcting for biases. However, these labels may not always be present. Furthermore, it is often not possible (nor even desirable) to automatically classify images by sensitive attributes such as gender or race. Moreover, balancing according to the labels does not guarantee that the diversity will be visibly apparent - arguably the only metric that matters when selecting diverse images. We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to produce images in response to a query which is similarly visibly diverse. We implement this approach using pre-trained and modified Convolutional Neural Networks like VGG-16, and evaluate our approach empirically on the Image dataset compiled and used by , 2015. We compare our results with the Google Image Search results from , 2015 and natural baselines and observe that our algorithm produces images that are accurate with respect to their similarity to the query images (on par with that of the Google Image Search results), but significantly outperforms with respect to visible diversity as measured by their similarity to our diverse control set.
This method of using pre-trained models for other tasks is also called "transfer learning". This technique has been used in many other classification tasks, such as thoraco-abdominal lymph node detection and interstitial lung disease classification @cite_2 , or object and action classification @cite_1 , and has shown significant improvement compared to previous work.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2253429366", "2560476520", "2886327376", "2321533354" ], "abstract": [ "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. 
Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.", "Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as their clinical manifestations are similar. In order to assist with the diagnosis, computer-aided diagnosis (CAD) systems have been developed. These commonly rely on a fixed scale classifier that scans CT images, recognizes textural lung patterns and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2 in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem and show that the method used for training a network can be as important as designing its architecture.", "In this work, we exploit the task of joint classification and weakly supervised localization of thoracic diseases from chest radiographs, with only image-level disease labels coupled with disease severity-level (DSL) information of a subset. 
A convolutional neural network (CNN) based attention-guided curriculum learning (AGCL) framework is presented, which leverages the severity-level attributes mined from radiology reports. Images in order of difficulty (grouped by different severity-levels) are fed to CNN to boost the learning gradually. In addition, highly confident samples (measured by classification probabilities) and their corresponding class-conditional heatmaps (generated by the CNN) are extracted and further fed into the AGCL framework to guide the learning of more distinctive convolutional features in the next iteration. A two-path network architecture is designed to regress the heatmaps from selected seed samples in addition to the original classification task. The joint learning scheme can improve the classification and localization performance along with more seed samples for the next iteration. We demonstrate the effectiveness of this iterative refinement framework via extensive experimental evaluations on the publicly available ChestXray14 dataset. AGCL achieves over 5.7 (averaged over 14 diseases) increase in classification AUC and 7 11 increases in Recall Precision for the localization task compared to the state of the art.", "We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. 
Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with 51.8% for detection and 68.6% for classification, and reduce the gap with supervised learning (56.5% and 78.2% respectively)." ] }
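The fine-tuning recipe referenced above (reuse a pre-trained trunk as a frozen feature extractor, retrain only the classifier head on the target data) can be sketched in a few lines. Everything here is illustrative: the "frozen extractor" is a toy linear map and the two classes are made-up stand-ins, not any cited model's architecture.

```python
import math

# Toy stand-in for a pre-trained feature extractor: a *frozen* linear map.
# In real transfer learning this would be the convolutional trunk of a DCNN.
def extract_features(x):
    W = [[1.0, -0.5], [0.3, 0.8]]          # frozen weights, never updated
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, lr=0.5, epochs=200):
    """Train only the classifier head on top of the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = extract_features(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y                      # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# Four toy samples standing in for "illustration" (1) vs "photograph" (0).
data = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
w, b = train_head(data)
preds = [int(sigmoid(sum(wi * fi for wi, fi in zip(w, extract_features(x))) + b) > 0.5)
         for x, _ in data]
```

Only `w` and `b` are ever updated; the extractor's weights stay fixed, which is the "off-the-shelf features" end of the spectrum discussed in the cited work. Full fine-tuning would additionally update the extractor at a small learning rate.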
1901.10240
2914485273
Style transfer is a technique for combining two images based on the activations and feature statistics in a deep learning neural network architecture. This paper studies the analogous task in the audio domain and takes a critical look at the problems that arise when adapting the original vision-based framework to handle spectrogram representations. We conclude that CNN architectures with features based on 2D representations and convolutions are better suited for visual images than for time–frequency representations of audio. Despite the awkward fit, experiments show that the Gram matrix determined “style” for audio is more closely aligned with timbral signatures without temporal structure, whereas network layer activity determining audio “content” seems to capture more of the pitch and rhythmic structures. We shed insight on several reasons for the domain differences with illustrative examples. We motivate the use of several types of one-dimensional CNNs that generate results that are better aligned with intuitive notions of audio texture than those based on existing architectures built for images. These ideas also prompt an exploration of audio texture synthesis with architectural variants for extensions to infinite textures, multi-textures, parametric control of receptive fields and the constant-Q transform as an alternative frequency scaling for the spectrogram.
If articulating the intuitive distinction between style and content for images is difficult, it is even more so for sound, in particular non-speech sound. The recent use of neural network-derived statistics to describe such perceptual qualities for texture synthesis @cite_0 and style transfer @cite_22 offers a fresh computational outlook on the subject. This paper examines several issues with these image-based techniques when adapting them for the audio domain.
{ "cite_N": [ "@cite_0", "@cite_22" ], "mid": [ "2766465839", "2585235684", "2302243225", "2603777577" ], "abstract": [ "“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.", "Recently, methods have been proposed that perform texture synthesis and style transfer by using convolutional neural networks (e.g. [2015,2016]). These methods are exciting because they can in some cases create results with state-of-the-art quality. However, in this paper, we show these methods also have limitations in texture quality, stability, requisite parameter tuning, and lack of user controls. This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates these issues. We first give a mathematical explanation of the source of instabilities in many previous approaches. 
We then improve these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar. We also show how to integrate localized style losses in our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and more stability over the optimization.", "Convolutional neural networks (CNNs) have proven highly effective at image synthesis and style transfer. For most users, however, using them as tools can be a challenging task due to their unpredictable behavior that goes against common intuitions. This paper introduces a novel concept to augment such generative architectures with semantic annotations, either by manually authoring pixel labels or using existing solutions for semantic segmentation. The result is a content-aware generative algorithm that offers meaningful control over the outcome. Thus, we increase the quality of images generated by avoiding common glitches, make the results look significantly more plausible, and extend the functional range of these algorithms---whether for portraits or landscapes, etc. Applications include semantic style transfer and turning doodles with few colors into masterful paintings!", "recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. 
In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network." ] }
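The Gram-matrix "style" statistic at the heart of these methods sums feature co-activations over all positions, which is exactly why it discards temporal structure (capturing timbre-like texture rather than rhythm, in audio terms). A minimal sketch with toy feature maps rather than real CNN activations:

```python
def gram(features):
    """Gram matrix of a feature map given as C channels x N positions.

    G[i][j] = (1/N) * sum_n F[i][n] * F[j][n].
    Position is summed out, so spatial/temporal structure is discarded;
    only channel co-activation statistics ("texture"/"timbre") remain.
    """
    C, N = len(features), len(features[0])
    return [[sum(features[i][n] * features[j][n] for n in range(N)) / N
             for j in range(C)] for i in range(C)]

# Two channels over four positions; shifting the channels in time
# leaves the Gram matrix unchanged -- style is position-invariant.
F1 = [[1.0, 0.0, 1.0, 0.0],
      [0.0, 2.0, 0.0, 2.0]]
F2 = [[0.0, 1.0, 0.0, 1.0],   # same channels, shifted by one position
      [2.0, 0.0, 2.0, 0.0]]
G1, G2 = gram(F1), gram(F2)
```

That `G1 == G2` despite the time shift illustrates the paper's observation: a Gram-based audio "style" aligns with timbral signatures without temporal structure.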
1901.10240
2914485273
Style transfer is a technique for combining two images based on the activations and feature statistics in a deep learning neural network architecture. This paper studies the analogous task in the audio domain and takes a critical look at the problems that arise when adapting the original vision-based framework to handle spectrogram representations. We conclude that CNN architectures with features based on 2D representations and convolutions are better suited for visual images than for time–frequency representations of audio. Despite the awkward fit, experiments show that the Gram matrix determined “style” for audio is more closely aligned with timbral signatures without temporal structure, whereas network layer activity determining audio “content” seems to capture more of the pitch and rhythmic structures. We shed insight on several reasons for the domain differences with illustrative examples. We motivate the use of several types of one-dimensional CNNs that generate results that are better aligned with intuitive notions of audio texture than those based on existing architectures built for images. These ideas also prompt an exploration of audio texture synthesis with architectural variants for extensions to infinite textures, multi-textures, parametric control of receptive fields and the constant-Q transform as an alternative frequency scaling for the spectrogram.
While all the related works studied certain aspects of the problem, none go into detail on the challenges posed by the nature of sound and how it is represented, especially in relation to what is essentially a vision-inspired model in the CNN. Hence, the focus of this paper is not a presentation of state-of-the-art audio style transfer but an analysis of the issues involved in adopting existing style transfer and texture synthesis mechanisms for audio. In paintings, Gatys' style formulation preserves brush strokes, including, to a degree, texture, direction, and colour information, leading up to larger spatial scale motifs (such as the swirls in the sky in The Starry Night, see Fig.1 in @cite_22 ) as the receptive field grows larger deeper in the network. How these style concepts translate to style in the audio domain is not straightforward and forms part of the analysis here.
{ "cite_N": [ "@cite_22" ], "mid": [ "2782490852", "2766465839", "2275086408", "2604721644" ], "abstract": [ "There has been fascinating work on creating artistic transformations of images by Gatys. This was revolutionary in how we can in some sense alter the 'style' of an image while generally preserving its 'content'. In our work, we present a method for creating new sounds using a similar approach, treating it as a style-transfer problem, starting from a random-noise input signal and iteratively using back-propagation to optimize the sound to conform to filter-outputs from a pre-trained neural architecture of interest. For demonstration, we investigate two different tasks, resulting in bandwidth expansion compression, and timbral transfer from singing voice to musical instruments. A feature of our method is that a single architecture can generate these different audio-style-transfer types using the same set of parameters which otherwise require different complex hand-tuned diverse signal processing pipelines.", "“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. 
In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.", "We explore the method of style transfer presented in the article \"A Neural Algorithm of Artistic Style\" by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge (arXiv:1508.06576). We first demonstrate the power of the suggested style space on a few examples. We then vary different hyper-parameters and program properties that were not discussed in the original paper, among which are the recognition network used, starting point of the gradient descent and different ways to partition style and content layers. We also give a brief comparison of some of the existing algorithm implementations and deep learning frameworks used. To study the style space further we attempt to generate synthetic images by maximizing a single entry in one of the Gram matrices @math and some interesting results are observed. Next, we try to mimic the sparsity and intensity distribution of Gram matrices obtained from a real painting and generate more complex textures. Finally, we propose two new style representations built on top of network's features and discuss how one could be used to achieve local and potentially content-aware style transfer.", "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. 
Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits." ] }
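The optimization-based transfer analyzed in this related work minimizes a weighted sum of a content term (distance between raw activations, which keeps structure) and a style term (distance between Gram matrices, which keeps only texture). A toy sketch of that objective, with assumed 2-channel feature maps and illustrative alpha/beta weights, not any specific paper's constants:

```python
def gram(F):
    """Channel-by-channel co-activation matrix, position averaged out."""
    C, N = len(F), len(F[0])
    return [[sum(F[i][n] * F[j][n] for n in range(N)) / N
             for j in range(C)] for i in range(C)]

def content_loss(F, F_content):
    """Mean squared distance between raw activations (keeps structure)."""
    return sum((a - b) ** 2
               for row, row_c in zip(F, F_content)
               for a, b in zip(row, row_c)) / (len(F) * len(F[0]))

def style_loss(F, F_style):
    """Mean squared distance between Gram matrices (keeps only texture)."""
    G, G_s = gram(F), gram(F_style)
    C = len(G)
    return sum((G[i][j] - G_s[i][j]) ** 2
               for i in range(C) for j in range(C)) / C ** 2

def total_loss(F, F_content, F_style, alpha=1.0, beta=10.0):
    """Gatys-style objective: alpha * content + beta * style."""
    return alpha * content_loss(F, F_content) + beta * style_loss(F, F_style)

F_content = [[1.0, 0.0], [0.0, 1.0]]   # toy "content" activations
F_style   = [[2.0, 0.0], [0.0, 2.0]]   # toy "style" activations
loss = total_loss(F_content, F_content, F_style)
```

In the full method this loss is minimized over the input signal by back-propagation; initializing from the content (as the audio variants above do) rather than from noise changes which term dominates early in the optimization.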
1901.10441
2912162169
Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today generally obtuse. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
Significant prior literature focuses on performing passive measurement inferences. In addition to legitimate traffic, non-trivial levels of "background radiation" @cite_41 arrive at networks due to self-propagating malware, security scanners, and attacks. Casado et al. show the wealth of information that can be gleaned passively @cite_9 , while Durairajan et al. leverage NTP server logs to estimate Internet latencies @cite_38 . Dainotti et al. demonstrate how background radiation @cite_37 provides insight into global outage and censorship events @cite_12 . Finally, Sargent et al. infer network policies from traffic arriving at darknets @cite_7 . While such opportunistic measurement is powerful, analysis and inference are complicated by the vagaries of attacks, the spread and mitigation of malware, and which networks are affected. In addition to generalizing opportunistic measurement, we show that periodic IHBs permit stronger probabilistic inferences.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_41", "@cite_9", "@cite_12" ], "mid": [ "2158060559", "2770205545", "2785077884", "2535351647" ], "abstract": [ "Monitoring any portion of the Internet address space reveals incessant activity. This holds even when monitoring traffic sent to unused addresses, which we term \"background radiation. \" Background radiation reflects fundamentally nonproductive traffic, either malicious (flooding backscatter, scans for vulnerabilities, worms) or benign (misconfigurations). While the general presence of background radiation is well known to the network operator community, its nature has yet to be broadly characterized. We develop such a characterization based on data collected from four unused networks in the Internet. Two key elements of our methodology are (i) the use of filtering to reduce load on the measurement system, and (ii) the use of active responders to elicit further activity from scanners in order to differentiate different types of background radiation. We break down the components of background radiation by protocol, application, and often specific exploit; analyze temporal patterns and correlated activity; and assess variations across different networks and over time. While we find a menagerie of activity, probes from worms and autorooters heavily dominate. We conclude with considerations of how to incorporate our characterizations into monitoring and detection activities.", "Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually \"plausible\" and physically \"feasible\" images with minimal hallucination. 
To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.", "Incorporating encoding-decoding nets with adversarial nets has been widely adopted in image generation tasks. We observe that the state-of-the-art achievements were obtained by carefully balancing the reconstruction loss and adversarial loss, and such balance shifts with different network structures, datasets, and training strategies. Empirical studies have demonstrated that an inappropriate weight between the two losses may cause instability, and it is tricky to search for the optimal setting, especially when lacking prior knowledge on the data and network. This paper gives the first attempt to relax the need of manual balancing by proposing the concept of , where a novel network structure is designed that explicitly disentangles the backpropagation paths of the two losses. Experimental results demonstrate the effectiveness, robustness, and generality of the proposed method. 
The other contribution of the paper is the design of a new evaluation metric to measure the image quality of generative models. We propose the so-called (NRDS), which introduces the idea of relative comparison, rather than providing absolute estimates like existing metrics.", "For achieving optimized spectrum usage, most existing opportunistic spectrum sensing and access protocols model the spectrum sensing and access problem as a partially observed Markov decision process by assuming that the information states and or the primary users' (PUs) traffic statistics are known a priori to the secondary users (SUs). While theoretically sound, the existing solutions may not be effective in practice due to two main concerns. First, the assumptions are not practical, as before the communication starts, PUs' traffic statistics may not be readily available to the SUs. Second and more serious, existing approaches are extremely vulnerable to malicious jamming attacks. By leveraging the same statistic information and stochastic dynamic decision-making process that the SUs would follow, a cognitive attacker with sensing capability can sense and jam the channels to be accessed by SUs, while not interfering PUs. To address these concerns, we formulate the antijamming, multichannel access problem as a nonstochastic multi-armed bandit problem. By leveraging probabilistically shared information between the sender and the receiver, our proposed protocol enables them to hop to the same set of channels with high probability while gaining resilience to jamming attacks without affecting PUs' activities. We analytically show the convergence of the learning algorithms and derive the performance bound based on regret . We further discuss the problem of tracking the best adaptive strategy and characterize the performance bound based on a new regret . 
Extensive simulation results show that the probabilistic spectrum sensing and access protocol can overcome the limitation of existing solutions and is highly resilient to various jamming attacks even with jammed acknowledgment (ACK) information." ] }
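One way to see why a predictable heartbeat supports stronger probabilistic inference than opportunistic background radiation: silence becomes evidence. If n senders each emit one heartbeat per period and losses are independent, observing zero heartbeats over a window is exponentially unlikely while the network is actually up. A sketch with purely illustrative parameters (sender count, loss rate, and window are assumptions, not values from the paper):

```python
def silence_probability(n_senders, loss_rate, periods):
    """P(observe zero heartbeats over `periods` intervals) given the
    network is up and each heartbeat is lost independently with
    probability `loss_rate`. Expected heartbeats: n_senders * periods."""
    return loss_rate ** (n_senders * periods)

# Illustrative numbers: 10 senders, a (pessimistic) 20% loss rate,
# and a 3-period observation window.
p_silent_but_up = silence_probability(10, 0.2, 3)
```

Even under heavy loss the false-outage probability is vanishingly small, so aggregated heartbeat absence localizes outages with high confidence; opportunistic traffic offers no such guarantee because its baseline rate is unpredictable.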
1901.10441
2912162169
Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today generally obtuse. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
Individual networks frequently perform regular pair-wise measurements between nodes or networks under their control; for example, Content Distribution Networks (CDNs) run continual measurements to detect and route around path problems @cite_39 . The IHB seeks to push such functionality deeper into the network stack so that all networks are empowered with such knowledge without having to implement their own application-layer protocols and measurements. As importantly, the IHB disseminates global knowledge about the Internet, rather than focusing on an individual network.
{ "cite_N": [ "@cite_39" ], "mid": [ "2144363069", "2013180246", "2153517122", "63809527" ], "abstract": [ "Content distribution networks (CDNs) need to make decisions, such as server selection and routing, to improve performance for their clients. The performance may be limited by various factors such as packet loss in the network, a small receive buffer at the client, or constrained server CPU and disk resources. Conventional measurement techniques are not effective for distinguishing these performance problems: application-layer logs are too coarse-grained, while network-level traces are too expensive to collect all the time. We argue that passively monitoring the transport-level statistics in the server's network stack is a better approach. This paper presents a tool for monitoring and analyzing TCP statistics, and an analysis of a CoralCDN node in PlanetLab for six weeks. Our analysis shows that more than 10 of connections are server-limited at least 40 of the time, and many connections are limited by the congestion window despite no packet loss. Still, we see that clients in 377 Autonomous Systems (ASes) experience persistent packet loss. By separating network congestion from other performance problems, our analysis provides a much more accurate view of the performance of the network paths than what is possible with server logs alone.", "In multi-hop ad hoc networks, stations may pump more traffic into the networks than can be supported, resulting in high packet-loss rate, re-routing instability and unfairness problems. This paper shows that controlling the offered load at the sources can eliminate these problems. To verify the simulation results, we set up a real 6-node multi-hop network. The experimental measurements confirm the existence of the optimal offered load. In addition, we provide an analysis to estimate the optimal offered load that maximizes the throughput of a multi-hop traffic flow. 
We believe this is a first paper in the literature to provide a quantitative analysis (as opposed to simulation) for the impact of hidden nodes and signal capture on sustainable throughput. The analysis is based on the observation that a large-scale 802.11 network with hidden nodes is a network in which the carrier-sensing capability breaks down partially. Its performance is therefore somewhere between that of a carrier-sensing network and that of an Aloha network. Indeed, our analytical closed-form solution has the appearance of the throughput equation of the Aloha network. Our approach allows one to identify whether the performance of an 802.11 network is hidden-node limited or spatial-reuse limited.", "In wireless mesh networks (WMNs) traffic is routed from mobile clients through a multihop wireless backbone to and from Internet gateways (IGWs). Because of their limited number, IGWs become the major traffic bottlenecks. The purpose of this work is to explore the benefits of introducing load-dependent routing metrics to increase WMN capacity and performance. We use weighted shortest path routing and introduce LAETT a weight metric that captures both traffic load and link quality. We compare the scheme to ETT and MIC, two load independent metrics, and show in simulation its benefits for various network and traffic configurations.", "Wireless Sensor Networks (WSNs) are being designed to solve a gamut of interesting real-world problems. Limitations on available energy and bandwidth, message loss, high rates of node failure, and communication restrictions pose challenging requirements for these systems. Beyond these inherent limitations, both the possibility of node mobility and energy conserving protocols that power down nodes introduce additional complexity to routing protocols that depend on up to date routing or neighborhood tables. Such state-based protocols suffer excessive delay or message loss, as system dynamics require expensive upkeep of these tables. 
Utilizing characteristics of high node density and location awareness, we introduce IGF, a location-aware routing protocol that is robust and works without knowledge of the existence of neighboring nodes (state-free). We compare our work against established routing protocols to demonstrate the efficacy of our solution when nodes are mobile or periodically sleep to conserve energy. We show that IGF far outperforms these protocols, in some cases delivering close to 100% of the packets transmitted while alternate solutions fail to even find a path between a source and destination. Specifically, we show that our protocol demonstrates a vast improvement over prior work using metrics of delivery ratio, control overhead, and end-to-end delay." ] }
1901.10435
2919910909
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
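The abstract's scoring-function component maps a performance metric into a numerical movement-quality score. A minimal sketch of one such mapping, assuming a simple exponential scoring function (the functional form and the `alpha` steepness parameter are illustrative assumptions, not the paper's actual choice):

```python
import math

def quality_score(distance, alpha=1.0):
    # Map a non-negative performance metric (e.g., a deviation from a
    # healthy-movement template) into a quality score in (0, 1].
    # `alpha` is a hypothetical steepness parameter.
    return math.exp(-alpha * distance)

# A perfect repetition (zero deviation) scores 1.0; larger deviations
# decay smoothly toward 0, giving a continuous quality scale.
print(quality_score(0.0))                        # 1.0
print(quality_score(2.0) < quality_score(1.0))   # True
```

A continuous score like this is what distinguishes the proposed assessment framework from binary correct/incorrect classification.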
Conventional approaches for mathematical modeling and representation of human movements are broadly classified into two categories: top-down approaches that introduce latent states for describing the temporal dynamics of the movements, and bottom-up approaches that employ local features for representing the movements. Commonly used methods in the first category include Kalman filters @cite_0 , hidden Markov models @cite_3 , and Gaussian mixture models @cite_36 . The main shortcomings of these methods originate from employing linear models for the transitions among the latent states (as in Kalman filters), or from adopting simple internal structures of the latent states (typical for hidden Markov models). The approaches based on extracting local features employ predefined criteria for identifying key points @cite_13 or a collection of statistics of the movements (e.g., mean, standard deviation, mode, median) @cite_22 . Such local features are typically motion-specific, which limits the ability to efficiently handle arbitrary spatio-temporal variations within movement data.
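The bottom-up, local-feature approach described above can be sketched as computing simple statistics per joint-coordinate trajectory; this sketch uses an illustrative subset (mean, standard deviation, median) of the statistics listed:

```python
import statistics

def local_features(trajectory):
    # Summarize one joint-coordinate trajectory with simple statistics,
    # as in bottom-up, local-feature movement representations.
    return {
        "mean": statistics.fmean(trajectory),
        "std": statistics.stdev(trajectory),
        "median": statistics.median(trajectory),
    }

feats = local_features([0.1, 0.4, 0.35, 0.2, 0.25])
print(feats["median"])  # 0.25
```

As the paragraph notes, such fixed summary statistics discard the temporal ordering of the movement, which is why they handle spatio-temporal variations poorly.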
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_3", "@cite_0", "@cite_13" ], "mid": [ "41309742", "2151214862", "2042919965", "2099000909" ], "abstract": [ "This paper presents an approach to predict future motion of a moving object based on its past movement. This approach is capable of learning object movement in an open environment, which is one of the limitions in some prior works. The proposed approach exploits the similarities of short-term movement behaviors by modeling a trajectory as concatenation of short segments. These short segments are assumed to be noisy realizations of latent segments. The transitions between the underlying latent segments are assumed to follow a Markov model. This predictive model was applied to two real-world applications and yielded favorable performance on both tasks.", "Abstract We describe algorithms for recognizing human motion in monocular video sequences, based on discriminative conditional random fields (CRFs) and maximum entropy Markov models (MEMMs). Existing approaches to this problem typically use generative structures like the hidden Markov model (HMM). Therefore, they have to make simplifying, often unrealistic assumptions on the conditional independence of observations given the motion class labels and cannot accommodate rich overlapping features of the observation or long-term contextual dependencies among observations at multiple timesteps. This makes them prone to myopic failures in recognizing many human motions, because even the transition between simple human activities naturally has temporal segments of ambiguity and overlap. The correct interpretation of these sequences requires more holistic, contextual decisions, where the estimate of an activity at a particular timestep could be constrained by longer windows of observations, prior and even posterior to that timestep. 
This would not be computationally feasible with a HMM which requires the enumeration of a number of observation sequences exponential in the size of the context window. In this work we follow a different philosophy: instead of restrictively modeling the complex image generation process – the observation, we work with models that can unrestrictedly take it as an input, hence condition on it. Conditional models like the proposed CRFs seamlessly represent contextual dependencies and have computationally attractive properties: they support efficient, exact recognition using dynamic programming, and their parameters can be learned using convex optimization. We introduce conditional graphical models as complementary tools for human motion recognition and present an extensive set of experiments that show not only how these can successfully classify diverse human activities like walking, jumping, running, picking or dancing, but also how they can discriminate among subtle motion styles like normal walks and wander walks.", "We propose a unified model for human motion prior with multiple actions. Our model is generated from sample pose sequences of the multiple actions, each of which is recorded from real human motion. The sample sequences are connected to each other by synthesizing a variety of possible transitions among the different actions. For kinematically-realistic transitions, our model integrates nonlinear probabilistic latent modeling of the samples and interpolation-based synthesis of the transition paths. While naive interpolation makes unexpected poses, our model rejects them (1) by searching for smooth and short transition paths by employing the good properties of the observation and latent spaces and (2) by avoiding using samples that unexpectedly synthesize the nonsmooth interpolation. 
The effectiveness of the model is demonstrated with real data and its application to human pose tracking.", "In this paper a bottom-up approach for human behaviour understanding is presented, using a multi-camera system. The proposed methodology, given a training set of normal data only, classifies behaviour as normal or abnormal, using two different criteria of human behaviour abnormality (short-term behaviour and trajectory of a person). Within this system an one-class support vector machine decides short-term behaviour abnormality, while we propose a methodology that lets a continuous Hidden Markov Model function as an one-class classifier for trajectories. Furthermore, an approximation algorithm, referring to the Forward Backward procedure of the continuous Hidden Markov Model, is proposed to overcome numerical stability problems in the calculation of probability of emission for very long observations. It is also shown that multiple cameras through homography estimation provide more precise position of the person, leading to more robust system performance. Experiments in an indoor environment without uniform background demonstrate the good performance of the system." ] }
1901.10435
2919910909
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
Recent developments in artificial NNs stirred significant interest in their application for modeling and analysis of human motions. Numerous works employed NNs for motion classification and applied the trained models for activity recognition, gait identification, gesture recognition, action localization, and related applications. NN-based motion classifiers utilizing different computational units have been proposed, including convolutional units @cite_4 , @cite_24 , long short-term memory (LSTM) recurrent units @cite_19 , @cite_27 , gated recurrent units @cite_16 , and combinations @cite_6 or modifications of these computational units @cite_31 . Also, NNs with different layer structures have been implemented, such as encoder-decoder networks @cite_27 , spatio-temporal graphs @cite_9 , and attention mechanism models @cite_33 , @cite_15 . Besides the task of classification, a body of work in the literature focused on modeling and representation of human movements for prediction of future motion patterns @cite_41 , synthesis of movement sequences @cite_27 , and density estimation @cite_39 . Conversely, little research has been conducted on the application of NNs for evaluation of movement quality, which can otherwise find use in various applications (physical rehabilitation being one of them).
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_15", "@cite_41", "@cite_9", "@cite_6", "@cite_39", "@cite_24", "@cite_19", "@cite_27", "@cite_31", "@cite_16" ], "mid": [ "2963447094", "2779380177", "2808523546", "2746131160" ], "abstract": [ "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. 
Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.", "Recent studies demonstrate the effectiveness of Recurrent Neural Networks (RNNs) for action recognition in videos. However, previous works mainly utilize video-level category as supervision to train RNNs, which may prohibit RNNs to learn complex motion structures along time. In this paper, we propose a recurrent pose-attention network (RPAN) to address this challenge, where we introduce a novel pose-attention mechanism to adaptively learn pose-related features at every time-step action prediction of RNNs. More specifically, we make three main contributions in this paper. Firstly, unlike previous works on pose-related action recognition, our RPAN is an end-to-end recurrent network which can exploit important spatial-temporal evolutions of human pose to assist action recognition in a unified framework. Secondly, instead of learning individual human-joint features separately, our pose-attention mechanism learns robust human-part features by sharing attention parameters partially on the semantically-related human joints. These human-part features are then fed into the human-part pooling layer to construct a highly-discriminative pose-related representation for temporal action modeling. Thirdly, one important byproduct of our RPAN is pose estimation in videos, which can be used for coarse pose annotation in action videos. We evaluate the proposed RPAN quantitatively and qualitatively on two popular benchmarks, i.e., Sub-JHMDB and PennAction. Experimental results show that RPAN outperforms the recent state-of-the-art methods on these challenging datasets.", "This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). 
In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs with each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, the label refinement using forced alignment and the sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing quite confusing classes due to the large vocabulary of Chinese characters, NN-based classifier should output 19900 HMM states as the classification units via a high-resolution modeling within each character. On the ICDAR 2013 competition task of CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by making a good trade-off between the computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM can achieve a best published CER of 3.53%.", "Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. 
However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities." ] }
1901.10435
2919910909
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
Several studies in the literature on exercise evaluation employed machine learning methods to classify individual repetitions of movements into correct and incorrect classes. Methods used for this purpose include an Adaboost classifier @cite_18 , @math -nearest neighbors @cite_30 , a Bayesian classifier @cite_26 , and an ensemble of multi-layer perceptron NNs @cite_32 . The outputs in these approaches are discrete class values of @math or @math (i.e., incorrect or correct repetition). However, these methods do not provide the capacity to detect varying levels of movement quality or identify incremental changes in patient performance over the program duration.
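The binary, per-repetition classification these methods perform can be sketched with k-nearest neighbors over repetition feature vectors (the feature vectors, labels, and neighborhood size here are illustrative assumptions, not data from the cited studies):

```python
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs, where label 0 marks an
    # incorrect repetition and 1 a correct one; return the majority label
    # among the k nearest training examples.
    neighbors = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)

train = [([0.0, 0.0], 0), ([0.1, 0.2], 0),
         ([1.0, 1.0], 1), ([0.9, 1.1], 1), ([1.1, 0.9], 1)]
print(knn_predict(train, [1.0, 0.95]))  # 1 (classified as a correct repetition)
```

The output is a hard 0/1 label, which illustrates the limitation noted above: no graded movement-quality score is produced.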
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_32" ], "mid": [ "1581587400", "2036735563", "2051177122", "2142279020" ], "abstract": [ "In classification, when the distribution of the training data among classes is uneven, the learning algorithm is generally dominated by the feature of the majority classes. The features in the minority classes are normally difficult to be fully recognized. In this paper, a method is proposed to enhance the classification accuracy for the minority classes. The proposed method combines Synthetic Minority Over-sampling Technique (SMOTE) and Complementary Neural Network (CMTNN) to handle the problem of classifying imbalanced data. In order to demonstrate that the proposed technique can assist classification of imbalanced data, several classification algorithms have been used. They are Artificial Neural Network (ANN), k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). The benchmark data sets with various ratios between the minority class and the majority class are obtained from the University of California Irvine (UCI) machine learning repository. The results show that the proposed combination techniques can improve the performance for the class imbalance problem.", "Machine learning often relies on costly labeled data, and this impedes its application to new classification and information extraction problems. This has motivated the development of methods for leveraging abundant prior knowledge about these problems, including methods for lightly supervised learning using model expectation constraints. Building on this work, we envision an interactive training paradigm in which practitioners perform evaluation, analyze errors, and provide and refine expectation constraints in a closed loop. In this paper, we focus on several key subproblems in this paradigm that can be cast as selecting a representative sample of the unlabeled data for the practitioner to inspect. 
To address these problems, we propose stratified sampling methods that use model expectations as a proxy for latent output variables. In classification and sequence labeling experiments, these sampling strategies reduce accuracy evaluation effort by as much as 53%, provide more reliable estimates of @math for rare labels, and aid in the specification and refinement of constraints.", "Background—Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives—The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods—Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results—Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions—Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. 
Freely available implementations are available in R and may be used for applications.", "Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks." ] }
1901.10435
2919910909
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task at hand and are validated on a dataset of rehabilitation exercises.
The majority of related studies employed distance functions for deriving movement quality scores. Concretely, @cite_22 used a variant of the Mahalanobis distance to quantify the level of correctness of rehabilitation movements, based on a calculated distance between patient-performed repetitions and a set of repetitions performed by a group of healthy individuals. Similarly, a body of work utilized the dynamic time warping (DTW) algorithm @cite_35 for calculating the distance between a patient's performance and healthy subjects' performance @cite_34 -- @cite_17 . The advantage of the distance functions is that they are not exercise-specific, and thus can be applied for assessment of new types of exercises. However, the distance functions also have shortcomings, because they do not attempt to derive a model of the rehabilitation data, and the distances are calculated at the level of individual time-steps in the raw measurements.
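The DTW distance referenced above can be sketched with the classic dynamic-programming recurrence over two 1-D sequences (practical implementations add windowing constraints and multidimensional per-frame costs):

```python
def dtw_distance(a, b):
    # O(len(a) * len(b)) dynamic programming: D[i][j] holds the minimal
    # cumulative cost of aligning prefix a[:i] with prefix b[:j].
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a movement still aligns at zero cost, unlike a
# point-by-point Euclidean comparison of the raw measurements.
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0
```

Because the alignment is computed directly on raw time-steps, this sketch also illustrates the shortcoming noted above: no model of the movement data is learned.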
{ "cite_N": [ "@cite_35", "@cite_34", "@cite_22", "@cite_17" ], "mid": [ "2008348094", "2778538683", "2106053110", "58346954" ], "abstract": [ "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. 
The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems.", "In this paper, a Hidden Semi-Markov Model (HSMM) based approach is proposed to evaluate and monitor body motion during a rehabilitation training program. The approach extracts clinically relevant motion features from skeleton joint trajectories, acquired by the RGB-D camera, and provides a score for the subject’s performance. The approach combines different aspects of rule and template based methods. The features have been defined by clinicians as exercise descriptors and are then assessed by a HSMM, trained upon an exemplar motion sequence. The reliability of the proposed approach is studied by evaluating its correlation with both a clinical assessment and a Dynamic Time Warping (DTW) algorithm, while healthy and neurological disabled people performed physical exercises. With respect to the discrimination between healthy and pathological conditions, the HSMM based method correlates better with the physician’s score than DTW. The study supports the use of HSMMs to assess motor performance providing a quantitative feedback to physiotherapist and patients. This result is particularly appropriate and useful for a remote assessment in the home.", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. 
As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.", "It has long been known that Dynamic Time Warping (DTW) is superior to Euclidean distance for classification and clustering of time series. However, until lately, most research has utilized Euclidean distance because it is more efficiently calculated. A recently introduced technique that greatly mitigates DTW's demanding CPU time has sparked a flurry of research activity. However, the technique and its many extensions still only allow DTW to be applied to moderately large datasets. In addition, almost all of the research on DTW has focused exclusively on speeding up its calculation; there has been little work done on improving its accuracy. In this work, we target the accuracy aspect of DTW performance and introduce a new framework that learns arbitrary constraints on the warping path of the DTW calculation. Apart from improving the accuracy of classification, our technique as a side effect speeds up DTW by a wide margin as well. We show the utility of our approach on datasets from diverse domains and demonstrate significant gains in accuracy and efficiency. 
Figure 1 (panel labels: Euclidean Distance; Dynamic Time Warping Distance): Note that while the two time series have an overall similar shape, they are not aligned in the time axis. Euclidean distance, which assumes the i-th point in one sequence is aligned with the i-th point in the other, will produce a pessimistic dissimilarity measure. The non-linear Dynamic Time Warped alignment allows a more intuitive distance measure to be calculated." ] }
1901.10435
2919910909
The article proposes a new framework for assessment of physical rehabilitation exercises based on a deep learning approach. The objective of the framework is automated quantification of patient performance in completing prescribed rehabilitation exercises, based on captured whole-body joint trajectories. The main components of the framework are metrics for measuring movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for regressing quality scores of input movements via supervised learning. Furthermore, an overview of the existing methods for modeling and evaluation of rehabilitation movements is presented, encompassing various distance functions, dimensionality-reduction techniques, and movement models employed for this problem in prior studies. To the best of our knowledge, this is the first work that implements deep neural networks for assessment of rehabilitation performance. Multiple deep network architectures are repurposed for the task in hand and are validated on a dataset of rehabilitation exercises.
Another body of research utilized probabilistic models for modeling and evaluation of rehabilitation movements. Studies based on hidden Markov models @cite_14 , @cite_37 and mixtures of Gaussian distributions @cite_39 typically perform a quality assessment based on the likelihood that the individual sequences are drawn from a trained model. Whereas probabilistic models are advantageous in handling the variability due to the stochastic character of human movements, models capable of hierarchical data representation can produce more reliable outcomes for movement quality assessment and generalize better to new exercises.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_39" ], "mid": [ "2778538683", "2951446714", "2036502167", "1968953480" ], "abstract": [ "Abstract In this paper, a Hidden Semi-Markov Model (HSMM) based approach is proposed to evaluate and monitor body motion during a rehabilitation training program. The approach extracts clinically relevant motion features from skeleton joint trajectories, acquired by the RGB-D camera, and provides a score for the subject’s performance. The approach combines different aspects of rule and template based methods. The features have been defined by clinicians as exercise descriptors and are then assessed by a HSMM, trained upon an exemplar motion sequence. The reliability of the proposed approach is studied by evaluating its correlation with both a clinical assessment and a Dynamic Time Warping (DTW) algorithm, while healthy and neurological disabled people performed physical exercises. With respect to the discrimination between healthy and pathological conditions, the HSMM based method correlates better with the physician’s score than DTW. The study supports the use of HSMMs to assess motor performance providing a quantitative feedback to physiotherapist and patients. This result is particularly appropriate and useful for a remote assessment in the home.", "We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. 
Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.", "We present algorithms for recognizing human motion in monocular video sequences, based on discriminative conditional random field (CRF) and maximum entropy Markov models (MEMM). Existing approaches to this problem typically use generative (joint) structures like the hidden Markov model (HMM). Therefore they have to make simplifying, often unrealistic assumptions on the conditional independence of observations given the motion class labels and cannot accommodate overlapping features or long term contextual dependencies in the observation sequence. In contrast, conditional models like the CRFs seamlessly represent contextual dependencies, support efficient, exact inference using dynamic programming, and their parameters can be trained using convex optimization. We introduce conditional graphical models as complementary tools for human motion recognition and present an extensive set of experiments that show how these typically outperform HMMs in classifying not only diverse human activities like walking, jumping. 
running, picking or dancing, but also for discriminating among subtle motion styles like normal walk and wander walk", "In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch re-estimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model." ] }
1901.10443
2911457688
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
The idea of adversarial machine learning was popularized by the introduction of Generative Adversarial Networks (GANs) @cite_13 . Based on similar ideas, multiple learning algorithms have been suggested to generate fair classifiers using adversaries.
{ "cite_N": [ "@cite_13" ], "mid": [ "2616969219", "2412510955", "2612866063", "2178768799" ], "abstract": [ "Generative adversarial networks (GANs) have great successes on synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign absolute binary predicate for individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences by giving a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make better assessment which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach.", "We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.", "We propose a novel technique to make neural network robust to adversarial examples using a generative adversarial network. We alternately train both classifier and generator networks. 
The generator network generates an adversarial perturbation that can easily fool the classifier network by using a gradient of each image. Simultaneously, the classifier network is trained to classify correctly both original and adversarial images generated by the generator. These procedures help the classifier network to become more robust to adversarial perturbations. Furthermore, our adversarial training framework efficiently reduces overfitting and outperforms other regularization methods such as Dropout. We applied our method to supervised learning for CIFAR datasets, and experimental results show that our method significantly lowers the generalization error of the network. To the best of our knowledge, this is the first method which uses GAN to improve supervised learning.", "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM)." ] }
1901.10443
2911457688
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
As mentioned earlier, @cite_0 proposed a model to learn a fair classifier based on the idea of adversarial debiasing. Their algorithm also uses gradient descent with the modified update but does not include any theoretical guarantees on convergence; experimentally, they present a model that only ensures equalized odds on the Adult dataset. @cite_34 use a similar adversarial model for the COMPAS dataset @cite_27 , but do not use the modified update for optimization.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_34" ], "mid": [ "2964268978", "2902208421", "2523469089", "2687693326" ], "abstract": [ "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Motivated by settings in which predictive models may be required to be non-discriminatory with respect to certain attributes (such as race), but even collecting the sensitive attribute may be forbidden or restricted, we initiate the study of fair learning under the constraint of differential privacy. We design two learning algorithms that simultaneously promise differential privacy and equalized odds, a 'fairness' condition that corresponds to equalizing false positive and negative rates across protected groups. 
Our first algorithm is a private implementation of the equalized odds post-processing approach of [, 2016]. This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'. Our second algorithm is a differentially private version of the oracle-efficient in-processing approach of [, 2018] that can be used to find the optimal fair classifier, given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm is more complex but need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time. We conclude with a brief experimental evaluation.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. 
The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the \"Frechet Inception Distance\" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark." ] }
1901.10443
2911457688
Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research. One important paradigm towards this has been providing algorithms for adversarially learning fair classifiers (, 2018; , 2018). We formulate the adversarial learning problem as a multi-objective optimization problem and find the fair model using gradient descent-ascent algorithm with a modified gradient update step, inspired by the approach of , 2018. We provide theoretical insight and guarantees that formalize the heuristic arguments presented previously towards taking such an approach. We test our approach empirically on the Adult dataset and synthetic datasets and compare against state of the art algorithms (, 2018; , 2018; , 2017). The results show that our models and algorithms have comparable or better accuracy than other algorithms while performing better in terms of fairness, as measured using statistical rate or false discovery rate.
The work of @cite_31 is perhaps the closest in terms of the techniques involved. They formulate their constrained optimization problem as an unconstrained one using a Lagrangian transformation. This leads to a min-max optimization problem, which they then solve using the saddle-point methods of @cite_9 @cite_17 . The key difference with respect to our work is that they do not aim to learn the sensitive attribute information from the classifier and instead just use a regularizer. Furthermore, their formulation does not support metrics like false discovery rate.
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_17" ], "mid": [ "2164571150", "2895628298", "2146989110", "2086953401" ], "abstract": [ "One central issue in practically deploying network coding is the adaptive and economic allocation of network resource. We cast this as an optimization, where the net-utility-the difference between a utility derived from the attainable multicast throughput and the total cost of resource provisioning-is maximized. By employing the MAX of flows characterization of the admissible rate region for multicasting, this paper gives a novel reformulation of the optimization problem, which has a separable structure. The Lagrangian relaxation method is applied to decompose the problem into subproblems involving one destination each. Our specific formulation of the primal problem results in two key properties. First, the resulting subproblem after decomposition amounts to the problem of finding a shortest path from the source to each destination. Second, assuming the net-utility function is strictly concave, our proposed method enables a near-optimal primal variable to be uniquely recovered from a near-optimal dual variable. A numerical robustness analysis of the primal recovery method is also conducted. For ill-conditioned problems that arise, for instance, when the cost functions are linear, we propose to use the proximal method, which solves a sequence of well-conditioned problems obtained from the original problem by adding quadratic regularization terms. Furthermore, the simulation results confirm the numerical robustness of the proposed algorithms. Finally, the proximal method and the dual subgradient method can be naturally extended to provide an effective solution for applications with multiple multicast sessions", "Min-max saddle-point problems have broad applications in many tasks in machine learning, e.g., distributionally robust learning, learning with non-decomposable loss, or learning with uncertain data. 
Although convex-concave saddle-point problems have been broadly studied with efficient algorithms and solid theories available, it remains a challenge to design provably efficient algorithms for non-convex saddle-point problems, especially when the objective function involves an expectation or a large-scale finite sum. Motivated by recent literature on non-convex non-smooth minimization, this paper studies a family of non-convex min-max problems where the minimization component is non-convex (weakly convex) and the maximization component is concave. We propose a proximally guided stochastic subgradient method and a proximally guided stochastic variance-reduced method for expected and finite-sum saddle-point problems, respectively. We establish the computation complexities of both methods for finding a nearly stationary point of the corresponding minimization problem.", "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. 
Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance.", "Minimization with orthogonality constraints (e.g., (X^ X = I )) and or spherical constraints (e.g., ( x _2 = 1 )) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform—a Crank-Nicolson-like update scheme—to preserve the constraints and based on it, develop curvilinear search algorithms with lower flops compared to those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from their state-of-the-art algorithms. For the quadratic assignment problem, a gap 0.842 to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 min on a typical laptop." ] }
1901.10423
2911776884
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
Segregation is a common behavior in nature, and it can be observed across scales. For example, cell segregation is a basic building block of embryogenesis in tissue generation processes @cite_3 @cite_0 , while social insects, such as ants, organize their brood into ring-like structures @cite_14 .
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_3" ], "mid": [ "2123457819", "2103852413", "2147673836", "2953060309" ], "abstract": [ "There are several examples in natural systems that exhibit the self-organizing behavior of segregation when different types of units interact with each other. One of the best examples is a system of biological cells of heterogeneous types that has the ability to self-organize into specific formations, form different types of organs and, ultimately, develop into a living organism. Previous research in this area has indicated that such segregations in biological cells and tissues are made possible because of the differences in adhesivity between various types of cells or tissues. Inspired by this differential adhesivity model, this technical note presents a decentralized approach utilizing differential artificial potential to achieve the segregation behavior in a swarm of heterogeneous robotic agents. The method is based on the proposition that agents experience different magnitudes of potential while interacting with agents of different types. Stability analysis of the system with the proposed approach in the Lyapunov sense is carried out in this technical note. Extensive simulations and analytical investigations suggest that the proposed method would lead a population of two types of agents to a segregated configuration.", "The establishment and maintenance of precisely organized tissues requires the formation of sharp borders between distinct cell populations. The maintenance of segregated cell populations is also required for tissue homeostasis in the adult, and deficiencies in segregation underlie the metastatic spreading of tumor cells. Three classes of mechanisms that underlie cell segregation and border formation have been uncovered. The first involves differences in cadherin-mediated cell–cell adhesion that establishes interfacial tension at the border between distinct cell populations. 
A second mechanism involves the induction of actomyosin-mediated contraction by intercellular signaling, such that cortical tension is generated at the border. Third, activation of Eph receptors and ephrins can lead to both decreased adhesion by triggering cleavage of E-cadherin, and to repulsion of cells by regulation of the actin cytoskeleton, thus preventing intermingling between cell populations. These mechanisms play crucial roles at distinct boundaries during development, and alterations in cadherin or Eph ephrin expression have been implicated in tumor metastasis.", "Aggregation is widespread in invertebrate societies and can appear in response to environmental heterogeneities or by attraction between individuals. We performed experiments with cockroach, Blattella germanica, larvae in a homogeneous environment to investigate the influence of interactions between individuals on aggregations. Different densities were tested. A first phase led to radial dispersion of larvae in relation to wall-following behaviours; the consequence of this process was a homogeneous distribution of larvae around the periphery of the arena. A second phase corresponded to angular reorganization of larvae leading to the formation of aggregates. The phenomenon was analysed both at the individual and collective levels. Individual cockroaches modulated their behaviour depending on the presence of other larvae in their vicinity: probabilities of stopping and resting times were both higher when the numbers of larvae were greater. We then developed an agent-based model implementing individual behavioural rules, all derived from experiments, to explain the aggregation dynamics at the collective level. 
This study supports evidence that aggregation relies on mechanisms of amplification, supported by interactions between individuals that follow simple rules based on local information and without knowledge of the global structure.", "Schelling's model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied it has largely resisted rigorous analysis, prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. In BK , Brandt, Immorlica, Kamath and Kleinberg provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model's behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an level of intolerance for neighbouring agents of opposite type leads almost certainly to segregation." ] }
1901.10423
2911776884
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
In robotics, segregation is a problem that has received little attention. The main methods proposed so far are based on some variation of the artificial potential approach @cite_17 , which assumes that the robots can detect each other and estimate relative distance vectors.
{ "cite_N": [ "@cite_17" ], "mid": [ "2160636171", "1686337294", "2168110171", "2170229019" ], "abstract": [ "We study a simple algorithm inspired by the Brazil nut effect for achieving segregation in a swarm of mobile robots. The algorithm lets each robot mimic a particle of a certain size and broadcast this information locally. The motion of each particle is controlled by three reactive behaviors: random walk, taxis, and repulsion by other particles. The segregation task requires the swarm to self-organize into a spatial arrangement in which the robots are ranked by particle size (e.g., annular structures or stripes).", "When a mixture of particles with different attributes undergoes vibration, a segregation pattern is often observed. For example, in muesli cereal packs, the largest particles---the Brazil nuts---tend to end up at the top. For this reason, the phenomenon is known as the Brazil nut effect. In previous research, an algorithm inspired by this effect was designed to produce segregation patterns in swarms of simulated agents that move on a horizontal plane. In this paper, we adapt this algorithm for implementation on robots with directional vision. We use the e-puck robot as a platform to test our implementation. In a swarm of e-pucks, different robots mimic disks of different sizes (larger than their physical dimensions). The motion of every robot is governed by a combination of three components: (i) attraction towards a point, which emulates the effect of a gravitational pull, (ii) random motion, which emulates the effect of vibration, and (iii) repulsion from nearby robots, which emulates the effect of collisions between disks. The algorithm does not require robots to discriminate between other robots; yet, it is capable of forming annular structures where the robots in each annulus represent disks of identical size. We report on a set of experiments performed with a group of 20 physical e-pucks. 
The results obtained in 100 trials of 20 minutes each show that the percentage of incorrectly-ordered pairs of disks from different groups decreases as the size ratio of disks in different groups is increased. In our experiments, this percentage was, on average, below 0.5 for size ratios from 3.0 to 5.0. Moreover, for these size ratios, all segregation errors observed were due to mechanical failures that caused robots to stop moving.", "This paper presents a new approach to the multi-robot map-alignment problem that enables teams of robots to build joint maps without initial knowledge of their relative poses. The key contribution of this work is an optimal algorithm for merging (not necessarily overlapping) maps that are created by different robots independently. Relative pose measurements between pairs of robots are processed to compute the coordinate transformation between any two maps. Noise in the robot-to-robot observations, propagated through the map-alignment process, increases the error in the position estimates of the transformed landmarks, and reduces the overall accuracy of the merged map. When there is overlap between the two maps, landmarks that appear twice provide additional information, in the form of constraints, which increases the alignment accuracy. Landmark duplicates are identified through a fast nearest-neighbor matching algorithm. In order to reduce the computational complexity of this search process, a kd-tree is used to represent the landmarks in the original map. The criterion employed for matching any two landmarks is the Mahalanobis distance. As a means of validation, we present experimental results obtained from two robots mapping an area of 4,800 m2", "This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM). The starting point is the single-robot Rao-Blackwellized particle filter described by , and three key generalizations are made. 
First, the particle filter is extended to handle multi-robot SLAM problems in which the initial pose of the robots is known (such as occurs when all robots start from the same location). Second, an approximation is introduced to solve the more general problem in which the initial pose of robots is not known a priori (such as occurs when the robots start from widely separated locations). In this latter case, it is assumed that pairs of robots will eventually encounter one another, thereby determining their relative pose. This relative attitude is used to initialize the filter, and subsequent observations from both robots are combined into a common map. Third and finally, a method is introduced to integrate observations collected prior to the first robot encounter, using the notion of a virtual robot travelling backwards in time. This novel approach allows one to integrate all data from all robots into a single common map." ] }
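The minimalistic controller described in the abstract above — a ternary sensor reading mapped directly to wheel speeds by a simple 'if' statement — can be sketched as follows. The specific wheel-speed values are illustrative assumptions; the paper derives the region of parameter space in which segregation actually emerges.

```python
# Hedged sketch of a ternary-sensor reactive controller for segregation.
# The sensor reports one of three readings: NOTHING (no robot ahead),
# KIN (a robot of the same group), or NONKIN (a robot of another group).
# The wheel-speed constants below are made-up placeholders.

NOTHING, KIN, NONKIN = 0, 1, 2

def wheel_speeds(reading):
    """Map a ternary sensor reading directly to (left, right) wheel speeds."""
    if reading == KIN:
        return (1.0, 1.0)    # drive straight toward a same-group robot
    elif reading == NONKIN:
        return (1.0, -1.0)   # turn in place away from a different-group robot
    else:
        return (1.0, 0.5)    # arc while searching when nothing is sensed
```

Note the contrast with the artificial potential approach: no relative distance vectors are estimated, and the control law consumes a single ternary value per step.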
1901.10423
2911776884
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
Groß @cite_12 proposed an algorithm inspired by the Brazil nut effect, in which the robots form regular layers by simulating gravity through a shared common direction. This study was later extended to work on e-puck robots @cite_18 . To simulate gravity, this approach requires the robots to share a common target vector, which can be obtained through a centralized controller or a distributed consensus algorithm.
{ "cite_N": [ "@cite_18", "@cite_12" ], "mid": [ "1686337294", "2160636171", "1483270512", "1965243636" ], "abstract": [ "When a mixture of particles with different attributes undergoes vibration, a segregation pattern is often observed. For example, in muesli cereal packs, the largest particles---the Brazil nuts---tend to end up at the top. For this reason, the phenomenon is known as the Brazil nut effect. In previous research, an algorithm inspired by this effect was designed to produce segregation patterns in swarms of simulated agents that move on a horizontal plane. In this paper, we adapt this algorithm for implementation on robots with directional vision. We use the e-puck robot as a platform to test our implementation. In a swarm of e-pucks, different robots mimic disks of different sizes (larger than their physical dimensions). The motion of every robot is governed by a combination of three components: (i) attraction towards a point, which emulates the effect of a gravitational pull, (ii) random motion, which emulates the effect of vibration, and (iii) repulsion from nearby robots, which emulates the effect of collisions between disks. The algorithm does not require robots to discriminate between other robots; yet, it is capable of forming annular structures where the robots in each annulus represent disks of identical size. We report on a set of experiments performed with a group of 20 physical e-pucks. The results obtained in 100 trials of 20 minutes each show that the percentage of incorrectly-ordered pairs of disks from different groups decreases as the size ratio of disks in different groups is increased. In our experiments, this percentage was, on average, below 0.5 for size ratios from 3.0 to 5.0. Moreover, for these size ratios, all segregation errors observed were due to mechanical failures that caused robots to stop moving.", "We study a simple algorithm inspired by the Brazil nut effect for achieving segregation in a swarm of mobile robots. 
The algorithm lets each robot mimic a particle of a certain size and broadcast this information locally. The motion of each particle is controlled by three reactive behaviors: random walk, taxis, and repulsion by other particles. The segregation task requires the swarm to self-organize into a spatial arrangement in which the robots are ranked by particle size (e.g., annular structures or stripes).", "Consider a group of N robots aiming to converge towards a single point. The robots cannot communicate, and their only input is obtained by visual sensors. A natural algorithm for the problem is based on requiring each robot to move towards the robots’ center of gravity. The paper proves the correctness of the center-of-gravity algorithm in the semi-synchronous model for any number of robots, and its correctness in the fully asynchronous model for two robots.", "This paper presents a distributed algorithm whereby a group of mobile robots self-organize and position themselves into forming a circle in a loosely synchronized environment. In spite of its apparent simplicity, the difficulty of the problem comes from the weak assumptions made on the system. In particular, robots are anonymous, oblivious (i.e., stateless), unable to communicate directly, and disoriented in the sense that they share no knowledge of a common coordinate system. Furthermore, robots' activations are not synchronized. More specifically, the proposed algorithm ensures that robots deterministically form a non-uniform circle in a finite number of steps and converges to a situation in which all robots are located evenly on the boundary of the circle." ] }
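The three reactive behaviors named in the abstract above (random walk, taxis, and repulsion between virtual particles of different sizes) combine into a single velocity update per robot. A rough sketch under stated assumptions follows; the gain constants are illustrative, not the published parameters.

```python
import numpy as np

# Sketch of one Brazil-nut-effect update: attraction toward a shared point
# (simulated gravity), a random term (simulated vibration), and repulsion
# whenever two virtual particles of radii radii[i], radii[j] overlap.
# Gains k_taxis, k_rand, k_rep are made-up placeholders.

def brazil_nut_step(pos, radii, center, rng, k_taxis=0.05, k_rand=0.02, k_rep=0.1):
    n = len(pos)
    vel = np.zeros_like(pos)
    vel += k_taxis * (center - pos)                 # taxis toward the common point
    vel += k_rand * rng.standard_normal(pos.shape)  # random walk (vibration)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            reach = radii[i] + radii[j]
            if 0 < dist < reach:                    # virtual particles overlap
                vel[i] += k_rep * (reach - dist) * d / dist
    return pos + vel
```

Because larger virtual radii push neighbors away sooner, iterating this update ranks robots by particle size around the attraction point, which is the annular arrangement the abstracts describe.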
1901.10423
2911776884
We present a decentralized algorithm to achieve segregation into an arbitrary number of groups with swarms of autonomous robots. The distinguishing feature of our approach is in the minimalistic assumptions on which it is based. Specifically, we assume that (i) Each robot is equipped with a ternary sensor capable of detecting the presence of a single nearby robot, and, if that robot is present, whether or not it belongs to the same group as the sensing robot; (ii) The robots move according to a differential drive model; and (iii) The structure of the control system is purely reactive, and it maps directly the sensor readings to the wheel speeds with a simple 'if' statement. We present a thorough analysis of the parameter space that enables this behavior to emerge, along with conditions for guaranteed convergence and a study of non-ideal aspects in the robot design.
Kumar @cite_6 introduced the concept of "differential potential", whereby two robots experience a different artificial potential depending on whether or not they belong to the same class. The convergence of this approach is guaranteed for two classes, but when more classes are employed, local minima prevent segregation from emerging.
{ "cite_N": [ "@cite_6" ], "mid": [ "2170229019", "2142336599", "1686337294", "2161395589" ], "abstract": [ "This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM). The starting point is the single-robot Rao-Blackwellized particle filter described by , and three key generalizations are made. First, the particle filter is extended to handle multi-robot SLAM problems in which the initial pose of the robots is known (such as occurs when all robots start from the same location). Second, an approximation is introduced to solve the more general problem in which the initial pose of robots is not known a priori (such as occurs when the robots start from widely separated locations). In this latter case, it is assumed that pairs of robots will eventually encounter one another, thereby determining their relative pose. This relative attitude is used to initialize the filter, and subsequent observations from both robots are combined into a common map. Third and finally, a method is introduced to integrate observations collected prior to the first robot encounter, using the notion of a virtual robot travelling backwards in time. This novel approach allows one to integrate all data from all robots into a single common map.", "Markov Random Field is now ubiquitous in many formulations of various vision problems. Recently, optimization of higher-order potentials became practical using higher-order graph cuts: the combination of i) the fusion move algorithm, ii) the reduction of higher-order binary energy minimization to first-order, and iii) the QPBO algorithm. In the fusion move, it is crucial for the success and efficiency of the optimization to provide proposals that fits the energies being optimized. For higher-order energies, it is even more so because they have richer class of null potentials. 
In this paper, we focus on the efficiency of the higher-order graph cuts and present a simple technique for generating proposal labelings that makes the algorithm much more efficient, which we empirically show using examples in stereo and image denoising.", "When a mixture of particles with different attributes undergoes vibration, a segregation pattern is often observed. For example, in muesli cereal packs, the largest particles---the Brazil nuts---tend to end up at the top. For this reason, the phenomenon is known as the Brazil nut effect. In previous research, an algorithm inspired by this effect was designed to produce segregation patterns in swarms of simulated agents that move on a horizontal plane. In this paper, we adapt this algorithm for implementation on robots with directional vision. We use the e-puck robot as a platform to test our implementation. In a swarm of e-pucks, different robots mimic disks of different sizes (larger than their physical dimensions). The motion of every robot is governed by a combination of three components: (i) attraction towards a point, which emulates the effect of a gravitational pull, (ii) random motion, which emulates the effect of vibration, and (iii) repulsion from nearby robots, which emulates the effect of collisions between disks. The algorithm does not require robots to discriminate between other robots; yet, it is capable of forming annular structures where the robots in each annulus represent disks of identical size. We report on a set of experiments performed with a group of 20 physical e-pucks. The results obtained in 100 trials of 20 minutes each show that the percentage of incorrectly-ordered pairs of disks from different groups decreases as the size ratio of disks in different groups is increased. In our experiments, this percentage was, on average, below 0.5 for size ratios from 3.0 to 5.0. 
Moreover, for these size ratios, all segregation errors observed were due to mechanical failures that caused robots to stop moving.", "We provide a general approach for learning robotic motor skills from human demonstration. To represent an observed movement, a non-linear differential equation is learned such that it reproduces this movement. Based on this representation, we build a library of movements by labeling each recorded movement according to task and context (e.g., grasping, placing, and releasing). Our differential equation is formulated such that generalization can be achieved simply by adapting a start and a goal parameter in the equation to the desired position values of a movement. For object manipulation, we present how our framework extends to the control of gripper orientation and finger position. The feasibility of our approach is demonstrated in simulation as well as on the Sarcos dextrous robot arm. The robot learned a pick-and-place operation and a water-serving task and could generalize these tasks to novel situations." ] }
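The differential potential idea above can be made concrete with a spring-like pairwise force whose equilibrium distance depends on class membership: same-class pairs settle closer together than different-class pairs, so classes cluster apart. The equilibrium distances and gain below are illustrative assumptions, not Kumar's published potential.

```python
import numpy as np

# Sketch of a class-dependent ("differential") pairwise force: attractive
# beyond the equilibrium distance d_eq, repulsive inside it, with a smaller
# d_eq for same-class pairs. All constants are made-up placeholders.

def differential_force(xi, xj, same_class, d_same=1.0, d_diff=3.0, k=0.1):
    d_eq = d_same if same_class else d_diff
    delta = xj - xi
    dist = np.linalg.norm(delta)
    if dist == 0:
        return np.zeros_like(xi)
    # Spring-like law: positive coefficient pulls xi toward xj, negative pushes away.
    return k * (dist - d_eq) * delta / dist
```

At an intermediate separation, a same-class neighbor attracts while a different-class neighbor repels; with more than two classes the summed forces can cancel, which is the local-minima failure mode noted above.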
1901.10469
2770825526
Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries.
Variable KD-tree algorithms were used by the Pan-STARRS Moving Object Processing System @cite_0 , @cite_7 . These techniques were developed in collaboration with the upcoming LSST (Large Synoptic Survey Telescope) @cite_3 .
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_7" ], "mid": [ "2094665988", "2100548006", "1999668761", "2099253838" ], "abstract": [ "We present an algorithm that efficiently constructs a visibility map for a given view of a polygonal scene. The view is represented by a BSP tree and the visibility map is obtained by postprocessing of that tree. The scene is organised in a kD-tree that is used to perform an approximate occlusion sweep. The occlusion sweep is interleaved with hierarchical visibility tests what results in expected output sensitive behaviour of the algorithm. We evaluate our implementation of the method on several scenes and demonstrate its application to discontinuity meshing.", "An efficient implementation of Reid's multiple hypothesis tracking (MHT) algorithm is presented in which the k-best hypotheses are determined in polynomial time using an algorithm due to Murly (1968). The MHT algorithm is then applied to several motion sequences. The MHT capabilities of track initiation, termination, and continuation are demonstrated together with the latter's capability to provide low level support of temporary occlusion of tracks. Between 50 and 150 corner features are simultaneously tracked in the image plane over a sequence of up to 51 frames. Each corner is tracked using a simple linear Kalman filter and any data association uncertainty is resolved by the MHT. Kalman filter parameter estimation is discussed, and experimental results show that the algorithm is robust to errors in the motion model. An investigation of the performance of the algorithm as a function of look-ahead (tree depth) indicates that high accuracy can be obtained for tree depths as shallow as three. Experimental results suggest that a real-time MHT solution to the motion correspondence problem is possible for certain classes of scenes.", "Abstract : We present new algorithms for the k-means clustering problem. 
They use the kd-tree data structure to reduce the large number of nearest-neighbor queries issued by the traditional algorithm. Sufficient statistics are stored in the nodes of the kd-tree. Then an analysis of the geometry of the current cluster centers results in great reduction of the work needed to update the centers. Our algorithms behave exactly as the traditional k-means algorithm. Proofs of correctness are included. The kd-tree can also be used to initialize the k-means starting centers efficiently. Our algorithms can be easily extended to provide fast ways of computing the error of a given cluster assignment regardless of the method in which those clusters were obtained. We also show how to use them in a setting which allows approximate clustering results, with the benefit of running faster. We have implemented and tested our algorithms on both real and simulated data. Results show a speedup factor of up to 170 on real astrophysical data, and superiority over the naive algorithm on simulated data in up to 5 dimensions. Our algorithms scale well with respect to the number of points and number of centers allowing for clustering with tens of thousands of centers.", "In this paper, we look at improving the KD-tree for a specific usage: indexing a large number of SIFT and other types of image descriptors. We have extended priority search, to priority search among multiple trees. By creating multiple KD-trees from the same data set and simultaneously searching among these trees, we have improved the KD-tree's search performance significantly. We have also exploited the structure in SIFT descriptors (or structure in any data set) to reduce the time spent in backtracking. By using Principal Component Analysis to align the principal axes of the data with the coordinate axes, we have further increased the KD-tree's search performance." ] }
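The reason KD-trees matter for moving-object pipelines like Pan-STARRS MOPS is that each detection in one exposure must be matched against all detections in the next within a maximum plausible motion, and the tree prunes most of that search. A minimal 2-D KD-tree with a radius query, written from scratch so it is self-contained, is sketched below; the coordinates and search radius in the usage are made up.

```python
# Toy 2-D KD-tree: build by alternating split axes, then collect all points
# within radius r of a target, pruning subtrees whose splitting plane is out
# of reach. Not the MOPS implementation; an illustration of the data structure.

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def query_radius(node, target, r, found=None):
    """Collect all indexed points within Euclidean distance r of target."""
    if found is None:
        found = []
    if node is None:
        return found
    p, axis = node["point"], node["axis"]
    if (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2 <= r * r:
        found.append(p)
    diff = target[axis] - p[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    query_radius(near, target, r, found)
    if abs(diff) <= r:  # splitting plane within reach: the far side may hold matches
        query_radius(far, target, r, found)
    return found
```

Indexing the detections of exposure N and querying with each detection of exposure N+1 then yields the candidate linkages in roughly logarithmic time per query instead of a full scan.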
1901.10469
2770825526
Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries.
Not only the major surveys but also some amateurs and small private surveys have developed automated systems for moving object detection. A group of mainly amateurs searches for NEAs in the TOTAS survey carried out with the ESA-OGS 1 m telescope, led by ESA @cite_4 . Another group, from Argentina, developed such a system based on the profile of each light source, represented by its FWHM (Full Width at Half Maximum) @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_4" ], "mid": [ "2251089998", "2770825526", "2050563626", "2037538319" ], "abstract": [ "Abstract ESA's 1-m telescope on Tenerife, the Optical Ground Station (OGS), has been used for observing NEOs since 2009. Part of the observational activity is the demonstration and test of survey observation strategies. During the observations, a total of 11 near-Earth objects have been discovered in about 360 h of observing time from 2009 to 2014. The survey observations are performed by imaging the same area in the sky 3 or 4 times within a 15–20 min time interval. A software robot analyses the images, searching for moving objects. The survey strategies and related data processing algorithms are described in this paper.", "Near Earth Asteroids (NEAs) are discovered daily, mainly by few major surveys, nevertheless many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated system of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEAs discovery. Some of these obtain their results based on the blink method in which a set of reduced images are shown one after another and the astronomer has to visually detect real moving objects in a series of images. This technique becomes harder with the increase in size of the CCD cameras. Aiming to replace manual detection we propose an automated pipeline prototype for asteroids detection, written in Python under Linux, which calls some 3rd party astrophysics libraries.", "With the deployment of large CCD mosaic cameras and their use in large-scale surveys to discover Solar system objects, there is a need for fast detection algorithms that can handle large data loads in a nearly automatic way. We present here an algorithm that we have developed. 
Our approach, by using two independent detection algorithms and combining the results, maintains high efficiency while producing low false-detection rates. These properties are crucial in order to reduce the operator time associated with searching these huge data sets. We have used this algorithm on two different mosaic data sets obtained using the CFH12K camera at the Canada–France–Hawaii Telescope (CFHT). Comparing the detection efficiency and false-detection rate of each individual algorithm with the combination of both, we show that our approach decreases the false detection rate by a factor of a few hundred to a thousand, while decreasing the ‘limiting magnitude’ (where the detection rate drops to 50 per cent) by only 0.1–0.3 mag. The limiting magnitude is similar to that of a human operator blinking the images. Our full pipeline also characterizes the magnitude efficiency of the entire system by implanting artificial objects in the data set. The detection portion of the package is publicly available.", "We have devised an automatic detection algorithm for unresolved moving objects, such as asteroids and comets. The algorithm uses many CCD images in order to detect very dark moving objects that are invisible on a single CCD image. We carried out a trial observation to investigate its usefulness, using a 35-cm telescope. By using the algorithm, we succeeded to detect asteroids down to about 21mag. This algorithm will contribute significantly to searches for near-Earth objects and to solar-system astronomy." ] }
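The automated replacement for visual blinking described above reduces to a cross-match: extract a source list per reduced image, and flag any source that has no counterpart at the same sky position in the other images of the series, after an FWHM cut that rejects artifacts. The matching tolerance and FWHM limits below are illustrative assumptions.

```python
# Sketch of an automated "blink" over source lists. Each source is a tuple
# (x, y, fwhm) in pixels. A source passing the FWHM cut that lacks a
# counterpart within `tol` pixels in every other frame is a moving-object
# candidate. tol and fwhm_range are made-up placeholders.

def moving_candidates(frames, tol=2.0, fwhm_range=(1.5, 6.0)):
    def has_counterpart(src, other):
        return any((src[0] - o[0]) ** 2 + (src[1] - o[1]) ** 2 <= tol ** 2
                   for o in other)

    candidates = []
    for i, frame in enumerate(frames):
        others = [f for j, f in enumerate(frames) if j != i]
        for src in frame:
            # FWHM cut: too sharp suggests a cosmic ray, too broad a galaxy.
            if not (fwhm_range[0] <= src[2] <= fwhm_range[1]):
                continue
            if all(not has_counterpart(src, o) for o in others):
                candidates.append((i, src))
    return candidates
```

A real pipeline would additionally require the candidate positions across frames to lie on a consistent linear motion, which is what separates one asteroid from several unrelated transients.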
1901.09960
2951458896
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
It is well-known that pre-training improves generalization when the dataset for the target task is extremely small. Prior work on transfer learning has analyzed the properties of this effect, such as when fine-tuning should stop @cite_4 and which layers should be fine-tuned @cite_27 . A series of ablation studies has shown that the benefits of pre-training are robust to significant variation in the dataset used for pre-training, including the removal of classes related to the target task. In our work, we observe similar robustness to changes in the dataset used for pre-training.
{ "cite_N": [ "@cite_27", "@cite_4" ], "mid": [ "2798381792", "2799854879", "2526782364", "2510153535" ], "abstract": [ "Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification challenge. Central to the success of our approach is a training scheme that uses higher image resolution and deals with the long-tailed distribution of training data. Next, we study transfer learning via fine-tuning from large scale datasets to small scale, domain-specific FGVC datasets. We propose a measure to estimate domain similarity via Earth Mover's Distance and demonstrate that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure. Our proposed transfer learning outperforms ImageNet pre-training and obtains state-of-the-art results on multiple commonly used FGVC datasets.", "Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models. To the best of our knowledge this practice has not been studied within the context of generative deep networks. Therefore, we study domain adaptation applied to image generation with generative adversarial networks. We evaluate several aspects of domain adaptation, including the impact of target domain size, the relative distance between source and target domain, and the initialization of conditional GANs. 
Our results show that using knowledge from pretrained networks can shorten the convergence time and can significantly improve the quality of the generated images, especially when the target data is limited. We show that these conclusions can also be drawn for conditional GANs even when the pretrained model was trained without conditioning. Our results also suggest that density may be more important than diversity and a dataset with one or few densely sampled classes may be a better source model than more diverse datasets such as ImageNet or Places.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? 
This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance." ] }
1901.09960
2951458896
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
show that networks overfit to the incorrect labels when trained for too long ( fig:trainforlonger ). This observation suggests pre-training as a potential fix, since one need only fine-tune for a short period to attain good performance. We show that pre-training not only improves performance with no label noise correction, but also complements methods proposed in prior work. Also note that most prior works @cite_32 @cite_3 @cite_10 only experiment with small-scale images since label corruption demonstrations can require training hundreds of models @cite_14 . Since pre-training is typically reserved for large-scale datasets, such works do not explore the impact of pre-training.
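The overfitting-to-noisy-labels dynamic described above can be illustrated on a toy problem (an editor's sketch, not the cited papers' setup: a linear regression analogue with all sizes, noise levels, and step counts chosen for illustration). Trained to convergence, the model fits the label noise and test error rises well above its early-stopped minimum, mirroring why brief fine-tuning can suffice.

```python
import numpy as np

# Toy analogue of "training for too long fits the noise": full-batch
# gradient descent on a linear model with noisy labels. An early-stopped
# iterate generalizes better than the converged one. All constants are
# illustrative choices, not values from any cited paper.

rng = np.random.default_rng(0)
n, d, sigma = 50, 40, 1.0
w_star = 0.5 * rng.normal(size=d) / np.sqrt(d)   # weak true signal
X = rng.normal(size=(n, d))
y = X @ w_star + sigma * rng.normal(size=n)      # noisy training labels
X_te = rng.normal(size=(2000, d))
y_te = X_te @ w_star                             # clean test targets

w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2             # step below 1/L
test_errs = []
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)                  # full-batch gradient descent
    test_errs.append(np.mean((X_te @ w - y_te) ** 2))

train_err = np.mean((X @ w - y) ** 2)
print(min(test_errs), test_errs[-1])             # early stop beats convergence
```

The gap between `min(test_errs)` and `test_errs[-1]` is the early-stopping benefit; with clean labels (sigma = 0) the gap essentially disappears.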
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_32", "@cite_3" ], "mid": [ "2964155802", "2526782364", "2964096266", "2530816535" ], "abstract": [ "Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class contained within the set of known classes in the training data. However, such an assumption is too restrictive for many applications, since samples associated with noisy labels might in fact possess a true class that is not present in the training data. We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial in order to make accurate predictions. To address this problem, we propose a novel iterative learning framework for training CNNs on datasets with open-set noisy labels. Our approach detects noisy labels and learns deep discriminative features in an iterative fashion. To benefit from the noisy label detection, we design a Siamese network to encourage clean labels and noisy labels to be dissimilar. A reweighting module is also applied to simultaneously emphasize the learning from clean labels and reduce the effect caused by noisy labels. Experiments on CIFAR-10, ImageNet and real-world noisy (web-search) datasets demonstrate that our proposed model can robustly train CNNs in the presence of a high proportion of open-set as well as closed-set noisy labels.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. 
In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Collecting large training datasets, annotated with high-quality labels, is costly and time-consuming. This paper proposes a novel framework for training deep convolutional neural networks from noisy labeled datasets that can be obtained cheaply. The problem is formulated using an undirected graphical model that represents the relationship between noisy and clean labels, trained in a semi-supervised setting. In our formulation, the inference over latent clean labels is tractable and is regularized during training using auxiliary sources of information. The proposed model is applied to the image labeling problem and is shown to be effective in labeling unseen images as well as reducing label noise in training on CIFAR-10 and MS COCO datasets.", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. 
We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels." ] }
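The self-ensembling target described in the last abstract above reduces to a small computation: keep an exponential moving average of each sample's predictions across epochs and bias-correct it before using it as a training target. A hedged sketch (alpha and the simulated noisy prediction stream are illustrative stand-ins, not the paper's values):

```python
import numpy as np

# Sketch of a temporal-ensembling-style consensus target: EMA of per-epoch
# predictions with startup bias correction. The "predictions" here are a
# simulated noisy stream around a fixed class distribution.

alpha = 0.6
Z = np.zeros(3)                                  # accumulated EMA of predictions
rng = np.random.default_rng(0)
true_probs = np.array([0.7, 0.2, 0.1])
for t in range(1, 51):
    z = true_probs + 0.1 * rng.normal(size=3)    # noisy prediction at epoch t
    Z = alpha * Z + (1 - alpha) * z              # exponential moving average
    target = Z / (1 - alpha ** t)                # correct the zero-init bias
print(target)                                    # close to true_probs
```

The averaged target is visibly less noisy than any single epoch's prediction, which is exactly why it serves as a better label for the unlabeled data.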
1901.09960
2951458896
Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
The susceptibility of neural networks to small, adversarially chosen input perturbations has received much attention. Over the years, many methods have been proposed as defenses against adversarial examples @cite_24 @cite_12 , but these are often circumvented in short order @cite_18 . In fact, the only defense widely regarded as having stood the test of time is the adversarial training procedure of . In this algorithm, white-box adversarial examples are created at each step of training and substituted in place of normal examples. This does provide some amount of adversarial robustness, but it requires substantially longer training times. In a later work, argue further progress on this problem may require significantly more task-specific data. However, given that data from a different distribution can be beneficial for a given task @cite_21 , it is conceivable that the need for task-specific data could be obviated with pre-training.
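The adversarial training loop described here (white-box adversarial examples generated at each step and substituted for the clean batch) can be sketched on a toy model. This is an editor's illustrative sketch, not the procedure of any cited paper: a logistic-regression classifier with FGSM-style perturbations, where the data and every hyperparameter are made up.

```python
import numpy as np

# Minimal adversarial-training sketch: at each step, perturb each input in
# the direction that increases its loss (FGSM), then take a gradient step
# on the perturbed batch instead of the clean one.

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 5, 0.1, 0.5

w_true = rng.normal(size=d)                       # toy separable data
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)                           # labels in {-1, +1}

def grad_loss(w, X, y):
    # Gradient of mean logistic loss log(1 + exp(-y * <x, w>)) w.r.t. w.
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))
    return (X * coeff[:, None]).mean(axis=0)

w = np.zeros(d)
for _ in range(100):
    # White-box FGSM: sign of the loss gradient w.r.t. x is -y * sign(w).
    X_adv = X + eps * (-y[:, None]) * np.sign(w)[None, :]
    w -= lr * grad_loss(w, X_adv, y)              # train on adversarial batch

clean_acc = (np.sign(X @ w) == y).mean()
adv_acc = (np.sign((X + eps * (-y[:, None]) * np.sign(w)) @ w) == y).mean()
print(clean_acc, adv_acc)
```

The extra gradient computation per step is where the "substantially longer training times" of full PGD adversarial training come from; real implementations take several inner ascent steps per batch rather than one FGSM step.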
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_21", "@cite_12" ], "mid": [ "2767962654", "2783555701", "2795995837", "2964077693" ], "abstract": [ "Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at this https URL.", "Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. 
In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed the first with 92.76% accuracy on a public MNIST black-box attack challenge.", "Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. 
In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%.", "Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7%." ] }
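The HSV transformation described in the abstract above (shift hue and saturation, keep value fixed) can be sketched per pixel with the standard-library `colorsys` module. A hedged sketch: the shift magnitudes are illustrative, and a real attack applies one sampled shift to every pixel of the image rather than to a single pixel.

```python
import colorsys
import random

# Per-pixel sketch of a semantic color-shift perturbation: RGB -> HSV,
# random hue shift (wrapping around), small saturation jitter, value
# channel left untouched, then HSV -> RGB.

def hue_shift_pixel(r, g, b, rng):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.random()) % 1.0                       # random hue shift
    s = min(1.0, max(0.0, s + (rng.random() - 0.5) * 0.2))
    return colorsys.hsv_to_rgb(h, s, v)                # value unchanged

shifted = hue_shift_pixel(0.8, 0.2, 0.2, random.Random(0))
print(shifted)
```

Because value equals the maximum RGB channel, the brightest channel of the output always matches that of the input, which is what keeps the perturbed image perceptually "the same object" while its colors change freely.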
1901.10159
2950990113
To understand the dynamics of optimization in deep neural networks, we develop a tool to study the evolution of the entire Hessian spectrum throughout the optimization process. Using this, we study a number of hypotheses concerning smoothness, curvature, and sharpness in the deep learning literature. We then thoroughly analyze a crucial structural feature of the spectra: in non-batch normalized networks, we observe the rapid appearance of large isolated eigenvalues in the spectrum, along with a surprising concentration of the gradient in the corresponding eigenspaces. In batch normalized networks, these two effects are almost absent. We characterize these effects, and explain how they affect optimization speed through both theory and experiments. As part of this work, we adapt advanced tools from numerical linear algebra that allow scalable and accurate estimation of the entire Hessian spectrum of ImageNet-scale neural networks; this technique may be of independent interest in other applications.
During the preparation of this paper, @cite_29 appeared on Arxiv which briefly introduces the same spectrum estimation methodology and studies the Hessian on small subsamples of MNIST and CIFAR-10 at the end of the training. In comparison, we provide a detailed exposition, error analysis and validation of the estimator in Section , and present optimization results on full datasets, up to and including ImageNet.
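The scalable spectrum estimation alluded to here rests on Krylov methods that need only matrix-vector products; for a neural network one substitutes Hessian-vector products for an explicit Hessian. Below is a minimal sketch of the core Lanczos step on a small random symmetric matrix standing in for the Hessian (sizes and the test operator are placeholders; full-spectrum estimators combine many such runs via stochastic Lanczos quadrature).

```python
import numpy as np

# Lanczos with full reorthogonalization: builds a small tridiagonal T whose
# eigenvalues (Ritz values) approximate the operator's spectrum, using only
# matvecs. For Hessians, matvec would be a Hessian-vector product.

def lanczos(matvec, dim, k, rng):
    q = rng.normal(size=dim)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    beta, q_prev = 0.0, np.zeros(dim)
    for _ in range(k):
        z = matvec(q) - beta * q_prev
        alpha = q @ z
        alphas.append(alpha)
        z -= alpha * q
        for qi in Q:                       # full reorthogonalization
            z -= (qi @ z) * qi
        beta = np.linalg.norm(z)
        if beta < 1e-12:                   # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, z / beta
        Q.append(q)
    off = betas[: len(alphas) - 1]
    return np.diag(alphas) + np.diag(off, 1) + np.diag(off, -1)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
A = (A + A.T) / 2                          # symmetric stand-in for a Hessian
T = lanczos(lambda v: A @ v, 200, 40, rng)
ritz = np.linalg.eigvalsh(T)
true_evals = np.linalg.eigvalsh(A)
print(ritz[-1], true_evals[-1])            # extreme Ritz values converge fast
```

Forty matvecs recover the extreme eigenvalues of a 200-dimensional operator to high accuracy, which is why isolated outlier eigenvalues of the Hessian, the feature highlighted in the abstract above, are exactly what this machinery detects cheaply.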
{ "cite_N": [ "@cite_29" ], "mid": [ "2963373786", "2801243570", "2950769435", "2776855315" ], "abstract": [ "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "Despite breakthroughs in image classification due to the evolution of deep learning and, in particular, convolutional neural networks (CNNs), state-of-the-art models only possess a very limited amount of rotational invariance. Known workarounds include artificial rotations of the training data or ensemble approaches, where several models are evaluated. These approaches either increase the workload of the training or inflate the number of parameters. Further approaches add rotational invariance by globally pooling over rotationally equivariant features. Instead, we propose to incorporate rotational invariance into the feature-extraction part of the CNN directly. This allows to train on unrotated data and perform well on a rotated test set. We use rotational convolutions and introduce a rotational pooling layer that performs a pooling over the back-rotated output feature maps. We show that when training on the original, unrotated MNIST training dataset, but evaluating on rotations of the MNIST test dataset, the error rate can be reduced substantially from 58.20% to 12.20%. 
Similar results are shown for the CIFAR-10 and CIFAR-100 datasets.", "We simulate the training of a set of state of the art neural networks, the Maxout networks (, 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct arithmetics: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those arithmetics, we assess the impact of the precision of the computations on the final error of the training. We find that very low precision computation is sufficient not just for running trained networks but also for training them. For example, almost state-of-the-art results were obtained on most datasets with 10 bits for computing activations and gradients, and 12 bits for storing updated parameters.", "Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate. Concretely, we propose SWATS, a simple strategy which switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks." ] }
1901.10172
2913075819
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classifications, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its effectiveness and superiority for fashion recognition.
has been widely studied in recent years. As a general and important computer vision task, it underpins far-reaching applications such as clothing retrieval @cite_3 , recognition @cite_7 , fashion landmark detection @cite_12 , and clothing recommendation @cite_4 . To solve this task, early methods @cite_2 relied heavily on hand-crafted features, while recent methods mainly exploit the power of deep neural networks and have reported record-breaking results. We outline some representative milestones below.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_2", "@cite_12" ], "mid": [ "2798734012", "2953391683", "2509155366", "1929903369" ], "abstract": [ "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former enforces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network.", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. 
We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "Automatically detecting illustrations is needed for the target system. Deep Convolutional Neural Networks have been successful in computer vision tasks. DCNN with fine-tuning outperformed the other models including handcrafted features. Systems for aggregating illustrations require a function for automatically distinguishing illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by designing basic features that were deemed useful for classification achieved an accuracy of only about 58%. On the other hand, deep neural networks had been successful in computer vision tasks, and convolutional neural networks (CNNs) had performed well at extracting such useful image features automatically. 
We evaluated alternative methods to implement this classification functionality with focus on deep neural networks. As the result of experiments, the method that fine-tuned deep convolutional neural network (DCNN) acquired 96.8% accuracy, outperforming the other models including the custom CNN models that were trained from scratch. We conclude that DCNN with fine-tuning is the best method for implementing a function for automatically distinguishing illustrations from photographs.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected." ] }
1901.10172
2913075819
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classifications, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its effectiveness and superiority for fashion recognition.
In @cite_18 , the authors leveraged a dual attribute-aware mechanism for clothing retrieval. In contrast, Liu @cite_7 presented a multi-branch network for clothing classification, retrieval, and landmark detection. @cite_5 utilized a model to precisely localize attributes for fashion search. More recently, Wang @cite_14 proposed a compact network for landmark detection and clothing classification. Although we are similarly inspired, @cite_14 treated landmark detection as an individual mid-level task rather than a component within a multi-task formulation. That work also neglected helpful information sharing between tasks, so its network is heavier, with far more parameters than ours. In contrast, our model integrates two parameter-free approaches to share information among tasks. Besides, we also manage to reduce the computational cost and enhance the adaptability of our model.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_14", "@cite_7" ], "mid": [ "2313077179", "2170881581", "2798734012", "2950940417" ], "abstract": [ "This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred as “image cosegmentation,” iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique [1] . In the second phase (i.e., “region colabeling”), we construct a multiimage graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluate our framework on the Fashionista dataset [2] , we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior compared with the previous methods. Furthermore, we apply our method on a challenging task, i.e., cross-domain clothing retrieval: given user photo depicting a clothing image, retrieving the same clothing items from online shopping stores based on the fine-grained parsing results.", "We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. 
This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting pose background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two subnetworks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268).", "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. 
We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former enforces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network.", "We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting pose background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. 
Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268)." ] }
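The triplet visual similarity constraint described in the DARN abstracts above can be sketched as a hinge-style ranking loss. This is only an illustrative sketch: the 2-D embeddings, margin value, and toy points below are invented for the example and are not taken from the cited papers.

```python
def sq_dist(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet ranking loss (conceptual form of the
    cross-domain retrieval constraint): the anchor should be closer
    to the positive than to the negative by at least `margin`."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# a street-photo embedding (anchor), the matching shop image (positive),
# and a non-matching shop image (negative) -- values are made up
print(triplet_loss((0.0, 0.0), (0.1, 0.0), (2.0, 0.0)))  # 0.0: margin satisfied
```

When the negative sits well outside the margin the loss is zero, so only hard triplets contribute gradient during ranking training.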
1901.10172
2913075819
In this paper, we present a two-stream multi-task network for fashion recognition. This task is challenging as fashion clothing always contains multiple attributes, which need to be predicted simultaneously for real-time industrial systems. To handle these challenges, we formulate fashion recognition as a multi-task learning problem, including landmark detection, category and attribute classification, and solve it with the proposed deep convolutional neural network. We design two knowledge sharing strategies which enable information transfer between tasks and improve the overall performance. The proposed model achieves state-of-the-art results on a large-scale fashion dataset compared to existing methods, which demonstrates its great effectiveness and superiority for fashion recognition.
Multi-task learning (MTL) has shown promising results in many applications; a comprehensive survey can be found in @cite_17 . For brevity, we only review MTL literature focusing on computer vision tasks. @cite_0 introduced a multi-task deep convolutional neural network to jointly achieve body-part and joint-point detection. In @cite_1 , a multi-linear multi-task method is proposed for person-specific facial action unit prediction. @cite_9 proposed a multi-task CNN for image-based multi-label attribute prediction. More interestingly, @cite_6 presented a recurrent framework to jointly estimate interaction, distance, standing orientation, relative orientation, and pose for immediacy prediction. Since MTL has proven its efficacy in many tasks, we are motivated to introduce this powerful technique into fashion recognition and implement it with the proposed two-stream multi-task network.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_17" ], "mid": [ "2588595876", "2608075338", "2952198537", "2410641892" ], "abstract": [ "This paper explores multi-task learning (MTL) for face recognition. First, we propose a multi-task convolutional neural network (CNN) for face recognition, where identity classification is the main task and pose, illumination, and expression (PIE) estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weights to each side task, which solves the crucial problem of balancing between different tasks in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses in a joint framework. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the PIE variations from the learnt identity features. Extensive experiments on the entire multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in multi-PIE for face recognition. Our approach is also applicable to in-the-wild data sets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.", "We propose a novel convolutional neural network approach to address the fine-grained recognition problem of multi-view dynamic facial action unit detection. We leverage recent gains in large-scale object recognition by formulating the task of predicting the presence or absence of a specific action unit in a still image of a human face as holistic classification. 
We then explore the design space of our approach by considering both shared and independent representations for separate action units, and also different CNN architectures for combining color and motion information. We then move to the novel setup of the FERA 2017 Challenge, in which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors that were trained for that specific viewpoint. Our approach is holistic, efficient, and modular, since new action units can be easily included in the overall system. Our approach significantly outperforms the baseline of the FERA 2017 Challenge, with an absolute improvement of 14 on the F1-metric. Additionally, it compares favorably against the winner of the FERA 2017 challenge. Code source is available at this https URL.", "We present a multi-purpose algorithm for simultaneous face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and face recognition using a single deep convolutional neural network (CNN). The proposed method employs a multi-task learning framework that regularizes the shared parameters of CNN and builds a synergy among different domains and tasks. Extensive experiments show that the network has a better understanding of face and achieves state-of-the-art result for most of these tasks.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. 
With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets." ] }
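The hard parameter sharing that runs through the MTL works above can be illustrated with a minimal forward pass: one shared feature extractor feeds two task-specific heads, so both tasks' losses would update the shared weights. The layer sizes, toy input, and head names are hypothetical; a real system would use a deep learning framework rather than this hand-rolled sketch.

```python
import random

random.seed(0)  # deterministic toy weights

def linear(x, W):
    # matrix-vector product: one output per weight row
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

d_in, d_shared = 4, 3
W_shared = [[random.uniform(-1, 1) for _ in range(d_in)] for _ in range(d_shared)]
W_task_a = [[random.uniform(-1, 1) for _ in range(d_shared)] for _ in range(2)]  # e.g. a 2-way category head
W_task_b = [[random.uniform(-1, 1) for _ in range(d_shared)] for _ in range(5)]  # e.g. a 5-way attribute head

x = [0.5, -1.0, 2.0, 0.1]       # made-up input features
h = relu(linear(x, W_shared))   # shared representation used by BOTH tasks
out_a = linear(h, W_task_a)     # task A logits
out_b = linear(h, W_task_b)     # task B logits
print(len(out_a), len(out_b))
```

Because `h` is computed once and reused, errors from either head back-propagate into `W_shared`, which is the mechanism by which one task regularizes the other.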
1901.10139
2913967867
It is an easy task for humans to learn and generalize a problem, perhaps due to their ability to visualize and imagine unseen objects and concepts. The power of imagination comes in handy especially when interpolating learnt experience (like seen examples) over new classes of a problem. For a machine learning system, acquiring such powers of imagination is still a hard task. We present a novel approach to low-shot learning that uses the idea of imagination over unseen classes in a classification problem setting. We combine a classifier with a 'visionary' (i.e., a GAN model) that teaches the classifier to generalize itself over new and unseen classes. This approach can be incorporated into a variety of problem settings where we need a classifier to learn and generalize itself to new and unseen classes. We compare the performance of classifiers with and without the visionary GAN model helping them.
The goal of few-shot learning is to learn a representation that generalizes across classes and handles even unseen examples from new classes @cite_4 . The few-shot problem has been studied from multiple perspectives, including optimization @cite_18 , metric learning @cite_19 , similarity matching @cite_17 , hierarchical graphical models @cite_11 , etc. We address zero-shot learning by generating examples from the unseen classes, thus helping the network gain at least some intuition for them and generalize better. Our work primarily deals with transformational generative learning, where we learn the transformations required to generate new instances from previously unseen classes @cite_13 @cite_26 @cite_1 @cite_21 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_21", "@cite_1", "@cite_17", "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2879454547", "2963960318", "2771119646", "2963220594" ], "abstract": [ "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. 
Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings.", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets -- CUB, FLO, SUN, AWA and ImageNet -- in both the zero-shot learning and generalized zero-shot learning settings.", "Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning problems. Our approach is based on a novel class adapting principal directions’ (CAPDs) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. Then, it learns how to combine these directions to obtain the principal direction for each unseen class such that the CAPD of the test image is aligned with the semantic embedding of the true class and opposite to the other classes. 
This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for the selection of the most useful seen classes for each unseen class to achieve robustness in zero-shot learning. Our method can update the unseen CAPD taking the advantages of few unseen images to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen–unseen diversity that significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few one-shot learning problems." ] }
1901.10139
2913967867
It is an easy task for humans to learn and generalize a problem, perhaps due to their ability to visualize and imagine unseen objects and concepts. The power of imagination comes in handy especially when interpolating learnt experience (like seen examples) over new classes of a problem. For a machine learning system, acquiring such powers of imagination is still a hard task. We present a novel approach to low-shot learning that uses the idea of imagination over unseen classes in a classification problem setting. We combine a classifier with a 'visionary' (i.e., a GAN model) that teaches the classifier to generalize itself over new and unseen classes. This approach can be incorporated into a variety of problem settings where we need a classifier to learn and generalize itself to new and unseen classes. We compare the performance of classifiers with and without the visionary GAN model helping them.
The goal of few-shot learning is to learn a representation that generalizes across classes and handles even unseen examples from new classes @cite_4 . The few-shot problem has been studied from multiple perspectives, including optimization @cite_18 , metric learning @cite_19 , similarity matching @cite_17 , hierarchical graphical models @cite_11 , etc. We address zero-shot learning by generating examples from the unseen classes, thus helping the network gain some intuition for them and generalize better. Our work primarily deals with transformational generative learning, where we learn the transformations required to generate new instances from previously unseen classes and then use them in classifier models @cite_13 @cite_26 @cite_1 @cite_21 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_21", "@cite_1", "@cite_17", "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2963960318", "2771119646", "2771620762", "2799215068" ], "abstract": [ "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings.", "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. 
Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets -- CUB, FLO, SUN, AWA and ImageNet -- in both the zero-shot learning and generalized zero-shot learning settings.", "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g.Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. 
Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning." ] }
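A toy sketch of the feature-synthesis idea behind the GAN-based zero-shot methods cited above: features for unseen classes are synthesized from class-level attribute vectors, and a classifier is then fit on the synthetic features alone. The trivial noise-adding "generator", the class names, and the attribute values are all invented stand-ins for a trained conditional GAN.

```python
import random

# class-level attribute vectors for two UNSEEN classes (made-up values)
attrs = {"zebra": (1.0, 0.0), "tiger": (0.0, 1.0)}

def generate(attr, n=5, noise=0.1):
    """Hypothetical 'generator': class attributes plus small Gaussian
    noise, standing in for samples drawn from a conditional GAN."""
    rng = random.Random(0)
    return [tuple(a + rng.gauss(0, noise) for a in attr) for _ in range(n)]

# fit a nearest-centroid classifier purely on synthesized features
centroids = {}
for cls, a in attrs.items():
    feats = generate(a)
    centroids[cls] = tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(x):
    return min(centroids,
               key=lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[c])))

print(classify((0.9, 0.1)))  # a real test feature near the "zebra" attributes
```

With pseudo data in place, zero-shot recognition reduces to ordinary supervised classification, which is exactly the shortcut the cited feature-generating approaches exploit.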
1901.10173
2912152488
Recent studies have shown that imbalance ratio is not the only cause of the performance loss of a classifier in imbalanced data classification. In fact, other data factors, such as small disjuncts, noise and overlapping, also play a role in tandem with imbalance ratio, which makes the problem difficult. Thus far, the empirical studies have demonstrated only the relationship between the imbalance ratio and other data factors. To the best of our knowledge, there is no measurement of the extent of influence of class imbalance on the classification performance of imbalanced data. Further, it is also unknown for a dataset which data factor is actually the main barrier for classification. In this paper, we focus on the Bayes optimal classifier and study the influence of class imbalance from a theoretical perspective. Accordingly, we propose an instance measure called Individual Bayes Imbalance Impact Index ( @math ) and a data measure called Bayes Imbalance Impact Index ( @math ). @math and @math reflect the extent of influence purely by the factor of imbalance in terms of each minority class sample and the whole dataset, respectively. Therefore, @math can be used as an instance complexity measure of imbalance and @math is a criterion to show the degree to which imbalance deteriorates classification. As a result, we can use @math to judge whether it is worth using imbalance recovery methods, such as sampling or cost-sensitive methods, to recover the performance loss of a classifier. The experiments show that @math is highly consistent with the increase of prediction score made by the imbalance recovery methods and @math is highly consistent with the improvement of F1 score made by the imbalance recovery methods on both synthetic and real benchmark datasets.
The second data factor is noise. Noisy samples are usually defined as samples from one class located deep inside the other class @cite_17 . The existence of noisy samples in the minority class makes blind oversampling methods like SMOTE generate even more noise, so applying oversampling to a noisy minority class may even degrade performance @cite_2 . Therefore, data cleaning methods such as Tomek links @cite_9 and ENN @cite_13 are usually adopted to remove the noise. Another straightforward way to find noise is to collect the samples that are wrongly classified by a @math NN classifier @cite_0 . Van Hulse and Khoshgoftaar experimented on data with artificial noise @cite_34 , where class noise is injected into real datasets by randomly relabelling samples before training. The results show that the minority class is severely affected by noise for all compared classifiers.
{ "cite_N": [ "@cite_9", "@cite_0", "@cite_2", "@cite_34", "@cite_13", "@cite_17" ], "mid": [ "1595276678", "1993220166", "2087240369", "1991181258" ], "abstract": [ "Data mining and knowledge discovery aim at producing useful and reliable models from the data. Unfortunately some databases contain noisy data which perturb the generalization of the models. An important source of noise consists of mislabelled training instances. We offer a new approach which deals with improving classification accuracies by using a preliminary filtering procedure. An example is suspect when in its neighbourhood defined by a geometrical graph the proportion of examples of the same class is not significantly greater than in the database itself. Such suspect examples in the training data can be removed or relabelled. The filtered training set is then provided as input to learning algorithms. Our experiments on ten benchmarks of UCI Machine Learning Repository using 1-NN as the final algorithm show that removal gives better results than relabelling. Removing allows maintaining the generalization error rate when we introduce from 0 to 20 of noise on the class, especially when classes are well separable. The filtering method proposed is finally compared to the relaxation relabelling schema.", "There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is related to class imbalance in which examples in training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real world data describing an infrequent but important event, the learning system may have difficulties to learn the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. 
Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, allying a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is very competitive to more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex then the ones induced from original data. Random over-sampling usually produced the smallest increase in the mean number of induced rules and Smote + ENN the smallest increase in the mean number of conditions per rule, when compared among the investigated over-sampling methods.", "Imbalanced learning problems contain an unequal distribution of data samples among different classes and pose a challenge to any classifier as it becomes hard to learn the minority class samples. Synthetic oversampling methods address this problem by generating the synthetic minority class samples to balance the distribution between the samples of the majority and minority classes. 
This paper identifies that most of the existing oversampling methods may generate the wrong synthetic minority samples in some scenarios and make learning tasks harder. To this end, a new method, called Majority Weighted Minority Oversampling TEchnique (MWMOTE), is presented for efficiently handling imbalanced learning problems. MWMOTE first identifies the hard-to-learn informative minority class samples and assigns them weights according to their euclidean distance from the nearest majority class samples. It then generates the synthetic samples from the weighted informative minority class samples using a clustering approach. This is done in such a way that all the generated samples lie inside some minority class cluster. MWMOTE has been evaluated extensively on four artificial and 20 real-world data sets. The simulation results show that our method is better than or comparable with some other existing methods in terms of various assessment metrics, such as geometric mean (G-mean) and area under the receiver operating curve (ROC), usually known as area under curve (AUC).", "Classification using class-imbalanced data is biased in favor of the majority class. The bias is even larger for high-dimensional data, where the number of variables greatly exceeds the number of samples. The problem can be attenuated by undersampling or oversampling, which produce class-balanced data. Generally undersampling is helpful, while random oversampling is not. Synthetic Minority Oversampling TEchnique (SMOTE) is a very popular oversampling method that was proposed to improve random oversampling but its behavior on high-dimensional data has not been thoroughly investigated. In this paper we investigate the properties of SMOTE from a theoretical and empirical point of view, using simulated and real high-dimensional data. 
While in most cases SMOTE seems beneficial with low-dimensional data, it does not attenuate the bias towards the classification in the majority class for most classifiers when data are high-dimensional, and it is less effective than random undersampling. SMOTE is beneficial for k-NN classifiers for high-dimensional data if the number of variables is reduced performing some type of variable selection; we explain why, otherwise, the k-NN classification is biased towards the minority class. Furthermore, we show that on high-dimensional data SMOTE does not change the class-specific mean values while it decreases the data variability and it introduces correlation between samples. We explain how our findings impact the class-prediction for high-dimensional data. In practice, in the high-dimensional setting only k-NN classifiers based on the Euclidean distance seem to benefit substantially from the use of SMOTE, provided that variable selection is performed before using SMOTE; the benefit is larger if more neighbors are used. SMOTE for k-NN without variable selection should not be used, because it strongly biases the classification towards the minority class." ] }
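The SMOTE-style interpolation discussed above can be sketched in a few lines: each synthetic minority sample lies on the segment between a minority point and one of its k nearest minority-class neighbours. This is an illustrative sketch only; the reference algorithm also handles per-feature scaling and class-ratio targets, and the toy points below are invented.

```python
import random

def smote(minority, k=3, n_new=4, seed=0):
    """Minimal SMOTE-style oversampling sketch: interpolate between a
    minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority-class neighbours of x (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(len(smote(minority)))  # 4 synthetic minority samples
```

The sketch also makes the noise argument above concrete: if one of the seed points were a mislabelled majority-class sample, every interpolation anchored on it would plant new synthetic points deep inside majority territory, which is why data cleaning before oversampling helps.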
1901.10173
2912152488
Recent studies have shown that imbalance ratio is not the only cause of the performance loss of a classifier in imbalanced data classification. In fact, other data factors, such as small disjuncts, noise and overlapping, also play a role in tandem with imbalance ratio, which makes the problem difficult. Thus far, the empirical studies have demonstrated only the relationship between the imbalance ratio and other data factors. To the best of our knowledge, there is no measurement of the extent of influence of class imbalance on the classification performance of imbalanced data. Further, it is also unknown for a dataset which data factor is actually the main barrier for classification. In this paper, we focus on the Bayes optimal classifier and study the influence of class imbalance from a theoretical perspective. Accordingly, we propose an instance measure called Individual Bayes Imbalance Impact Index ( @math ) and a data measure called Bayes Imbalance Impact Index ( @math ). @math and @math reflect the extent of influence purely by the factor of imbalance in terms of each minority class sample and the whole dataset, respectively. Therefore, @math can be used as an instance complexity measure of imbalance and @math is a criterion to show the degree to which imbalance deteriorates classification. As a result, we can use @math to judge whether it is worth using imbalance recovery methods, such as sampling or cost-sensitive methods, to recover the performance loss of a classifier. The experiments show that @math is highly consistent with the increase of prediction score made by the imbalance recovery methods and @math is highly consistent with the improvement of F1 score made by the imbalance recovery methods on both synthetic and real benchmark datasets.
Before we close this section, we would like to point out that another somewhat related area is data complexity. A list of complexity measures is proposed in @cite_28, organized into different feature groups. The measures are used to study the essential structure of data and guide classifier selection for specific problems. Recently, Smith et al. @cite_16 have extended data complexity from the data level to the instance level. They proposed a group of complexity measures that can be calculated for each instance, and then analyzed the correlation among those measures. The instance-level complexity measures can be used for data cleaning, filtering out the most difficult samples in the data. However, there is no specific research on data complexity for imbalanced data, and the existing complexity measures are not suitable for describing to what extent the data are influenced by imbalance.
{ "cite_N": [ "@cite_28", "@cite_16" ], "mid": [ "2022477494", "2006210519", "2124710650", "2091007025" ], "abstract": [ "Most data complexity studies have focused on characterizing the complexity of the entire data set and do not provide information about individual instances. Knowing which instances are misclassified and understanding why they are misclassified and how they contribute to data set complexity can improve the learning process and could guide the future development of learning algorithms and data analysis methods. The goal of this paper is to better understand the data used in machine learning problems by identifying and analyzing the instances that are frequently misclassified by learning algorithms that have shown utility to date and are commonly used in practice. We identify instances that are hard to classify correctly (instance hardness) by classifying over 190,000 instances from 64 data sets with 9 learning algorithms. We then use a set of hardness measures to understand why some instances are harder to classify correctly than others. We find that class overlap is a principal contributor to instance hardness. We seek to integrate this information into the training process to alleviate the effects of class overlap and present ways that instance hardness can be used to improve learning.", "Real-world datasets commonly have issues with data imbalance. There are several approaches such as weighting, sub-sampling, and data modeling for handling these data. Learning in the presence of data imbalances presents a great challenge to machine learning. Techniques such as support-vector machines have excellent performance for balanced data, but may fail when applied to imbalanced datasets. In this paper, we propose a new undersampling technique for selecting instances from the majority class. The performance of this approach was evaluated in the context of several real biological imbalanced data. 
The ratios of negative to positive samples vary from 9:1 to 100:1. Useful classifiers have high sensitivity and specificity. Our results demonstrate that the proposed selection technique improves the sensitivity compared to weighted support-vector machine and available results in the literature for the same datasets.", "Classification with imbalanced datasets poses a new challenge for researchers in the framework of machine learning. This problem appears when the number of patterns that represent one of the classes of the dataset (usually the concept of interest) is much lower than in the remaining classes. Thus, the learning model must be adapted to this situation, which is very common in real applications. In this paper, a dynamic over-sampling procedure is proposed for improving the classification of imbalanced datasets with more than two classes. This procedure is incorporated into a memetic algorithm (MA) that optimizes radial basis function neural networks (RBFNNs). To handle class imbalance, the training data are resampled in two stages. In the first stage, an over-sampling procedure is applied to the minority class to balance in part the size of the classes. Then, the MA is run and the data are over-sampled in different generations of the evolution, generating new patterns of the minimum sensitivity class (the class with the worst accuracy for the best RBFNN of the population). The methodology proposed is tested using 13 imbalanced benchmark classification datasets from well-known machine learning problems and one complex problem of microbial growth. It is compared to other neural network methods specifically designed for handling imbalanced data. These methods include different over-sampling procedures in the preprocessing stage, a threshold-moving method where the output threshold is moved toward inexpensive classes and ensemble approaches combining the models obtained with these techniques. 
The results show that our proposal is able to improve the sensitivity in the generalization set and obtains both a high accuracy level and a good classification level for each class.", "The class imbalance problems have been reported to severely hinder classification performance of many standard learning algorithms, and have attracted a great deal of attention from researchers of different fields. Therefore, a number of methods, such as sampling methods, cost-sensitive learning methods, and bagging and boosting based ensemble methods, have been proposed to solve these problems. However, these conventional class imbalance handling methods might suffer from the loss of potentially useful information, unexpected mistakes or increasing the likelihood of overfitting because they may alter the original data distribution. Thus we propose a novel ensemble method, which firstly converts an imbalanced data set into multiple balanced ones and then builds a number of classifiers on these multiple data with a specific classification algorithm. Finally, the classification results of these classifiers for new data are combined by a specific ensemble rule. In the empirical study, different class imbalance data handling methods including three conventional sampling methods, one cost-sensitive learning method, six Bagging and Boosting based ensemble methods, our previous method EM1vs1 and two fuzzy-rule based classification methods were compared with our method. The experimental results on 46 imbalanced data sets show that our proposed method is usually superior to the conventional imbalance data handling methods when solving the highly imbalanced problems. HighlightsWe propose a novel ensemble method to handle imbalanced binary data.The method turns imbalanced data learning into multiple balanced data learning.Our method usually performs better than the conventional methods on imbalanced data." ] }
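The instance-hardness analysis cited above ties misclassification mainly to class overlap. One instance-level measure used in that line of work is k-Disagreeing Neighbors (kDN): the fraction of an instance's k nearest neighbours that carry a different label. A small self-contained sketch (my own implementation, not the authors' code):

```python
import numpy as np

def kdn(X, y, k=3):
    """k-Disagreeing Neighbors: for each instance, the fraction of its k
    nearest neighbours (Euclidean) whose label differs. Values near 1
    indicate class overlap, a principal driver of instance hardness."""
    X, y = np.asarray(X, float), np.asarray(y)
    X = X.reshape(len(X), -1)                  # ensure a 2-D feature matrix
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude the point itself
    nn = np.argsort(d, axis=1)[:, :k]          # indices of k nearest neighbours
    return (y[nn] != y[:, None]).mean(axis=1)
```

Instances with kDN close to 1 sit deep inside the opposite class's region; filtering them out is the kind of data cleaning the survey paragraph mentions.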
1901.09955
2913996813
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
The problem of characterizing graphs with crossing number at least two was already studied by Arroyo and Richter @cite_0 in the context of .
{ "cite_N": [ "@cite_0" ], "mid": [ "2028446993", "2161036821", "2507315201", "2557594551" ], "abstract": [ "It was proved by [M.R. Garey, D.S. Johnson, Crossing number is NP-complete, SIAM J. Algebraic Discrete Methods 4 (1983) 312-316] that computing the crossing number of a graph is an NP-hard problem. Their reduction, however, used parallel edges and vertices of very high degrees. We prove here that it is NP-hard to determine the crossing number of a simple 3-connected cubic graph. In particular, this implies that the minor-monotone version of the crossing number problem is also NP-hard, which has been open till now.", "In this paper we study the problem of computing subgraphs of a certain configuration in a given topological graph G such that the number of crossings in the subgraph is minimum. The configurations that we consider are spanning trees, s-t paths, cycles, matchings, and κ-factors for κ ∈ {1,2}. We show that it is NP-hard to approximate the minimum number of crossings for these configurations within a factor of k^(1-ε) for any ε > 0, where k is the number of crossings in G. We then give a simple fixed-parameter algorithm that tests in O*(2^k) time whether G has a crossing-free configuration for any of the above, where the O*-notation neglects polynomial terms. For some configurations we have faster algorithms. The respective running times are O*(1.9999992^k) for spanning trees and O*((√3)^k) for s-t paths and cycles. For spanning trees we also have an O*(1.968^k)-time Monte-Carlo algorithm. Each O*(β^k)-time decision algorithm can be turned into an O*((β+1)^k)-time optimization algorithm that computes a configuration with the minimum number of crossings.", "It is very well-known that there are precisely two minimal non-planar graphs: K_5 and K_{3,3} (degree 2 vertices being irrelevant in this context). 
In the language of crossing numbers, these are the only 1-crossing-critical graphs: they each have crossing number at least one, and every proper subgraph has crossing number less than one. In 1987, Kochol exhibited an infinite family of 3-connected, simple, 2-crossing-critical graphs. In this work, we: (i) determine all the 3-connected 2-crossing-critical graphs that contain a subdivision of the Möbius ladder V_10; (ii) show how to obtain all the not 3-connected 2-crossing-critical graphs from the 3-connected ones; (iii) show that there are only finitely many 3-connected 2-crossing-critical graphs not containing a subdivision of V_10; and (iv) determine all the 3-connected 2-crossing-critical graphs that do not contain a subdivision of V_8.", "Our main result includes the following, slightly surprising, fact: a 4-connected nonplanar graph G has crossing number at least 2 if and only if, for every pair e,f of edges having no common incident vertex, there are vertex-disjoint cycles in G with one containing e and the other containing f." ] }
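The crossing number minimises crossings over all drawings, which — as the first abstract above notes — is NP-hard to compute. Counting the crossings of one *given* straight-line drawing, however, is elementary. A small illustrative sketch using orientation tests (names are mine):

```python
from itertools import combinations

def crossings(pos, edges):
    """Count pairwise crossings of a straight-line drawing.
    pos maps vertex -> (x, y); edges is a list of vertex pairs.
    Edge pairs sharing an endpoint are skipped (they meet, not cross)."""
    def orient(a, b, c):
        # signed area of triangle abc: >0 left turn, <0 right turn
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def proper_cross(p, q, r, s):
        # segments pq and rs cross iff each straddles the other's line
        return (orient(p, q, r) * orient(p, q, s) < 0 and
                orient(r, s, p) * orient(r, s, q) < 0)
    total = 0
    for (u, v), (w, x) in combinations(edges, 2):
        if {u, v} & {w, x}:
            continue
        if proper_cross(pos[u], pos[v], pos[w], pos[x]):
            total += 1
    return total
```

For example, K_4 drawn with its four vertices on a square has one crossing (the two diagonals), while redrawing with one vertex inside the triangle of the others gives zero — witnessing that K_4 is planar.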
1901.09955
2913996813
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
Akka, Jendroľ, Klešč, and Panshetty @cite_15 obtained a characterization of planar graphs whose line graph has crossing number two.
{ "cite_N": [ "@cite_15" ], "mid": [ "2507315201", "2049597710", "2090034967", "2126687412" ], "abstract": [ "It is very well-known that there are precisely two minimal non-planar graphs: K_5 and K_{3,3} (degree 2 vertices being irrelevant in this context). In the language of crossing numbers, these are the only 1-crossing-critical graphs: they each have crossing number at least one, and every proper subgraph has crossing number less than one. In 1987, Kochol exhibited an infinite family of 3-connected, simple, 2-crossing-critical graphs. In this work, we: (i) determine all the 3-connected 2-crossing-critical graphs that contain a subdivision of the Möbius ladder V_10; (ii) show how to obtain all the not 3-connected 2-crossing-critical graphs from the 3-connected ones; (iii) show that there are only finitely many 3-connected 2-crossing-critical graphs not containing a subdivision of V_10; and (iv) determine all the 3-connected 2-crossing-critical graphs that do not contain a subdivision of V_8.", "In this paper we deduce a necessary and sufficient condition for a line graph to have crossing number 1. In addition, we prove that the line graph of any nonplanar graph has crossing number greater than 2.", "It is proved that every cubic graph with crossing number at least two contains a subdivision of one of eight graphs.", "We prove that, for every positive integer k, there is an integer N such that every 4-connected non-planar graph with at least N vertices has a minor isomorphic to K_{4,k}, the graph obtained from a cycle of length 2k+1 by adding an edge joining every pair of vertices at distance exactly k, or the graph obtained from a cycle of length k by adding two vertices adjacent to each other and to every vertex on the cycle. We also prove a version of this for subdivisions rather than minors, and relax the connectivity to allow 3-cuts with one side planar and of bounded size. 
We deduce that for every integer k there are only finitely many 3-connected 2-crossing-critical graphs with no subdivision isomorphic to the graph obtained from a cycle of length 2k by joining all pairs of diagonally opposite vertices." ] }
1901.09955
2913996813
The crossing number of a graph @math is the least number of crossings over all possible drawings of @math . We present a structural characterization of graphs with crossing number one.
A great deal of attention has been given to 2-crossing-critical graphs @cite_3 @cite_11 @cite_13 @cite_8 @cite_6 @cite_4 @cite_12 . For a positive integer @math , the @math on @math vertices, is the graph obtained from a @math -cycle by joining vertices with distance @math in the cycle. Bokal, Opporowski, Richter and Salazar @cite_12 characterized all 3-connected 2-crossing-critical graphs that contain a @math as a minor and all the ones not containing a @math as a minor. They also showed how to obtain all the not 3-connected 2-crossing-critical graphs from the 3-connected ones, and showed that there exist only finitely many 3-connected 2-crossing-critical graphs with no @math minor. It remains to characterize or enumerate all the 3-connected 2-crossing-critical graphs with a @math but no @math as a minor. We hope this work can help determine these remaining @math -crossing-critical graphs.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_3", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2507315201", "2126687412", "2112497773", "2056837081" ], "abstract": [ "It is very well-known that there are precisely two minimal non-planar graphs: K_5 and K_{3,3} (degree 2 vertices being irrelevant in this context). In the language of crossing numbers, these are the only 1-crossing-critical graphs: they each have crossing number at least one, and every proper subgraph has crossing number less than one. In 1987, Kochol exhibited an infinite family of 3-connected, simple, 2-crossing-critical graphs. In this work, we: (i) determine all the 3-connected 2-crossing-critical graphs that contain a subdivision of the Möbius ladder V_10; (ii) show how to obtain all the not 3-connected 2-crossing-critical graphs from the 3-connected ones; (iii) show that there are only finitely many 3-connected 2-crossing-critical graphs not containing a subdivision of V_10; and (iv) determine all the 3-connected 2-crossing-critical graphs that do not contain a subdivision of V_8.", "We prove that, for every positive integer k, there is an integer N such that every 4-connected non-planar graph with at least N vertices has a minor isomorphic to K_{4,k}, the graph obtained from a cycle of length 2k+1 by adding an edge joining every pair of vertices at distance exactly k, or the graph obtained from a cycle of length k by adding two vertices adjacent to each other and to every vertex on the cycle. We also prove a version of this for subdivisions rather than minors, and relax the connectivity to allow 3-cuts with one side planar and of bounded size. We deduce that for every integer k there are only finitely many 3-connected 2-crossing-critical graphs with no subdivision isomorphic to the graph obtained from a cycle of length 2k by joining all pairs of diagonally opposite vertices.", "A graph is crossing-critical if deleting any edge decreases its crossing number on the plane. 
It is proved that, for any n >= 3, there is an infinite family of 3-connected crossing-critical graphs with crossing number n.", "A graph is crossing-critical if deleting any edge decreases its crossing number on the plane. For any n ⩾ 2 we present a construction of an infinite family of 3-connected crossing-critical graphs with crossing number n." ] }
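The graph family central to the characterizations above — the Möbius ladder V_{2n} (e.g. V_8, V_10), described in the survey paragraph as a 2n-cycle with chords joining vertices at distance n — is easy to construct. A small sketch (function name is mine):

```python
def mobius_ladder(n):
    """Edge set of the Moebius ladder V_{2n}: a 2n-cycle plus the n chords
    joining each pair of diagonally opposite vertices (distance n apart)."""
    m = 2 * n
    cycle = [(i, (i + 1) % m) for i in range(m)]   # the 2n-cycle
    chords = [(i, i + n) for i in range(n)]        # antipodal chords
    return cycle + chords
```

For n = 5 this yields V_10, the cubic graph on 10 vertices and 15 edges that separates the two regimes in the 2-crossing-critical classification quoted above.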
1901.10109
2913416354
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is . Although there have been extensive studies on social search, existing methods only focus on the of query results but ignore the . In this paper, we propose a novel Semantic and Influence aware @math -Representative ( @math -SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A @math -SIR query retrieves a set of @math elements with the maximum over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely (MTTS) and (MTTD), to process @math -SIR queries in real-time. Both algorithms leverage the ranked lists maintained on each topic for @math -SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of @math -SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for @math -SIR processing.
Keyword-based approaches @cite_39 @cite_24 @cite_15 @cite_23 @cite_28 @cite_35 @cite_10 @cite_33 typically define to retrieve @math elements with the highest scores as the results where the scoring functions combine the to query keywords (measured by TF-IDF or BM25) with other contexts such as @cite_15 @cite_28 @cite_35 @cite_23 , @cite_39 @cite_23 , and @cite_10 . They also design different indices to support instant updates and efficient top- @math query processing. However, keyword queries are substantially different from the query and thus keyword-based methods cannot be trivially adapted to process queries based on topic modeling.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_28", "@cite_39", "@cite_24", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "1965667542", "2141957180", "2014318353", "2233653089" ], "abstract": [ "A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity - log p(rvt ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by r) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.", "Given a set @math of @math strings of total length @math , our task is to report the \"most relevant\"strings for a given query pattern @math . This involves somewhat more advanced query functionality than the usual pattern matching, as some notion of \"most relevant\" is involved. In information retrieval literature, this task is best achieved by using inverted indexes. However, inverted indexes work only for some predefined set of patterns. 
In the pattern matching community, the most popular pattern-matching data structures are suffix trees and suffix arrays. However, a typical suffix tree search involves going through all the occurrences of the pattern over the entire string collection, which might be a lot more than the required relevant documents. The first formal framework to study such kind of retrieval problems was given by [Muthukrishnan, 2002]. He considered two metrics for relevance: frequency and proximity. He took a threshold-based approach on these metrics and gave data structures taking @math words of space. We study this problem in a slightly different framework of reporting the top @math most relevant documents (in sorted order) under similar and more general relevance metrics. Our framework gives a linear space data structure with optimal query times for arbitrary score functions. As a corollary, it improves the space utilization for the problems in [Muthukrishnan, 2002] while maintaining optimal query performance. We also develop compressed variants of these data structures for several specific relevance metrics.", "We propose succinct data structures for text retrieval systems supporting document listing queries and ranking queries based on the tf*idf (term frequency times inverse document frequency) scores of documents. Traditional data structures for these problems support queries only for some predetermined keywords. Recently Muthukrishnan proposed a data structure for document listing queries for arbitrary patterns at the cost of data structure size. For computing the tf*idf scores there have been no efficient data structures for arbitrary patterns. Our new data structures support these queries using small space. The space is only 2/ε times the size of compressed documents plus 10n bits for a document collection of length n, for any 0 < ε ≤ 1. This is much smaller than the previous O(n log n) bit data structures. 
Query time is O(m + q log^ε n) for listing and computing tf*idf scores for all q documents containing a given pattern of length m. Our data structures are flexible in the sense that they support queries for arbitrary patterns.", "We describe a legal question answering system which combines legal information retrieval and textual entailment. We have evaluated our system using the data from the first competition on legal information extraction entailment (COLIEE) 2014. The competition focuses on two aspects of legal information processing related to answering yes/no questions from Japanese legal bar exams. The shared task consists of two phases: legal ad hoc information retrieval and textual entailment. The first phase requires the identification of Japan civil law articles relevant to a legal bar exam query. We have implemented two unsupervised baseline models (tf-idf and Latent Dirichlet Allocation (LDA)-based Information Retrieval (IR)), and a supervised model, Ranking SVM, for the task. The features of the model are a set of words, and scores of an article based on the corresponding baseline models. The results show that the Ranking SVM model nearly doubles the Mean Average Precision compared with both baseline models. The second phase is to answer “Yes” or “No” to previously unseen queries, by comparing the meanings of queries with relevant articles. The features used for phase two are syntactic semantic similarities and identification of negation antonym relations. The results show that our method, combined with the rule-based model and the unsupervised model, outperforms the SVM-based supervised model." ] }
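Several of the retrieval abstracts above rank documents by tf*idf. The basic score they refer to can be sketched in a few lines — a textbook tf·idf with idf(t) = log(N / df(t)), not any specific cited system:

```python
import math
from collections import Counter

def tfidf_scores(query_terms, docs):
    """Score each tokenized document for a bag-of-words query:
    sum over query terms of tf(t, d) * idf(t)."""
    N = len(docs)
    tfs = [Counter(d) for d in docs]                 # term frequencies per doc
    df = Counter(t for tf in tfs for t in tf)        # document frequencies
    def idf(t):
        return math.log(N / df[t]) if df[t] else 0.0
    return [sum(tf[t] * idf(t) for t in query_terms) for tf in tfs]
```

A term appearing in every document gets idf = log(1) = 0 and so contributes nothing, which is the discrimination property the probabilistic-indexing abstract motivates.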
1901.10109
2913416354
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is . Although there have been extensive studies on social search, existing methods only focus on the of query results but ignore the . In this paper, we propose a novel Semantic and Influence aware @math -Representative ( @math -SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A @math -SIR query retrieves a set of @math elements with the maximum over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely (MTTS) and (MTTD), to process @math -SIR queries in real-time. Both algorithms leverage the ranked lists maintained on each topic for @math -SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of @math -SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for @math -SIR processing.
As the metrics for textual relevance cannot fully represent the semantic relevance between user interest and text, recent work @cite_18 @cite_17 introduces topic models @cite_41 into social search, where user queries and elements are modeled as vectors in the topic space. The relevance between a query and an element is measured by cosine similarity. They define to retrieve @math most relevant elements to a query vector. However, existing methods typically consider the of results but ignore the . Therefore, the algorithms in @cite_18 @cite_17 cannot be used to process queries that emphasize the of results.
{ "cite_N": [ "@cite_41", "@cite_18", "@cite_17" ], "mid": [ "2141957180", "2082729696", "2113640060", "2139688392" ], "abstract": [ "Given a set @math of @math strings of total length @math , our task is to report the \"most relevant\"strings for a given query pattern @math . This involves somewhat more advanced query functionality than the usual pattern matching, as some notion of \"most relevant\" is involved. In information retrieval literature, this task is best achieved by using inverted indexes. However, inverted indexes work only for some predefined set of patterns. In the pattern matching community, the most popular pattern-matching data structures are suffix trees and suffix arrays. However, a typical suffix tree search involves going through all the occurrences of the pattern over the entire string collection, which might be a lot more than the required relevant documents. The first formal framework to study such kind of retrieval problems was given by [Muthukrishnan, 2002]. He considered two metrics for relevance: frequency and proximity. He took a threshold-based approach on these metrics and gave data structures taking @math words of space. We study this problem in a slightly different framework of reporting the top @math most relevant documents (in sorted order) under similar and more general relevance metrics. Our framework gives linear space data structure with optimal query times for arbitrary score functions. As a corollary, it improves the space utilization for the problems in [Muthukrishnan, 2002] while maintaining optimal query performance. We also develop compressed variants of these data structures for several specific relevance metrics.", "This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval and a comparative concept of relevance is explicated in terms of the theory of probability. 
The resulting technique called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.", "While numerous metrics for information retrieval are available in the case of binary relevance, there is only one commonly used metric for graded relevance, namely the Discounted Cumulative Gain (DCG). A drawback of DCG is its additive nature and the underlying independence assumption: a document in a given position has always the same gain and discount independently of the documents shown above it. Inspired by the \"cascade\" user model, we present a new editorial metric for graded relevance which overcomes this difficulty and implicitly discounts documents which are shown below very relevant documents. 
More precisely, this new metric is defined as the expected reciprocal length of time that the user will take to find a relevant document. This can be seen as an extension of the classical reciprocal rank to the graded relevance case and we call this metric Expected Reciprocal Rank (ERR). We conduct an extensive evaluation on the query logs of a commercial search engine and show that ERR correlates better with clicks metrics than other editorial metrics.", "This paper presents two new document ranking models for Web search based upon the methods of semantic representation and the statistical translation-based approach to information retrieval (IR). Assuming that a query is parallel to the titles of the documents clicked on for that query, large amounts of query-title pairs are constructed from clickthrough data; two latent semantic models are learned from this data. One is a bilingual topic model within the language modeling framework. It ranks documents for a query by the likelihood of the query being a semantics-based translation of the documents. The semantic representation is language independent and learned from query-title pairs, with the assumption that a query and its paired titles share the same distribution over semantic topics. The other is a discriminative projection model within the vector space modeling framework. Unlike Latent Semantic Analysis and its variants, the projection matrix in our model, which is used to map from term vectors into sematic space, is learned discriminatively such that the distance between a query and its paired title, both represented as vectors in the projected semantic space, is smaller than that between the query and the titles of other documents which have no clicks for that query. These models are evaluated on the Web search task using a real world data set. Results show that they significantly outperform their corresponding baseline models, which are state-of-the-art." ] }
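In the topic-model retrieval setting described above, both queries and elements are vectors in topic space and relevance is cosine similarity, so a top-k query reduces to a ranking. A minimal brute-force sketch (not the paper's MTTS/MTTD algorithms, which add sliding windows and theoretical guarantees; names are mine):

```python
import math

def cosine(u, v):
    """Cosine similarity of two topic vectors; 0 if either is all-zero."""
    num = sum(a * b for a, b in zip(u, v))
    du = math.sqrt(sum(a * a for a in u))
    dv = math.sqrt(sum(b * b for b in v))
    return num / (du * dv) if du and dv else 0.0

def top_k(query, elements, k):
    """Indices of the k elements most cosine-similar to the query vector."""
    ranked = sorted(range(len(elements)),
                    key=lambda i: cosine(query, elements[i]),
                    reverse=True)
    return ranked[:k]
```

The per-topic ranked lists mentioned in the abstract exist precisely to avoid this O(n) scan over all elements at query time.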
1901.10109
2913416354
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is . Although there have been extensive studies on social search, existing methods only focus on the of query results but ignore the . In this paper, we propose a novel Semantic and Influence aware @math -Representative ( @math -SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A @math -SIR query retrieves a set of @math elements with the maximum over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely (MTTS) and (MTTD), to process @math -SIR queries in real-time. Both algorithms leverage the ranked lists maintained on each topic for @math -SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of @math -SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for @math -SIR processing.
There have been extensive studies on social stream summarization @cite_30 @cite_26 @cite_22 @cite_19 @cite_6 @cite_25 @cite_31 @cite_32 : the problem of extracting a set of elements from social streams. @cite_26 @cite_22 propose a framework for social stream summarization based on dynamic clustering. @cite_32 focus on the personalized summarization problem that takes users' interests into account. Olariu @cite_19 devises a graph-based approach to abstractive social summarization. @cite_6 study the multimedia summarization problem on social streams. @cite_25 investigate the multi-view opinion summarization of social streams. Agarwal and Ramamritham @cite_30 propose a graph-based method for contextual summarization of social event streams. @cite_29 consider maintaining a sketch for a social stream to best preserve the latent topic distribution.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_29", "@cite_32", "@cite_6", "@cite_19", "@cite_31", "@cite_25" ], "mid": [ "1936155969", "2611254175", "2741807132", "179757531" ], "abstract": [ "In many decision-making scenarios, people can benefit from knowing what other people's opinions are. As more and more evaluative documents are posted on the Web, summarizing these useful resources becomes a critical task for many organizations and individuals. This paper presents a framework for summarizing a corpus of evaluative documents about a single entity by a natural language summary. We propose two summarizers: an extractive summarizer and an abstractive one. As an additional contribution, we show how our abstractive summarizer can be modified to generate summaries tailored to a model of the user preferences that is solidly grounded in decision theory and can be effectively elicited from users. We have tested our framework in three user studies. In the first one, we compared the two summarizers. They performed equally well relative to each other quantitatively, while significantly outperforming a baseline standard approach to multidocument summarization. Trends in the results as well as qualitative comments from participants suggest that the summarizers have different strengths and weaknesses. After this initial user study, we realized that the diversity of opinions expressed in the corpus (i.e., its controversiality) might play a critical role in comparing abstraction versus extraction. To clearly pinpoint the role of controversiality, we ran a second user study in which we controlled for the degree of controversiality of the corpora that were summarized for the participants. The outcome of this study indicates that for evaluative text abstraction tends to be more effective than extraction, particularly when the corpus is controversial. In the third user study we assessed the effectiveness of our user tailoring strategy. 
The results of this experiment confirm that user tailored summaries are more informative than untailored ones.", "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28 (absolute) in ROUGE-L scores.", "How can we summarize a dynamic data stream when elements selected for the summary can be deleted at any time? This is an important challenge in online services, where the users generating the data may decide to exercise their right to restrict the service provider from using (part of) their data due to privacy concerns. Motivated by this challenge, we introduce the dynamic deletion-robust submodular maximization problem. We develop the first resilient streaming algorithm, called ROBUST-STREAMING, with a constant factor approximation guarantee to the optimum solution. 
We evaluate the effectiveness of our approach on several real-world applications, including summarizing (1) streams of geo-coordinates (2); streams of images; and (3) click-stream log data, consisting of 45 million feature vectors from a news recommendation task.", "Many methods, including supervised and unsupervised algorithms, have been developed for extractive document summarization. Most supervised methods consider the summarization task as a two-class classification problem and classify each sentence individually without leveraging the relationship among sentences. The unsupervised methods use heuristic rules to select the most informative sentences into a summary directly, which are hard to generalize. In this paper, we present a Conditional Random Fields (CRF) based framework to keep the merits of the above two kinds of approaches while avoiding their disadvantages. What is more, the proposed framework can take the outcomes of previous methods as features and seamlessly integrate them. The key idea of our approach is to treat the summarization task as a sequence labeling problem. In this view, each document is a sequence of sentences and the summarization procedure labels the sentences by 1 and 0. The label of a sentence depends on the assignment of labels of others. We compared our proposed approach with eight existing methods on an open benchmark data set. The results show that our approach can improve the performance by more than 7.1 and 12.1 over the best supervised baseline and unsupervised baseline respectively in terms of two popular metrics F1 and ROUGE-2. Detailed analysis of the improvement is presented as well." ] }
1901.10109
2913416354
Massive volumes of data continuously generated on social platforms have become an important information source for users. A primary method to obtain fresh and valuable information from social streams is . Although there have been extensive studies on social search, existing methods only focus on the of query results but ignore the . In this paper, we propose a novel Semantic and Influence aware @math -Representative ( @math -SIR) query for social streams based on topic modeling. Specifically, we consider that both user queries and elements are represented as vectors in the topic space. A @math -SIR query retrieves a set of @math elements with the maximum over the sliding window at query time w.r.t. the query vector. The representativeness of an element set comprises both semantic and influence scores computed by the topic model. Subsequently, we design two approximation algorithms, namely (MTTS) and (MTTD), to process @math -SIR queries in real-time. Both algorithms leverage the ranked lists maintained on each topic for @math -SIR processing with theoretical guarantees. Extensive experiments on real-world datasets demonstrate the effectiveness of @math -SIR query compared with existing methods as well as the efficiency and scalability of our proposed algorithms for @math -SIR processing.
Submodular maximization has recently attracted a lot of research interest for its theoretical significance and wide applications. The standard approaches to submodular maximization with a cardinality constraint are the greedy heuristic @cite_0 and its improved version CELF @cite_36, both of which are @math -approximate. Badanidiyuru and Vondrak @cite_27 propose several approximation algorithms for submodular maximization with general constraints. @cite_34 and @cite_12 study the submodular maximization problem in the distributed and streaming settings. @cite_40 and @cite_13 further investigate submodular maximization in the sliding window model. However, the above algorithms do not utilize any indices for acceleration and are thus much less efficient for @math -SIR processing than the MTTS and MTTD algorithms proposed in this paper.
{ "cite_N": [ "@cite_36", "@cite_0", "@cite_27", "@cite_40", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "2547615739", "2179494254", "1542515633", "2540677289" ], "abstract": [ "In this paper we study the extraction of representative elements in the data stream model in the form of submodular maximization. Different from the previous work on streaming submodular maximization, we are interested only in the recent data, and study the maximization problem over sliding windows. We provide a general reduction from the sliding window model to the standard streaming model, and thus our approach works for general constraints as long as there is a corresponding streaming algorithm in the standard streaming model. As a consequence, we obtain the first algorithms in the sliding window model for maximizing a monotone non-monotone submodular function under cardinality and matroid constraints. We also propose several heuristics and show their efficiency in real-world datasets.", "We consider the Unconstrained Submodular Maximization problem in which we are given a nonnegative submodular function @math , and the objective is to find a subset @math maximizing @math . This is one of the most basic submodular optimization problems, having a wide range of applications. Some well-known problems captured by Unconstrained Submodular Maximization include Max-Cut, Max-DiCut, and variants of Max-SAT and maximum facility location. We present a simple randomized linear time algorithm achieving a tight approximation guarantee of 1 2, thus matching the known hardness result of Feige, Mirrokni, and Vondrak [SIAM J. Comput., 40 (2011), pp. 1133--1153]. Our algorithm is based on an adaptation of the greedy approach which exploits certain symmetry properties of the problem.", "Consider a suboptimal solution S for a maximization problem. Suppose S's value is small compared to an optimal solution OPT to the problem, yet S is structurally similar to OPT. 
A natural question in this setting is whether there is a way of improving S based solely on this information. In this paper we introduce the Structural Continuous Greedy Algorithm, answering this question affirmatively in the setting of the Nonmonotone Submodular Maximization Problem. We improve on the best approximation factor known for this problem. In the Nonmonotone Submodular Maximization Problem we are given a non-negative submodular function f, and the objective is to find a subset maximizing f. Our method yields an 0.42-approximation for this problem, improving on the current best approximation factor of 0.41 given by Gharan and Vondrak [5]. On the other hand, [4] showed a lower bound of 0.5 for this problem.", "Maximizing submodular functions under cardinality constraints lies at the core of numerous data mining and machine learning applications, including data diversification, data summarization, and coverage problems. In this work, we study this question in the context of data streams, where elements arrive one at a time, and we want to design low-memory and fast update-time algorithms that maintain a good solution. Specifically, we focus on the sliding window model, where we are asked to maintain a solution that considers only the last W items. In this context, we provide the first non-trivial algorithm that maintains a provable approximation of the optimum using space sublinear in the size of the window. In particular we give a 1 3 - e approximation algorithm that uses space polylogarithmic in the spread of the values of the elements, δ, and linear in the solution size k for any constant e > 0. At the same time, processing each element only requires a polylogarithmic number of evaluations of the function itself. When a better approximation is desired, we show a different algorithm that, at the cost of using more memory, provides a 1 2 - e approximation, and allows a tunable trade-off between average update time and space. 
This algorithm matches the best known approximation guarantees for submodular optimization in insertion-only streams, a less general formulation of the problem. We demonstrate the efficacy of the algorithms on a number of real world datasets, showing that their practical performance far exceeds the theoretical bounds. The algorithms preserve high quality solutions in streams with millions of items, while storing a negligible fraction of them." ] }
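The greedy heuristic for cardinality-constrained monotone submodular maximization discussed in the paragraph above can be sketched in a few lines. The coverage objective used in the usage example is one standard monotone submodular function; all names here are illustrative rather than taken from the cited papers.

```python
def greedy_max(ground_set, f, k):
    """Greedy (1 - 1/e)-approximate maximization of a monotone
    submodular set function f under the constraint |S| <= k."""
    selected = set()
    for _ in range(k):
        # marginal gain of each remaining candidate element
        gains = {e: f(selected | {e}) - f(selected)
                 for e in ground_set - selected}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0:
            break  # no remaining element adds value
        selected.add(best)
    return selected
```

For a coverage instance such as `{'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5}}` with `f(S)` the size of the union and `k = 2`, greedy first picks `'a'` (gain 3), then `'c'` (gain 2), covering all five items. CELF obtains the same guarantee faster by lazily re-evaluating these marginal gains, exploiting the fact that submodularity makes gains non-increasing.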
1901.10076
2914865839
We study the learnability of a class of compact operators known as Schatten--von Neumann operators. These operators between infinite-dimensional function spaces play a central role in a variety of applications in learning theory and inverse problems. We address the question of sample complexity of learning Schatten-von Neumann operators and provide an upper bound on the number of measurements required for the empirical risk minimizer to generalize with arbitrary precision and probability, as a function of class parameter @math . Our results give generalization guarantees for regression of infinite-dimensional signals from infinite-dimensional data. Next, we adapt the representer theorem of Abernethy to show that empirical risk minimization over an a priori infinite-dimensional, non-compact set, can be converted to a convex finite dimensional optimization problem over a compact set. In summary, the class of @math -Schatten--von Neumann operators is probably approximately correct (PAC)-learnable via a practical convex program for any @math .
On the algorithmic side, Abernethy et al. @cite_10 propose learning algorithms for a problem related to ours. They show how, in the context of collaborative filtering, a number of existing algorithms can be abstractly modeled as learning compact operators, and they derive a representer theorem which casts the problem as optimization over matrices for general losses and regularizers.
{ "cite_N": [ "@cite_10" ], "mid": [ "1666942233", "2963085847", "2201744460", "2162221686" ], "abstract": [ "This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 . An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.", "We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference to learn this powerful generative model. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. 
The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.", "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. 
Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "In this paper, we propose a new and computationally efficient framework for learning sparse models. We formulate a unified approach that contains as particular cases models promoting sparse synthesis and analysis type of priors, and mixtures thereof. The supervised training of the proposed model is formulated as a bilevel optimization problem, in which the operators are optimized to achieve the best possible performance on a specific task, e.g., reconstruction or classification. By restricting the operators to be shift invariant, our approach can be thought as a way of learning analysis+synthesis sparsity-promoting convolutional operators. Leveraging recent ideas on fast trainable regressors designed to approximate exact sparse codes, we propose a way of constructing feed-forward neural networks capable of approximating the learned models at a fraction of the computational cost of exact solvers. In the shift-invariant case, this leads to a principled way of constructing task-specific convolutional networks. We illustrate the proposed models on several experiments in music analysis and image processing applications." ] }
1901.09953
2911736397
Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32 * 32 or 128 * 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374 * 374 in PASCAL2.
In order to generate high-resolution images from low-resolution ones, the SRGAN model @cite_19 was proposed to realize image super-resolution. It uses a CNN to extract features from low-resolution images, and its success testifies to the strong capability of generative models in image super-resolution applications. Another popular and successful modification of the GAN is DCGAN @cite_20, comprising transposed CNNs designed especially for unsupervised learning in image-related computer vision applications. Convolutional strides and transposed convolutions are applied for downsampling and upsampling, respectively. However, even with DCGAN, GANs quickly hit a bottleneck on large images: increasing the complexity of the generator does not necessarily improve image quality. Moreover, StackGAN @cite_4 uses a two-stage GAN to generate images of size @math , which are relatively large for state-of-the-art generative models.
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_20" ], "mid": [ "2523714292", "2963470893", "2798844427", "2949257576" ], "abstract": [ "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "This paper proposes an unpaired learning method for image enhancement. 
Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective. The global U-Net acts as the generator in our GAN model. Second, we improve Wasserstein GAN (WGAN) with an adaptive weighting scheme. With this scheme, training converges faster and better, and is less sensitive to parameters than WGAN-GP. Finally, we propose to use individual batch normalization layers for generators in two-way GANs. It helps generators better adapt to their own input distributions. All together, they significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.", "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. 
Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL" ] }
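The strided-convolution downsampling and transposed-convolution upsampling used by DCGAN-style models follow simple output-size arithmetic, which can be sanity-checked with the small helpers below. These are the standard size formulas (with no output padding or dilation); the function names are illustrative.

```python
def conv_out(n, k, s=1, p=0):
    """Spatial size after a strided convolution (downsampling)."""
    return (n + 2 * p - k) // s + 1


def conv_transpose_out(n, k, s=1, p=0):
    """Spatial size after a transposed convolution (upsampling)."""
    return (n - 1) * s - 2 * p + k
```

With kernel 4, stride 2, and padding 1, each transposed-convolution layer doubles the spatial size, so a DCGAN-style generator grows a 4x4 feature map to 8, 16, 32, and finally 64 over four layers; the discriminator's strided convolutions with the same settings halve it again.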
1901.09953
2911736397
Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32 * 32 or 128 * 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374 * 374 in PASCAL2.
However, representing images directly in pixel space, as traditional GANs do, may not be efficient. Tensor-based representations have recently been adopted instead. Recent papers @cite_18 @cite_1 apply tensor representations to dictionary learning, achieving smaller dictionary sizes and better results than traditional methods. Detailed theoretical analysis of tensor decomposition and its applications is provided in @cite_7. Tensor decomposition lies at the core of tensor-based methods, providing an alternative means of representing data such as large images.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_7" ], "mid": [ "2181101938", "2153496554", "1548467509", "2963225922" ], "abstract": [ "Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the low-rank constrained CNNs delivers significantly better performance than their non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves @math accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is reduced by half while the performance is still comparable. Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs.", "Confronted with the high-dimensional tensor-like visual data, we derive a method for the decomposition of an observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns. 
The optimal rank-(R1,R2, ...Rn) tensor decomposition model that we propose in this paper, could automatically explore the low-dimensional structure of the tensor data, seeking optimal dimension and basis for each mode and separating the irregular patterns. Consequently, our method accounts for the implicit multi-factor structure of tensor-like visual data in an explicit and concise manner. In addition, the optimal tensor decomposition is formulated as a convex optimization through relaxation technique. We then develop a block coordinate descent (BCD) based algorithm to efficiently solve the problem. In experiments, we show several applications of our method in computer vision and the results are promising.", "In this paper we propose novel methods for compression and recovery of multilinear data under limited sampling. We exploit the recently proposed tensor- Singular Value Decomposition (t-SVD)[1], which is a group theoretic framework for tensor decomposition. In contrast to popular existing tensor decomposition techniques such as higher-order SVD (HOSVD), t-SVD has optimality properties similar to the truncated SVD for matrices. Based on t-SVD, we first construct novel tensor-rank like measures to characterize informational and structural complexity of multilinear data. Following that we outline a complexity penalized algorithm for tensor completion from missing entries. As an application, 3-D and 4-D (color) video data compression and recovery are considered. We show that videos with linear camera motion can be represented more efficiently using t-SVD compared to traditional approaches based on vectorizing or flattening of the tensors. Application of the proposed tensor completion algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. 
In conclusion we point out several research directions and implications to online prediction of multilinear data.", "Abstract: Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the low-rank constrained CNNs delivers significantly better performance than their non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves @math accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is reduced by half while the performance is still comparable. Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs." ] }
1901.10080
2912501354
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
In the context of fairness, most of the papers in the literature address the binary classification task with categorical (or even binary) sensitive features @cite_17 @cite_38 ; a broad review of classification with categorical sensitive features is provided in @cite_8 . This task is indeed very important, because it is closely related to the possibility of having access to specific benefits (e.g. loans) without being discriminated against on the basis of gender or ethnic characteristics. On the other hand, the set of problems solvable by these methods is limited and does not cover all real-world scenarios.
{ "cite_N": [ "@cite_38", "@cite_8", "@cite_17" ], "mid": [ "2790744245", "2116984840", "2040825624", "2150997454" ], "abstract": [ "We present a systematic approach for achieving fairness in a binary classification setting. While we focus on two well-known quantitative definitions of fairness, our approach encompasses many other previously studied definitions as special cases. The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints. We introduce two reductions that work for any representation of the cost-sensitive classifier and compare favorably to prior baselines on a variety of data sets, while overcoming several of their disadvantages.", "Recently, the following Discrimination-Aware Classification Problem was introduced: Suppose we are given training data that exhibit unlawful discrimination; e.g., toward sensitive attributes such as gender or ethnicity. The task is to learn a classifier that optimizes accuracy, but does not have this discrimination in its predictions on test data. This problem is relevant in many settings, such as when the data are generated by a biased decision process or when the sensitive attribute serves as a proxy for unobserved features. In this paper, we concentrate on the case with only one binary sensitive attribute and a two-class classification problem. We first study the theoretically optimal trade-off between accuracy and non-discrimination for pure classifiers. Then, we look at algorithmic solutions that preprocess the data to remove discrimination before a classifier is learned. We survey and extend our existing data preprocessing techniques, being suppression of the sensitive attribute, massaging the dataset by changing class labels, and reweighing or resampling the data to remove discrimination without relabeling instances. 
These preprocessing techniques have been implemented in a modified version of Weka and we present the results of experiments on real-life data.", "With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect people's lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be socially and legally fair from a viewpoint of social responsibility, namely, it must be unbiased and nondiscriminatory in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. From a privacy-preserving viewpoint, this can be interpreted as hiding sensitive information when classification results are observed. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.", "Recently, the following discrimination aware classification problem was introduced: given a labeled dataset and an attribute B, find a classifier with high predictive accuracy that at the same time does not discriminate on the basis of the given attribute B. This problem is motivated by the fact that often available historic data is biased due to discrimination, e.g., when B denotes ethnicity. 
Using the standard learners on this data may lead to wrongfully biased classifiers, even if the attribute B is removed from training data. Existing solutions for this problem consist in “cleaning away” the discrimination from the dataset before a classifier is learned. In this paper we study an alternative approach in which the non-discrimination constraint is pushed deeply into a decision tree learner by changing its splitting criterion and pruning strategy. Experimental evaluation shows that the proposed approach advances the state-of-the-art in the sense that the learned decision trees have a lower discrimination than models provided by previous methods, with little loss in accuracy." ] }
1901.10080
2912501354
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Focusing on the works able to handle regression tasks, we can divide them by the type of problems they are able to solve and the notion of fairness they exploit. As we will see, with very few exceptions -- e.g. @cite_24 -- most of the methods in the literature are not able to deal with both classification and regression tasks, and with both numerical and categorical sensitive features, within a unified approach supported by theoretical consistency results. In fact, they introduce task-oriented notions of fairness and/or do not address the statistical consistency of their method with respect to the risk and the fairness measure employed.
{ "cite_N": [ "@cite_24" ], "mid": [ "2912501354", "1664169458", "2127508398", "2766939712" ], "abstract": [ "We tackle the problem of algorithmic fairness, where the goal is to avoid the unfairly influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.", "We consider high dimensional sparse regression, and develop strategies able to deal with arbitrary -- possibly, severe or coordinated -- errors in the covariance matrix @math . These may come from corrupted data, persistent experimental errors, or malicious respondents in surveys recommender systems, etc. Such non-stochastic error-in-variables problems are notoriously difficult to treat, and as we demonstrate, the problem is particularly pronounced in high-dimensional settings where the primary goal is support recovery of the sparse regressor. 
We develop algorithms for support recovery in sparse regression, when some number @math out of @math total covariate response pairs are arbitrarily (possibly maliciously) corrupted . We are interested in understanding how many outliers, @math , we can tolerate, while identifying the correct support. To the best of our knowledge, neither standard outlier rejection techniques, nor recently developed robust regression algorithms (that focus only on corrupted response variables), nor recent algorithms for dealing with stochastic noise or erasures, can provide guarantees on support recovery. Perhaps surprisingly, we also show that the natural brute force algorithm that searches over all subsets of @math covariate response pairs, and all subsets of possible support coordinates in order to minimize regression error, is remarkably poor, unable to correctly identify the support with even @math corrupted points, where @math is the sparsity. This is true even in the basic setting we consider, where all authentic measurements and noise are independent and sub-Gaussian. In this setting, we provide a simple algorithm -- no more computationally taxing than OMP -- that gives stronger performance guarantees, recovering the support with up to @math corrupted points, where @math is the dimension of the signal to be recovered.", "We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). 
For a probabilistic model for uncertainty we show that even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are Ω(1)-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up Ω(1)-fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.", "Algorithmic decision making process now affects many aspects of our lives. Standard tools for machine learning, such as classification and regression, are subject to the bias in data, and thus direct application of such off-the-shelf tools could lead to a specific group being unfairly discriminated. Removing sensitive attributes of data does not solve this problem because a can arise when non-sensitive attributes and sensitive attributes are correlated. Here, we study a fair machine learning algorithm that avoids such a disparate impact when making a decision. Inspired by the two-stage least squares method that is widely used in the field of economics, we propose a two-stage algorithm that removes bias in the training data. The proposed algorithm is conceptually simple. 
Unlike most of existing fair algorithms that are designed for classification tasks, the proposed method is able to (i) deal with regression tasks, (ii) combine explanatory attributes to remove reverse discrimination, and (iii) deal with numerical sensitive attributes. The performance and fairness of the proposed algorithm are evaluated in simulations with synthetic and real-world datasets." ] }
1901.10080
2912501354
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
The largest family of methods tackles regression problems with a (single) categorical or binary sensitive feature @cite_29 @cite_5 @cite_30 @cite_14 . For example, in @cite_29 , a convex approach to regression is proposed, where the authors use a specific definition of fairness in order to obtain models that treat similar examples in a similar way, in the sense of the predicted outcome. The authors tackle the problem by introducing a new convex regularizer and by imposing this notion on different regression tasks. Another example is @cite_30 , where the authors use an adapted version of Demographic Parity @cite_1 , originally defined for classification, in the context of regression.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_29", "@cite_1", "@cite_5" ], "mid": [ "2912501354", "2947175317", "1961345416", "1800334520" ], "abstract": [ "We tackle the problem of algorithmic fairness, where the goal is to avoid the unfairly influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.", "Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a fair'' subspace. We apply this method to densest subgraph and @math -means. For densest subgraph, our approach based on fair projections allows to recover both theoretically and empirically an almost optimal, fair, dense subgraph hidden in the input data. 
We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of @math is NP-hard and we show a polynomial time algorithm with a matching approximation bound. We further apply our method to @math -means. In a previous paper, [NIPS 2017] showed that problems such as @math -means can be approximated up to a constant factor while ensuring that none of two protected class (e.g., gender, ethnicity) is disparately impacted. We show that fair projections generalize the concept of fairlet introduced by to any number of protected attributes and improve empirically the quality of the resulting clustering. We also present the first constant-factor approximation for an arbitrary number of protected attributes thus settling an open problem recently addressed in several works.", "With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect individuals' lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. 
We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.", "Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the @math minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. @PARASPLIT This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. @PARASPLIT These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone." ] }
1901.10080
2912501354
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
Reducing the regression problem to only categorical sensitive features is a serious limitation. In this sense, a few interesting papers present regression methods able to deal with continuous sensitive attributes @cite_24 @cite_2 @cite_10 . Differently from our approach, those authors impose other definitions of fairness (e.g. Disparate Impact @cite_38 , or even ad-hoc new definitions). Moreover, it is important to note that these methods do not naturally extend to the case of non-continuous sensitive attributes.
{ "cite_N": [ "@cite_24", "@cite_38", "@cite_10", "@cite_2" ], "mid": [ "2947175317", "2912501354", "2963803533", "2040825624" ], "abstract": [ "Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a fair'' subspace. We apply this method to densest subgraph and @math -means. For densest subgraph, our approach based on fair projections allows to recover both theoretically and empirically an almost optimal, fair, dense subgraph hidden in the input data. We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of @math is NP-hard and we show a polynomial time algorithm with a matching approximation bound. We further apply our method to @math -means. In a previous paper, [NIPS 2017] showed that problems such as @math -means can be approximated up to a constant factor while ensuring that none of two protected class (e.g., gender, ethnicity) is disparately impacted. We show that fair projections generalize the concept of fairlet introduced by to any number of protected attributes and improve empirically the quality of the resulting clustering. We also present the first constant-factor approximation for an arbitrary number of protected attributes thus settling an open problem recently addressed in several works.", "We tackle the problem of algorithmic fairness, where the goal is to avoid the unfairly influence of sensitive information, in the general context of regression with possible continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well known notions of fairness available in literature. 
We derive learning guarantees for our method, that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.", "We address the problem of algorithmic fairness: ensuring that sensitive variables do not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our approach. We specify our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can be easily added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step. Experiments indicate that the method is empirically effective and performs favorably against state-of-the-art approaches.", "With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect people's lives. For example, credit scoring is frequently determined based on the records of past credit data together with statistical prediction techniques. 
Needless to say, such determinations must be socially and legally fair from a viewpoint of social responsibility, namely, it must be unbiased and nondiscriminatory in sensitive features, such as race, gender, religion, and so on. Several researchers have recently begun to attempt the development of analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biases in determinations, due to the indirect influence of sensitive information. From a privacy-preserving viewpoint, this can be interpreted as hiding sensitive information when classification results are observed. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency." ] }
1901.10073
2913888055
Recovering class inheritance from C++ binaries has several security benefits including problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. Particularly, constructor inlining--or worse exclusion--due to optimization render class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of DeClassifier using Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5 and 71.4 true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, (2) a combination of ctor+dtor analysis provides much better inference than ctor only analysis.
OBJDigger, presented by @cite_25 , uses symbolic execution and inter-procedural data flow analysis to recover object instances, data members and methods of the same class. This is achieved by tracking the usage and propagation of the this pointer within and between functions. While the authors did not attempt to recover class inheritance, a method to achieve it was described. However, this method can only identify the primary base class, since it assumes that a base class writes its vptr only at offset zero from the object address. A secondary base class writes to a positive non-zero offset from the object address, but that case was not accounted for.
{ "cite_N": [ "@cite_25" ], "mid": [ "2144981449", "1986223934", "1986108927", "2154522949" ], "abstract": [ "Object-oriented programming complicates the already difficult task of reverse engineering software, and is being used increasingly by malware authors. Unlike traditional procedural-style code, reverse engineers must understand the complex interactions between object-oriented methods and the shared data structures with which they operate on, a tedious manual process. In this paper, we present a static approach that uses symbolic execution and inter-procedural data flow analysis to discover object instances, data members, and methods of a common class. The key idea behind our work is to track the propagation and usage of a unique object instance reference, called a this pointer. Our goal is to help malware reverse engineers to understand how classes are laid out and to identify their methods. We have implemented our approach in a tool called ObJDIGGER, which produced encouraging results when validated on real-world malware samples.", "In the context of reverse-engineering project we designed a use-case specification recovery technique for legacy information systems. With our technique, we can recover the alternative flows of each use-case of the system. It is based on a dynamic (i.e. runtime) analysis of the working of the system using execution traces. But \"traditional\" execution trace format do not contain enough information for this approach to work. Then we designed a new execution trace format together with the associated tool to get the program's dynamic decision tree corresponding to each of the use-case scenario. These trees are then processed to find the possible variants from the main scenario of each use-case. In this paper we first present our approach to the use-case specification recovery technique and the new trace format we designed. Then the decision tree compression technique is showed with a feasibility study. 
The contribution of the paper is our approach to the recovery of legacy systems' use-case, the new trace format and the decision tree processing technique.", "In mainstream OO languages, inheritance can be used to add new methods, or to override existing methods. Virtual classes and feature oriented programming are techniques which extend the mechanism of inheritance so that it is possible to refine nested classes as well. These techniques are attractive for programming in the large, because inheritance becomes a tool for manipulating whole class hierarchies rather than individual classes. Nevertheless, it has proved difficult to design static type systems for virtual classes, because virtual classes introduce dependent types. The compile-time type of an expression may depend on the run-time values of objects in that expression.We present a formal object calculus which implements virtual classes in a type-safe manner. Our type system uses a novel technique based on prototypes, which blur the distinction between compile-time and run-time. At run-time, prototypes act as objects, and they can be used in ordinary computations. At compile-time, they act as types. Prototypes are similar in power to dependent types, and subtyping is shown to be a form of partial evaluation. We prove that prototypes are type-safe but undecidable, and briefly outline a decidable semi-algorithm for dealing with them.", "This paper presents a method for automatic reconstruction of polymorphic class hierarchies from the assembly code obtained by compiling a C++ program. If the program is compiled with run-time type information (RTTI), class hierarchy is reconstructed via analysis of RTTI structures. In case RTTI structures are missing in the assembly, a technique based on the analysis of virtual function tables, constructors and destructors is used. A tool for automatic reconstruction of polymorphic class hierarchies that implements the described technique is presented. 
This tool is implemented as a plug in for IDA Pro Interactive Disassembler. Experimental study of the tool is provided." ] }
1901.10073
2913888055
Recovering class inheritance from C++ binaries has several security benefits including problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. Particularly, constructor inlining--or worse exclusion--due to optimization render class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of DeClassifier using Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5 and 71.4 true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, (2) a combination of ctor+dtor analysis provides much better inference than ctor only analysis.
@cite_13 presented SmartDec, which statically recovers certain C++-specific language constructs. It attempts to recover classes and their inheritance relationships, virtual and non-virtual member functions, calls to virtual functions, and exception-raising and -handling statements. Its main limitation is its inability to differentiate between inheritance and composition, which results in incorrect relationship inference.
{ "cite_N": [ "@cite_13" ], "mid": [ "1986108927", "2890042297", "1974715526", "144724653" ], "abstract": [ "In mainstream OO languages, inheritance can be used to add new methods, or to override existing methods. Virtual classes and feature oriented programming are techniques which extend the mechanism of inheritance so that it is possible to refine nested classes as well. These techniques are attractive for programming in the large, because inheritance becomes a tool for manipulating whole class hierarchies rather than individual classes. Nevertheless, it has proved difficult to design static type systems for virtual classes, because virtual classes introduce dependent types. The compile-time type of an expression may depend on the run-time values of objects in that expression.We present a formal object calculus which implements virtual classes in a type-safe manner. Our type system uses a novel technique based on prototypes, which blur the distinction between compile-time and run-time. At run-time, prototypes act as objects, and they can be used in ordinary computations. At compile-time, they act as types. Prototypes are similar in power to dependent types, and subtyping is shown to be a form of partial evaluation. We prove that prototypes are type-safe but undecidable, and briefly outline a decidable semi-algorithm for dealing with them.", "High-level C++ source code abstractions such as classes and methods greatly assist human analysts and automated algorithms alike when analyzing C++ programs. Unfortunately, these abstractions are lost when compiling C++ source code, which impedes the understanding of C++ executables. In this paper, we propose a system, OOAnalyzer, that uses an innovative new design to statically recover detailed C++ abstractions from executables in a scalable manner. 
OOAnalyzer's design is motivated by the observation that many human analysts reason about C++ programs by recognizing simple patterns in binary code and then combining these findings using logical inference, domain knowledge, and intuition. We codify this approach by combining a lightweight symbolic analysis with a flexible Prolog-based reasoning system. Unlike most existing work, OOAnalyzer is able to recover both polymorphic and non-polymorphic C++ classes. We show in our evaluation that OOAnalyzer assigns over 78 of methods to the correct class on our test corpus, which includes both malware and real-world software such as Firefox and MySQL. These recovered abstractions can help analysts understand the behavior of C++ malware and cleanware, and can also improve the precision of program analyses on C++ executables.", "Languages that lack static typing are ubiquitous in the world of mobile and web applications. The rapid rise of larger applications like interactive web GUIs, games, and cryptography presents a new range of implementation challenges for modern virtual machines to close the performance gap between typed and untyped languages. While all languages can benefit from efficient automatic memory management, languages like JavaScript present extra thrill with innocent-looking but difficult features like dynamically-sized arrays, deletable properties, and prototypes. Optimizing such languages requires complex dynamic techniques with more radical object layout strategies such as dynamically evolving representations for arrays. This paper presents a general approach for gathering temporal allocation site feedback that tackles both the general problem of object lifetime estimation and improves optimization of these problematic language features. 
We introduce a new implementation technique where allocation mementos processed by the garbage collector and runtime system efficiently tie objects back to allocation sites in the program and dynamically estimate object lifetime, representation, and size to inform three optimizations: pretenuring, pretransitioning, and presizing. Unlike previous work on pretenuring, our system utilizes allocation mementos to achieve fully dynamic allocation-site-based pretenuring in a production system. We implement all of our techniques in V8, a high performance virtual machine for JavaScript, and demonstrate solid performance improvements across a range of benchmarks.", "Abstract This paper shows how to integrate two complementary techniques for manipulating program invariants: dynamic detection and static verification. Dynamic detection proposes likely invariants based on program executions, but the resulting properties are not guaranteed to be true over all possible executions. Static verification checks that properties are always true, but it can be difficult and tedious to select a goal and to annotate programs for input to a static checker. Combining these techniques overcomes the weaknesses of each: dynamically detected invariants can annotate a program or provide goals for static verification, and static verification can confirm properties proposed by a dynamic tool. We have integrated a tool for dynamically detecting likely program invariants, Daikon, with a tool for statically verifying program properties, ESC Java. Daikon examines run-time values of program variables; it looks for patterns and relationships in those values, and it reports properties that are never falsified during test runs and that satisfy certain other conditions, such as being statistically justified. 
ESC Java takes as input a Java program annotated with preconditions, postconditions, and other assertions, and it reports which annotations cannot be statically verified and also warns of potential runtime errors, such as null dereferences and out-of-bounds array indices. Our prototype system runs Daikon, inserts its output into code as ESC Java annotations, and then runs ESC Java, which reports unverifiable annotations. The entire process is completely automatic, though users may provide guidance in order to improve results if desired. In preliminary experiments, ESC Java verified all or most of the invariants proposed by Daikon." ] }
1901.10073
2913888055
Recovering class inheritance from C++ binaries has several security benefits including problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. Particularly, constructor inlining--or worse exclusion--due to optimization render class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of DeClassifier using Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5 and 71.4 true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, (2) a combination of ctor+dtor analysis provides much better inference than ctor only analysis.
REWARDS @cite_19 is one of several data-structure reverse engineering tools (e.g., TIE @cite_27 , Laika @cite_18 ) that infer type information from binaries. It uses dynamic analysis to recover the syntax and semantics of data structures observed during execution. REWARDS only attempts to infer the primitive data types of variables and their semantics.
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_18" ], "mid": [ "2504609973", "191489030", "2129364433", "1986223934" ], "abstract": [ "With only the binary executable of a program, it is useful to discover the program's data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program's runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable's type gets resolved if it is involved in a type-revealing execution point or type sink. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables re-using the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery.", "A recurring problem in security is reverse engineering binary code to recover high-level language data abstractions and types. 
High-level programming languages have data abstractions such as buffers, structures, and local variables that all help programmers and program analyses reason about programs in a scalable manner. During compilation, these abstractions are removed as code is translated down to operations on registers and one globally addressed memory region. Reverse engineering consists of “undoing” the compilation to recover high-level information so that programmers, security professionals, and analyses can all more easily reason about the binary code. In this paper we develop novel techniques for reverse engineering data type abstractions from binary programs. At the heart of our approach is a novel type reconstruction system based upon binary code analysis. Our techniques and system can be applied as part of both static or dynamic analysis, thus are extensible to a large number of security settings. Our results on 87 programs show that TIE is both more accurate and more precise at recovering high-level types than existing mechanisms.", "A critical aspect of malware forensics is authorship analysis. The successful outcome of such analysis is usually determined by the reverse engineer’s skills and by the volume and complexity of the code under analysis. To assist reverse engineers in such a tedious and error-prone task, it is desirable to develop reliable and automated tools for supporting the practice of malware authorship attribution. In a recent work, machine learning was used to rank and select syntax-based features such as n-grams and flow graphs. The experimental results showed that the top ranked features were unique for each author, which was regarded as an evidence that those features capture the author’s programming styles. In this paper, however, we show that the uniqueness of features does not necessarily correspond to authorship. 
Specifically, our analysis demonstrates that many “unique” features selected using this method are clearly unrelated to the authors’ programming styles, for example, unique IDs or random but unique function names generated by the compiler; furthermore, the overall accuracy is generally unsatisfactory. Motivated by this discovery, we propose a layered Onion Approach for Binary Authorship Attribution called OBA2. The novelty of our approach lies in the three complementary layers: preprocessing, syntax-based attribution, and semantic-based attribution. Experiments show that our method produces results that not only are more accurate but have a meaningful connection to the authors’ styles. a 2014 The Author. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http: creativecommons.org licenses by-nc-nd 3.0 ).", "In the context of reverse-engineering project we designed a use-case specification recovery technique for legacy information systems. With our technique, we can recover the alternative flows of each use-case of the system. It is based on a dynamic (i.e. runtime) analysis of the working of the system using execution traces. But \"traditional\" execution trace format do not contain enough information for this approach to work. Then we designed a new execution trace format together with the associated tool to get the program's dynamic decision tree corresponding to each of the use-case scenario. These trees are then processed to find the possible variants from the main scenario of each use-case. In this paper we first present our approach to the use-case specification recovery technique and the new trace format we designed. Then the decision tree compression technique is showed with a feasibility study. The contribution of the paper is our approach to the recovery of legacy systems' use-case, the new trace format and the decision tree processing technique." ] }
1901.10073
2913888055
Recovering class inheritance from C++ binaries has several security benefits including problems such as decompilation and program hardening. Thanks to the optimization guidelines prescribed by the C++ standard, commercial C++ binaries tend to be optimized. While state-of-the-art class inheritance inference solutions are effective in dealing with unoptimized code, their efficacy is impeded by optimization. Particularly, constructor inlining--or worse exclusion--due to optimization render class inheritance recovery challenging. Further, while modern solutions such as MARX can successfully group classes within an inheritance sub-tree, they fail to establish directionality of inheritance, which is crucial for security-related applications (e.g. decompilation). We implemented a prototype of DeClassifier using Binary Analysis Platform (BAP) and evaluated DeClassifier against 16 binaries compiled using gcc under multiple optimization settings. We show that (1) DeClassifier can recover 94.5 and 71.4 true positive directed edges in the class hierarchy tree under O0 and O2 optimizations respectively, (2) a combination of ctor+dtor analysis provides much better inference than ctor only analysis.
OOAnalyzer @cite_5 primarily groups methods into classes by combining traditional binary analysis, symbolic analysis, and Prolog-based reasoning. The paper explains that class size and vtable size can be considered when deciding inheritance. Since OOAnalyzer also handles non-polymorphic classes, one would expect class size to be relied upon more heavily for this purpose. However, this was not evaluated, so there is no way to confirm the claim that OOAnalyzer can decide inheritance.
{ "cite_N": [ "@cite_5" ], "mid": [ "2890042297", "1986108927", "1586014638", "2015729052" ], "abstract": [ "High-level C++ source code abstractions such as classes and methods greatly assist human analysts and automated algorithms alike when analyzing C++ programs. Unfortunately, these abstractions are lost when compiling C++ source code, which impedes the understanding of C++ executables. In this paper, we propose a system, OOAnalyzer, that uses an innovative new design to statically recover detailed C++ abstractions from executables in a scalable manner. OOAnalyzer's design is motivated by the observation that many human analysts reason about C++ programs by recognizing simple patterns in binary code and then combining these findings using logical inference, domain knowledge, and intuition. We codify this approach by combining a lightweight symbolic analysis with a flexible Prolog-based reasoning system. Unlike most existing work, OOAnalyzer is able to recover both polymorphic and non-polymorphic C++ classes. We show in our evaluation that OOAnalyzer assigns over 78 of methods to the correct class on our test corpus, which includes both malware and real-world software such as Firefox and MySQL. These recovered abstractions can help analysts understand the behavior of C++ malware and cleanware, and can also improve the precision of program analyses on C++ executables.", "In mainstream OO languages, inheritance can be used to add new methods, or to override existing methods. Virtual classes and feature oriented programming are techniques which extend the mechanism of inheritance so that it is possible to refine nested classes as well. These techniques are attractive for programming in the large, because inheritance becomes a tool for manipulating whole class hierarchies rather than individual classes. Nevertheless, it has proved difficult to design static type systems for virtual classes, because virtual classes introduce dependent types. 
The compile-time type of an expression may depend on the run-time values of objects in that expression.We present a formal object calculus which implements virtual classes in a type-safe manner. Our type system uses a novel technique based on prototypes, which blur the distinction between compile-time and run-time. At run-time, prototypes act as objects, and they can be used in ordinary computations. At compile-time, they act as types. Prototypes are similar in power to dependent types, and subtyping is shown to be a form of partial evaluation. We prove that prototypes are type-safe but undecidable, and briefly outline a decidable semi-algorithm for dealing with them.", "Five alternative methods are proposed to perform multi-class classification tasks using genetic programming. These methods are: (1) binary decomposition, in which the problem is decomposed into a set of binary problems and standard genetic programming methods are applied; (2) static range selection, where the set of real values returned by a genetic program is divided into class boundaries using arbitrarily-chosen division points; (3) dynamic range selection, in which a subset of training examples are used to determine where, over the set of reals, class boundaries lie; (4) class enumeration, which constructs programs similar in syntactic structure to a decision tree; and (5) evidence accumulation, which allows separate branches of the program to add to the certainty of any given class. The results show that the dynamic range selection method is well-suited to the task of multi-class classification and is capable of producing classifiers that are more accurate than the other methods tried when comparable training times are allowed. 
The accuracy of the generated classifiers was comparable to alternative approaches over several data sets.", "Abstract One goal of this paper is to empirically explore the relationships between existing object-oriented (OO) coupling, cohesion, and inheritance measures and the probability of fault detection in system classes during testing. In other words, we wish to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. The second goal is to propose an investigation and analysis strategy to make these kind of studies more repeatable and comparable, a problem which is pervasive in the literature on quality measurement. Results show that many of the measures capture similar dimensions in the data set, thus reflecting the fact that many of them are based on similar principles and hypotheses. However, it is shown that by using a subset of measures, accurate models can be built to predict which classes most of the faults are likely to lie in. When predicting fault-prone classes, the best model shows a percentage of correct classifications higher than 80 and finds more than 90 of faulty classes. Besides the size of classes, the frequency of method invocations and the depth of inheritance hierarchies seem to be the main driving factors of fault-proneness." ] }
1901.09858
2913816046
Differential privacy mechanisms that also make reconstruction of the data impossible come at a cost - a decrease in utility. In this paper, we tackle this problem by designing a private data release mechanism that makes reconstruction of the original data impossible and also preserves utility for a wide range of machine learning algorithms. We do so by combining the Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace distribution. While the JL transform can itself provide privacy guarantees blocki2012johnson and make reconstruction impossible, we do not rely on its differential privacy properties and only utilize its ability to make reconstruction impossible. We present novel proofs to show that our mechanism is differentially private under single element changes as well as single row changes to any database. In order to show utility, we prove that our mechanism maintains pairwise distances between points in expectation and also show that its variance is proportional to the dimensionality of the subspace we project the data into. Finally, we experimentally show the utility of our mechanism by deploying it on the task of clustering.
@cite_22 developed a randomization mechanism that combined the JL transform with the Gaussian mechanism @cite_28 to provide non-interactive differential privacy with respect to attribute changes. They showed that their mechanism preserves utility by preserving distances in expectation. A shortcoming of this approach, however, is that the privacy guarantees hold only with respect to attribute changes, not row-level changes, which is the more realistic requirement in practice. Despite that shortcoming, the mechanism is powerful from a privacy perspective: @cite_23 showed that random projection-based multiplicative perturbation techniques make it impossible to recover the exact values of the original data, in addition to hiding its dimensionality. Further, they showed that even if the projection matrix is released, the adversary still cannot recover the exact value of any element of the original data.
{ "cite_N": [ "@cite_28", "@cite_22", "@cite_23" ], "mid": [ "2783547004", "2154086287", "2781521040", "2949485285" ], "abstract": [ "Various paradigms, based on differential privacy, have been proposed to release a privacy-preserving dataset with statistical approximation. Nonetheless, most existing schemes are limited when facing highly correlated attributes, and cannot prevent privacy threats from untrusted servers. In this paper, we propose a novel Copula- based scheme to efficiently synthesize and release multi-dimensional crowdsourced data with local differential privacy. In our scheme, each participant's (or user's) data is locally transformed into bit strings based on a randomized response technique, which guarantees a participant's privacy on the participant (user) side. Then, Copula theory is leveraged to synthesize multi-dimensional crowdsourced data based on univariate marginal distribution and attribute dependence. Univariate marginal distribution is estimated by the Lasso-based regression algorithm from the aggregated privacy- preserving bit strings. Dependencies among attributes are modeled as multivariate Gaussian Copula, of which parameter is estimated by Pearson correlation coefficients. We conduct experiments to validate the effectiveness of our scheme. Our experimental results demonstrate that our scheme is effective for the release of multi-dimensional data with local differential privacy guaranteed to distributed participants.", "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. 
The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism M* -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user u, no matter what its side information and preferences, derives as much utility from M* as from interacting with a differentially private mechanism Mu that is optimally tailored to u. More precisely, for every user u there is an optimal mechanism Mu for it that factors into a user-independent part (the geometric mechanism M*) followed by user-specific post-processing that can be delegated to the user itself. The first part of our proof of this result characterizes the optimal differentially private mechanism for a fixed but arbitrary user in terms of a certain basic feasible solution to a linear program with constraints that encode differential privacy. The second part shows that all of the relevant vertices of this polytope (ranging over all possible users) are derivable from the geometric mechanism via suitable remappings of its range.", "Differential privacy mechanism design has traditionally been tailored for a scalar-valued query function. Although many mechanisms such as the Laplace and Gaussian mechanisms can be extended to a matrix-valued query function by adding i.i.d. noise to each element of the matrix, this method is often suboptimal as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis. 
To address this challenge, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds a matrix-valued noise drawn from a matrix-variate Gaussian distribution, and we rigorously prove that the MVG mechanism preserves (e,δ)-differential privacy. Furthermore, we introduce the concept of directional noise made possible by the design of the MVG mechanism. Directional noise allows the impact of the noise on the utility of the matrix-valued query function to be moderated. Finally, we experimentally demonstrate the performance of our mechanism using three matrix-valued queries on three privacy-sensitive datasets. We find that the MVG mechanism can notably outperforms four previous state-of-the-art approaches, and provides comparable utility to the non-private baseline.", "This paper proves that an \"old dog\", namely -- the classical Johnson-Lindenstrauss transform, \"performs new tricks\" -- it gives a novel way of preserving differential privacy. We show that if we take two databases, @math and @math , such that (i) @math is a rank-1 matrix of bounded norm and (ii) all singular values of @math and @math are sufficiently large, then multiplying either @math or @math with a vector of iid normal Gaussians yields two statistically close distributions in the sense of differential privacy. Furthermore, a small, deterministic and alteration of the input is enough to assert that all singular values of @math are large. We apply the Johnson-Lindenstrauss transform to the task of approximating cut-queries: the number of edges crossing a @math -cut in a graph. We show that the JL transform allows us to that preserves edge differential privacy (where two graphs are neighbors if they differ on a single edge) while adding only @math random noise to any given query (w.h.p). 
Comparing the additive noise of our algorithm to existing algorithms for answering cut-queries in a differentially private manner, we outperform all others on small cuts ( @math ). We also apply our technique to the task of estimating the variance of a given matrix in any given direction. The JL transform allows us to that preserves differential privacy w.r.t bounded changes (each row in the matrix can change by at most a norm-1 vector) while adding random noise of magnitude independent of the size of the matrix (w.h.p). In contrast, existing algorithms introduce an error which depends on the matrix dimensions." ] }
1901.09858
2913816046
Differential privacy mechanisms that also make reconstruction of the data impossible come at a cost - a decrease in utility. In this paper, we tackle this problem by designing a private data release mechanism that makes reconstruction of the original data impossible and also preserves utility for a wide range of machine learning algorithms. We do so by combining the Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace distribution. While the JL transform can itself provide privacy guarantees blocki2012johnson and make reconstruction impossible, we do not rely on its differential privacy properties and only utilize its ability to make reconstruction impossible. We present novel proofs to show that our mechanism is differentially private under single element changes as well as single row changes to any database. In order to show utility, we prove that our mechanism maintains pairwise distances between points in expectation and also show that its variance is proportional to the dimensionality of the subspace we project the data into. Finally, we experimentally show the utility of our mechanism by deploying it on the task of clustering.
@cite_17 showed that the JL transform itself preserves differential privacy and provides utility guarantees in the strict case when only the covariance matrix is released. However, in order to provide privacy guarantees, the data matrix was required to be full rank with eigenvalues above some threshold. Since this is not always feasible in practice, they provided a workaround which perturbed all the singular values of the data matrix. In practice, the magnitude of this perturbation can be orders of magnitude larger than the attribute values, causing general machine learning algorithms to perform extremely poorly. Along similar lines, @cite_26 used multiplicative random projections to preserve privacy for special problems, showing that such projections preserve utility in the case of PCA.
{ "cite_N": [ "@cite_26", "@cite_17" ], "mid": [ "2951231886", "2949485285", "1988351624", "2160553465" ], "abstract": [ "This work studies formal utility and privacy guarantees for a simple multiplicative database transformation, where the data are compressed by a random linear or affine transformation, reducing the number of data records substantially, while preserving the number of original input variables. We provide an analysis framework inspired by a recent concept known as differential privacy (Dwork 06). Our goal is to show that, despite the general difficulty of achieving the differential privacy guarantee, it is possible to publish synthetic data that are useful for a number of common statistical learning applications. This includes high dimensional sparse regression ( 07), principal component analysis (PCA), and other statistical measures ( 06) based on the covariance of the initial data.", "This paper proves that an \"old dog\", namely -- the classical Johnson-Lindenstrauss transform, \"performs new tricks\" -- it gives a novel way of preserving differential privacy. We show that if we take two databases, @math and @math , such that (i) @math is a rank-1 matrix of bounded norm and (ii) all singular values of @math and @math are sufficiently large, then multiplying either @math or @math with a vector of iid normal Gaussians yields two statistically close distributions in the sense of differential privacy. Furthermore, a small, deterministic and public alteration of the input is enough to assert that all singular values of @math are large. We apply the Johnson-Lindenstrauss transform to the task of approximating cut-queries: the number of edges crossing a @math -cut in a graph. We show that the JL transform allows us to publish a sanitized graph that preserves edge differential privacy (where two graphs are neighbors if they differ on a single edge) while adding only @math random noise to any given query (w.h.p). 
Comparing the additive noise of our algorithm to existing algorithms for answering cut-queries in a differentially private manner, we outperform all others on small cuts ( @math ). We also apply our technique to the task of estimating the variance of a given matrix in any given direction. The JL transform allows us to publish a sanitized covariance matrix that preserves differential privacy w.r.t bounded changes (each row in the matrix can change by at most a norm-1 vector) while adding random noise of magnitude independent of the size of the matrix (w.h.p). In contrast, existing algorithms introduce an error which depends on the matrix dimensions.", "We discuss a new robust convergence analysis of the well-known subspace iteration algorithm for computing the dominant singular vectors of a matrix, also known as simultaneous iteration or power method. The result characterizes the convergence behavior of the algorithm when a large amount of noise is introduced after each matrix-vector multiplication. While interesting in its own right, the main motivation comes from the problem of privacy-preserving spectral analysis where noise is added in order to achieve the privacy guarantee known as differential privacy. This result leads to nearly tight worst-case bounds for the problem of computing a differentially private low-rank approximation in the spectral norm. Our results extend to privacy-preserving principal component analysis. We obtain improvements for several variants of differential privacy that have been considered. The running time of our algorithm is nearly linear in the input sparsity leading to strong improvements in running time over previous work. Complementing our worst-case bounds, we show that the error dependence of our algorithm on the matrix dimension can be replaced by a tight dependence on the coherence of the matrix. This parameter is always bounded by the matrix dimension but often much smaller. 
Indeed, the assumption of low coherence is essential in several machine learning and signal processing applications.", "This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications." ] }
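The JL-based mechanisms in the record above all rest on one property: a random Gaussian projection preserves pairwise distances in expectation. A minimal numpy sketch of that property, with illustrative dimensions and toy data (not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 100, 500, 100        # n points, original dimension d, projected dimension k
X = rng.normal(size=(n, d))    # toy data matrix, one point per row

# JL projection with i.i.d. N(0, 1/k) entries: for any vector z,
# E[||z @ R||^2] = ||z||^2, so pairwise distances survive in expectation
# and concentrate around their original values as k grows.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
Y = X @ R

orig = np.linalg.norm(X[0] - X[1])  # distance before projection
proj = np.linalg.norm(Y[0] - Y[1])  # distance after projection
```

With `k = 100` the relative distortion of a single pairwise distance is typically only a few percent, which is the utility property the privacy mechanisms above exploit.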
1901.09735
2914576932
Rapid progress in genomics has enabled a thriving market for 'direct-to-consumer' genetic testing, whereby people have access to their genetic information without the involvement of a healthcare provider. Companies like 23andMe and AncestryDNA, which provide affordable health, genealogy, and ancestry reports, have already tested tens of millions of customers. At the same time, alas, far-right groups have also taken an interest in genetic testing, using them to attack minorities and prove their 'genetic purity.' However, the relation between genetic testing and online hate has not really been studied by the scientific community. To address this gap, we present a measurement study shedding light on how genetic testing is discussed on Web communities in Reddit and 4chan. We collect 1.3M comments posted over 27 months using a set of 280 keywords related to genetic testing. We then use Latent Dirichlet Allocation, Google's Perspective API, Perceptual Hashing, and word embeddings to identify trends, themes, and topics of discussion. Our analysis shows that genetic testing is discussed frequently on Reddit and 4chan, and often includes highly toxic language expressed through hateful, racist, and misogynistic comments. In particular, on 4chan's politically incorrect board (/pol/), content from genetic testing conversations involves several alt-right personalities and openly antisemitic memes. Finally, we find that genetic testing appears in a few unexpected contexts, and that users seem to build groups ranging from technology enthusiasts to communities using it to promote fringe political views.
@cite_3 conduct a meta-analysis of 53 studies involving 47K people around perceptions of genetic privacy, highlighting how survey questions are often phrased poorly, thus leading to possible misinterpretations of the results. They also show that not enough attention was paid to influential factors, e.g., participants' sociocultural backgrounds. Overall, research in this area mostly relies on qualitative studies examining the societal effects of genetic testing @cite_65 @cite_46 @cite_30 @cite_33 @cite_35 and lacks quantitative large-scale measurements. To the best of our knowledge, ours is the first large-scale, quantitative measurement study, using Reddit and 4chan. We examine trends, themes, and topics of discussion around genetic testing, and explore how communities related to the alt-right exploit genetic testing for sinister purposes.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_65", "@cite_3", "@cite_46" ], "mid": [ "2898683213", "1588965447", "2330546669", "1977417830" ], "abstract": [ "Concerns about genetic privacy affect individuals’ willingness to accept genetic testing in clinical care and to participate in genomics research. To learn what is already known about these views, we conducted a systematic review, which ultimately analyzed 53 studies involving the perspectives of 47,974 participants on real or hypothetical privacy issues related to human genetic data. Bibliographic databases included MEDLINE, Web of Knowledge, and Sociological Abstracts. Three investigators independently screened studies against predetermined criteria and assessed risk of bias. The picture of genetic privacy that emerges from this systematic literature review is complex and riddled with gaps. When asked specifically “are you worried about genetic privacy,” the general public, patients, and professionals frequently said yes. In many cases, however, that question was posed poorly or only in the most general terms. While many participants expressed concern that genomic and medical information would be revealed to others, respondents frequently seemed to conflate privacy, confidentiality, control, and security. People varied widely in how much control they wanted over the use of data. They were more concerned about use by employers, insurers, and the government than they were about researchers and commercial entities. In addition, people are often willing to give up some privacy to obtain other goods. Importantly, little attention was paid to understanding the factors–sociocultural, relational, and media—that influence people’s opinions and decisions. Future investigations should explore in greater depth which concerns about genetic privacy are most salient to people and the social forces and contexts that influence those perceptions. 
It is also critical to identify the social practices that will make the collection and use of these data more trustworthy for participants as well as to identify the circumstances that lead people to set aside worries and decide to participate in research.", "To describe consumers' perceptions of genetic counseling services in the context of direct-to-consumer personal genomic testing is the purpose of this research. Utilizing data from the Scripps Genomic Health Initiative, we assessed direct-to-consumer genomic test consumers' utilization and perceptions of genetic counseling services. At long-term follow-up, approximately 14 months post-testing, participants were asked to respond to several items gauging their interactions, if any, with a Navigenics genetic counselor, and their perceptions of those interactions. Out of 1325 individuals who completed long-term follow-up, 187 (14.1 ) indicated that they had spoken with a genetic counselor. The most commonly given reason for not utilizing the counseling service was a lack of need due to the perception of already understanding one's results (55.6 ). The most common reasons for utilizing the service included wanting to take advantage of a free service (43.9 ) and wanting more information on risk calculations (42.2 ). Among those who utilized the service, a large fraction reported that counseling improved their understanding of their results (54.5 ) and genetics in general (43.9 ). A relatively small proportion of participants utilized genetic counseling after direct-to-consumer personal genomic testing. Among those individuals who did utilize the service, however, a large fraction perceived it to be informative, and thus presumably beneficial.", "The disclosure of individual genetic results has generated an ongoing debate about which rules should be followed. We aimed to identify factors related to research participants' preferences about learning the results of genomic studies using their donated tissue samples. 
We conducted a cross-sectional survey of 279 patients from the United States and Spain who had volunteered to donate a sample for genomic research. Our results show that 48 of research participants would like to be informed about all individual results from future genomic studies using their donated tissue, especially those from the U.S. (71.4 ) and those believing that genetic information poses special risks (69.7 ). In addition, 16 of research participants considered genetic information to be riskier than other types of personal medical data. In conclusion, our study demonstrates that a high proportion of participants prefer to be informed about their individual results and that there is a higher preference among those research subjec...", "Purpose: The impact of laws restricting health insurers' use of genetic information has been assessed from two main vantage points: (1) whether they reduce the extent of genetic discrimination and (2) whether they reduce the fear of discrimination and the resulting deterrence to undergo genetic testing. A previous report from this study concluded that there are almost no well-documented cases of health insurers either asking for or using presymptomatic genetic test results in their underwriting decisions, either before or after these laws, or in states with or without these laws. This report evaluates the perceptions and the resulting behavior by patients and clinicians. Methods: A comparative case study analysis was performed in seven states with different laws respecting health insurers' use of genetic information (no law, new prohibition, mature prohibition). Semistructured interviews were conducted in person with five patient advocates and with 30 experienced genetic counselors or medical geneticists, most of whom deal with adult-onset disorders. Also, multiple informed consent forms and patient information brochures were collected and analyzed using qualitative methods. 
Results: Patients' and clinicians' fear of genetic discrimination greatly exceeds reality, at least for health insurance. It is uncertain how much this fear actually deters genetic testing. The greatest deterrence is to those who do not want to submit the costs of testing for reimbursement and who cannot afford to pay for testing. There appears to be little deterrence for tests that are more easily affordable or when the need for the information is much greater. Fear of discrimination plays virtually no role in testing decisions in pediatric or prenatal situations, but is significant for adult-onset genetic conditions. Conclusion: Existing laws have not greatly reduced the fear of discrimination. This may be due, in part, to clinicians' lack of confidence that these laws can prevent discrimination until there are test cases of actual enforcement. Ironically, there may be so little actual discrimination that it may not be possible to initiate good test cases." ] }
1901.09697
2966439069
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
As machine learning applications become more and more common, various vulnerabilities and attacks on ML models are being discovered (for example, model inversion @cite_35 and membership inference @cite_31 ), raising the need for matching defences.
{ "cite_N": [ "@cite_35", "@cite_31" ], "mid": [ "2051267297", "2461943168", "2969695741", "2802314446" ], "abstract": [ "Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by , adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown. We develop a new class of model inversion attack that exploits confidence values revealed along with predictions. Our new attacks are applicable in a variety of settings, and we explore two in depth: decision trees for lifestyle surveys as used on machine-learning-as-a-service systems and neural networks for facial recognition. In both cases confidence values are revealed to those with the ability to make prediction queries to models. We experimentally show attacks that are able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and, in the other context, show how to recover recognizable images of people's faces given only their name and access to the ML model. We also initiate experimental exploration of natural countermeasures, investigating a privacy-aware decision tree training algorithm that is a simple variant of CART learning, as well as revealing only rounded confidence values. The lesson that emerges is that one can avoid these kinds of MI attacks with negligible degradation to utility.", "Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. 
ML-as-a-service (\"predictive analytics\") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., \"steal\") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.", "Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. 
Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.", "As machine learning (ML) applications become increasingly prevalent, protecting the confidentiality of ML models becomes paramount for two reasons: (a) models may constitute a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can be used to evade classification by the original model. One way to protect model confidentiality is to limit access to the model only via well-defined prediction APIs. This is common not only in machine-learning-as-a-service (MLaaS) settings where the model is remote, but also in scenarios like autonomous driving where the model is local but direct access to it is protected, for example, by hardware security mechanisms. Nevertheless, prediction APIs still leak information so that it is possible to mount model extraction attacks by an adversary who repeatedly queries the model via the prediction API. In this paper, we describe a new model extraction attack by combining a novel approach for generating synthetic queries together with recent advances in training deep neural networks. This attack outperforms state-of-the-art model extraction techniques in terms of transferability of targeted adversarial examples generated using the extracted model (+15-30 percentage points, pp), and in prediction accuracy (+15-20 pp) on two datasets. 
We then propose the first generic approach to effectively detect model extraction attacks: PRADA. It analyzes how the distribution of consecutive queries to the model evolves over time and raises an alarm when there are abrupt deviations. We show that PRADA can detect all known model extraction attacks with a 100% success rate and no false positives. PRADA is particularly suited for detecting extraction attacks against local models." ] }
1901.09697
2966439069
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
Differential privacy @cite_16 @cite_36 is one of the strongest privacy standards that can be employed to protect ML models from these and other attacks. Since pure DP is hard to achieve in many realistic complex learning tasks, the notion of approximate @math -DP is used across the board in machine learning. It is often achieved by applying the Gaussian noise mechanism @cite_33 . Lately, several alternative notions and relaxations of DP have been proposed, such as concentrated DP (CDP) @cite_20 @cite_23 @cite_29 and Rényi DP (RDP) @cite_24 , allowing for easier privacy analysis.
{ "cite_N": [ "@cite_33", "@cite_36", "@cite_29", "@cite_24", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "2099259603", "2949611647", "2962958653", "2071511328" ], "abstract": [ "Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. But despite much recent work, optimal strategies for answering a collection of related queries are not known. We propose the matrix mechanism, a new algorithm for answering a workload of predicate counting queries. Given a workload, the mechanism requests answers to a different set of queries, called a query strategy, which are answered using the standard Laplace mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two stage process can result in a more complex correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show this problem can be formulated as a rank-constrained semidefinite program. Finally, we analyze two seemingly distinct techniques, whose similar behavior is explained by viewing them as instances of the matrix mechanism.", "We compare the sample complexity of private learning [ 2008] and sanitization [ 2008] under pure @math -differential privacy [ TCC 2006] and approximate @math -differential privacy [ Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generalizes some of our considered tasks. We observe that a quasi-concave promise problem can be privately approximated using a solution to a smaller instance of a quasi-concave promise problem. 
This allows us to construct an efficient recursive algorithm solving such problems privately. Specifically, we construct private learners for point functions, threshold functions, and axis-aligned rectangles in high dimension. Similarly, we construct sanitizers for point functions and threshold functions. We also examine the sample complexity of label-private learners, a relaxation of private learning where the learner is required to only protect the privacy of the labels in the sample. We show that the VC dimension completely characterizes the sample complexity of such learners, that is, the sample complexity of learning with label privacy is equal (up to constants) to learning without privacy.", "In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy introduced by [Lecture Notes in Comput. Sci. 3876, Springer, New York, 2006, pp. 265--284] that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private probably approximately correct (PAC) and agnostic learning was studied in a number of prior works starting with [SIAM J. Comput., 40 (2011), pp. 793--826]. However, a number of basic questions remain open [A. Beimel, S. P. Kasiviswanathan, and K. Nissim, Lecture Notes in Comput. Sci. 5978, Springer, New York, 2006, pp. 437--454; K. Chaudhuri and D. Hsu, Proceedings of Conference in Learning Theory, 2011, pp. 155--186; A. Beimel, K. Nissim, and U. Stemmer, Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, 2013, pp. 9...", "Differential privacy has emerged as one of the most promising privacy models for private data release. It can be used to release different types of data, and, in particular, histograms, which provide useful summaries of a dataset. 
Several differentially private histogram releasing schemes have been proposed recently. However, most of them directly add noise to the histogram counts, resulting in undesirable accuracy. In this paper, we propose two sanitization techniques that exploit the inherent redundancy of real-life datasets in order to boost the accuracy of histograms. They lossily compress the data and sanitize the compressed data. Our first scheme is an optimization of the Fourier Perturbation Algorithm (FPA) presented in RN10 . It improves the accuracy of the initial FPA by a factor of 10. The other scheme relies on clustering and exploits the redundancy between bins. Our extensive experimental evaluation over various real-life and synthetic datasets demonstrates that our techniques preserve very accurate distributions and considerably improve the accuracy of range queries over attributed histograms." ] }
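The Gaussian noise mechanism referenced in the record above calibrates its noise scale to the query's L2 sensitivity. A minimal sketch, assuming the classical calibration σ = sqrt(2 ln(1.25/δ)) · Δ₂ / ε (valid for ε < 1); the function name and example numbers are illustrative, not from the cited papers:

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-DP via additive Gaussian noise.

    Uses the classical calibration sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon,
    which satisfies (epsilon, delta)-DP for epsilon < 1.
    """
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    noisy = np.asarray(value, dtype=float) + rng.normal(scale=sigma, size=np.shape(value))
    return noisy, sigma

# Example: a mean over n = 100 records bounded in [0, 1] has L2 sensitivity 1/n.
noisy_mean, sigma = gaussian_mechanism(0.7, l2_sensitivity=0.01,
                                       epsilon=0.5, delta=1e-5)
```

Note how the noise scale grows as ε shrinks; this is exactly the utility cost that relaxations such as CDP and RDP, and the Bayesian DP accounting proposed in the paper above, aim to reduce.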
1901.09697
2966439069
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
Privacy analysis in the context of differentially private ML is typically done after the fact: it consists in finding parameters @math (or bounds on them) that apply to the entire learning process, as opposed to fixing @math beforehand and calibrating noise to satisfy it. Because such analysis keeps track of and accumulates a quantity representing the privacy loss during training, it is referred to as privacy accounting. The simplest accounting can be done by using the basic and advanced composition theorems @cite_33 . However, bounds on @math obtained this way are prohibitively loose: using basic composition for big neural networks, @math can be on the order of millions, so the DP guarantee loses any meaning.
{ "cite_N": [ "@cite_33" ], "mid": [ "1603362050", "2962958653", "2949611647", "2154086287" ], "abstract": [ "In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy introduced by (2006) that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private PAC and agnostic learning was studied in a number of prior works starting with (, 2008) but a number of basic questions still remain open, most notably whether learning with privacy requires more samples than learning without privacy. We show that the sample complexity of learning with (pure) differential privacy can be arbitrarily higher than the sample complexity of learning without the privacy constraint or the sample complexity of learning with approximate differential privacy. Our second contribution and the main tool is an equivalence between the sample complexity of (pure) differentially private learning of a concept class @math (or @math ) and the randomized one-way communication complexity of the evaluation problem for concepts from @math . Using this equivalence we prove the following bounds: 1. @math , where @math is the Littlestone's (1987) dimension characterizing the number of mistakes in the online-mistake-bound learning model. Known bounds on @math then imply that @math can be much higher than the VC-dimension of @math . 2. For any @math , there exists a class @math such that @math but @math . 3. For any @math , there exists a class @math such that the sample complexity of (pure) @math -differentially private PAC learning is @math but the sample complexity of the relaxed @math -differentially private PAC learning is @math . This resolves an open problem of (2013b).", "In this work we analyze the sample complexity of classification by differentially private algorithms. 
Differential privacy is a strong and well-studied notion of privacy introduced by [Lecture Notes in Comput. Sci. 3876, Springer, New York, 2006, pp. 265--284] that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private probably approximately correct (PAC) and agnostic learning was studied in a number of prior works starting with [SIAM J. Comput., 40 (2011), pp. 793--826]. However, a number of basic questions remain open [A. Beimel, S. P. Kasiviswanathan, and K. Nissim, Lecture Notes in Comput. Sci. 5978, Springer, New York, 2006, pp. 437--454; K. Chaudhuri and D. Hsu, Proceedings of Conference in Learning Theory, 2011, pp. 155--186; A. Beimel, K. Nissim, and U. Stemmer, Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, 2013, pp. 9...", "We compare the sample complexity of private learning [ 2008] and sanitization [ 2008] under pure @math -differential privacy [ TCC 2006] and approximate @math -differential privacy [ Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generalizes some of our considered tasks. We observe that a quasi-concave promise problem can be privately approximated using a solution to a smaller instance of a quasi-concave promise problem. This allows us to construct an efficient recursive algorithm solving such problems privately. Specifically, we construct private learners for point functions, threshold functions, and axis-aligned rectangles in high dimension. Similarly, we construct sanitizers for point functions and threshold functions. 
We also examine the sample complexity of label-private learners, a relaxation of private learning where the learner is required to only protect the privacy of the labels in the sample. We show that the VC dimension completely characterizes the sample complexity of such learners, that is, the sample complexity of learning with label privacy is equal (up to constants) to learning without privacy.", "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism M* -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user u, no matter what its side information and preferences, derives as much utility from M* as from interacting with a differentially private mechanism Mu that is optimally tailored to u. 
More precisely, for every user u there is an optimal mechanism Mu for it that factors into a user-independent part (the geometric mechanism M*) followed by user-specific post-processing that can be delegated to the user itself. The first part of our proof of this result characterizes the optimal differentially private mechanism for a fixed but arbitrary user in terms of a certain basic feasible solution to a linear program with constraints that encode differential privacy. The second part shows that all of the relevant vertices of this polytope (ranging over all possible users) are derivable from the geometric mechanism via suitable remappings of its range." ] }
1901.09697
2966439069
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
Apart from sharp bounds, the moments accountant is attractive because it operates within the classical notion of @math -DP. Alternative notions of DP also provide tight composition theorems, along with some other advantages, but to the best of our knowledge they are not broadly used in practice compared to traditional DP (although there are some examples @cite_1 ). One possible reason is interpretability: the parameters of @math -RDP or @math -CDP are hard to interpret and hard to explain to a person without significant background in the area. At the same time, @math and @math can be understood intuitively, albeit with some simplifications. Our goal in this work is to remain within the well-understood concept of @math -DP and to operate in a simple way similar to the moments accountant, while improving the sharpness of its bounds and extending it to a broader range of privacy mechanisms in the machine learning context.
{ "cite_N": [ "@cite_1" ], "mid": [ "2950890626", "2007385254", "2109910161", "2952621790" ], "abstract": [ "Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with @math variables and @math clauses, there is a value of @math beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when @math ). Intuitively, strong refutation should become easier as the clause density @math grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as @math -SAT and @math -XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, @math , and the clause density at which instances become unsatisfiable with high probability, @math . In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random @math -XOR instances with clause density @math in time @math or in @math rounds of the sum-of-squares hierarchy, for any @math and any integer @math . Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at @math , and brute-force refutation at the satisfiability threshold when @math . We also leverage our @math -XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schonebeck, up to logarithmic factors. 
Additionally, we extend our techniques to give new results for certifying upper bounds on the injective tensor norm of random tensors.", "We initiate a study of when the value of mathematical relaxations such as linear and semi-definite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let C be a family of CSPs such as 3SAT, Max-Cut, etc., and let Π be a mathematical program that is a relaxation for C, in the sense that for every instance P ∈ C, Π(P) is a number in [0, 1] upper bounding the maximum fraction of satisfiable constraints of P. Loosely speaking, we say that subsampling holds for C and Π if for every sufficiently dense instance P ∈ C and every e > 0, if we let P' be the instance obtained by restricting P to a sufficiently large constant number of variables, then Π(P') ∈ (1 ± e)Π(P). We say that weak subsampling holds if the above guarantee is replaced with Π(P') = 1 − θ(γ) whenever Π(P) = 1 − γ, where θ hides only absolute constants. We obtain both positive and negative results, showing that: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the semi-definite program considered by Raghavendra (2008), who showed it gives an optimal approximation factor for every constraint-satisfaction problem under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of unique games type. 3. There are non-unique CSPs for which even weak subsampling fails for the above tighter semi-definite programs. Also there are unique CSPs for which (even weak) subsampling fails for the Sherali-Adams linear programming hierarchy. 
As a corollary of our weak subsampling for strong semi-definite programs, we obtain a polynomial-time algorithm to certify that random geometric graphs (of the type considered by Feige and Schechtman, 2002) of max-cut value 1 − γ have a cut value at most 1 − γ 10. More generally, our results give an approach to obtaining average-case algorithms for CSPs using semi-definite programming hierarchies.", "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. 
We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "Let @math be a nontrivial @math -ary predicate. Consider a random instance of the constraint satisfaction problem @math on @math variables with @math constraints, each being @math applied to @math randomly chosen literals. Provided the constraint density satisfies @math , such an instance is unsatisfiable with high probability. The problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate @math supports a @math - probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree @math (which runs in time @math ) refute a random instance of @math . In particular, the polynomial-time SOS algorithm requires @math constraints to refute random instances of CSP @math when @math supports a @math -wise uniform distribution on its satisfying assignments. Together with recent work of [LRS15], our result also implies that polynomial-size semidefinite programming relaxation for refutation requires at least @math constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. 
For every constraint predicate @math , they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of [AOW15] and [RRS16], this full three-way tradeoff is , up to lower-order factors." ] }
1901.09697
2966439069
We propose Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, that provides sharper privacy guarantees in difficult scenarios, such as deep learning. We also derive a general privacy accounting method for iterative learning algorithms under Bayesian differential privacy and show that it is a generalisation of the well-known moments accountant. Our experiments demonstrate significant advantage over the state-of-the-art differential privacy bounds for deep learning models on classic supervised learning tasks, bringing the privacy budget from 8 down to 0.5 in some cases. Lower amounts of injected noise also benefit the model accuracy and the speed of learning. Additionally, we demonstrate applicability of Bayesian differential privacy to variational inference and achieve the state-of-the-art privacy-accuracy trade-off.
We evaluate our method on two popular classes of learning algorithms: deep neural networks and variational inference (VI). Privacy-preserving deep learning is now extensively studied and is frequently used in combination with the moments accountant @cite_27 @cite_25 @cite_28 , which makes it a perfect setting for comparison. Bayesian inference methods, on the other hand, receive less attention from the private learning community. There are, nonetheless, very interesting results suggesting one can obtain DP guarantees "for free" (without adding noise) in some methods, such as posterior sampling @cite_8 @cite_2 and stochastic gradient Monte Carlo @cite_7 . A differentially private version of variational inference, obtained by applying noise to the gradients and using the moments accountant, has also been proposed @cite_3 . We show that with our accountant it is possible to build VI that is both highly accurate and differentially private by sampling from the variational distribution.
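The gradient perturbation underlying these accountant-based methods can be illustrated with a minimal clip-and-perturb step. This is a sketch in the spirit of DP-SGD-style updates; the function name and the zero-norm guard are our own, not from the cited works.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-example gradient to L2 norm clip_norm, sum them,
    # then add Gaussian noise scaled to the clipping bound -- the
    # mechanism whose composition a privacy accountant tracks.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noisy = total + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=total.shape)
    return noisy / len(per_example_grads)
```

With `noise_multiplier = 0` the step reduces to averaging the clipped gradients, which makes the clipping behaviour easy to check in isolation.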
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_28", "@cite_3", "@cite_27", "@cite_2", "@cite_25" ], "mid": [ "2547253982", "1886087434", "2539938672", "2787512446" ], "abstract": [ "We provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the variational posterior distributions simply by perturbing the expected sufficient statistics of the complete-data likelihood. For widely used non-CE models with binomial likelihoods, we exploit the P 'o lya-Gamma data augmentation scheme to bring such models into the CE family, such that inferences in the modified model resemble the private variational Bayes algorithm as closely as possible. The iterative nature of variational Bayes presents a further challenge since iterations increase the amount of noise needed. We overcome this by combining: (1) a relaxed notion of differential privacy, called concentrated differential privacy, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models including latent Dirichlet allocation, Bayesian logistic regression, and sigmoid belief networks, evaluated on real-world datasets.", "We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to \"differential privacy\", a cryptographic approach to protect individual-level privacy while permitting database-level utility. 
Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private \"for free\"; and this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that use stochastic gradient for Hybrid Monte Carlo (HMC) sampling also preserve differentially privacy with minor or no modifications of the algorithmic procedure at all, these observations lead to an \"anytime\" algorithm for Bayesian learning under privacy constraint. We demonstrate that it performs much better than the state-of-the-art differential private methods on synthetic and real datasets.", "Many machine learning applications are based on data collected from people, such as their tastes and behaviour as well as biological traits and genetic data. Regardless of how important the application might be, one has to make sure individuals' identities or the privacy of the data are not compromised in the analysis. Differential privacy constitutes a powerful framework that prevents breaching of data subject privacy from the output of a computation. Differentially private versions of many important Bayesian inference methods have been proposed, but there is a lack of an efficient unified approach applicable to arbitrary models. In this contribution, we propose a differentially private variational inference method with a very wide applicability. It is built on top of doubly stochastic variational inference, a recent advance which provides a variational solution to a large class of models. We add differential privacy into doubly stochastic variational inference by clipping and perturbing the gradients. The algorithm is made more efficient through privacy amplification from subsampling. 
We demonstrate the method can reach an accuracy close to non-private level under reasonably strong privacy guarantees, clearly improving over previous sampling-based alternatives especially in the strong privacy regime.", "We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research.We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements." ] }
1901.09681
2913663242
We study the problem of identifying different behaviors occurring in different parts of a large heterogeneous network. We zoom in to the network using lenses of different sizes to capture the local structure of the network. These network signatures are then weighted to provide a set of predicted labels for every node. We achieve a peak accuracy of @math (random= @math ) on two networks with @math and @math nodes each. Further, we perform better than random even when the given node is connected to up to 5 different types of networks. Finally, we perform this analysis on homogeneous networks and show that highly structured networks have high homogeneity.
The idea of using the image embedding of the adjacency matrix as a feature was first introduced in @cite_6 . Based on this idea, the authors in @cite_7 showed with great success that the parent networks of tiny subgraphs (as small as 8 nodes) can be identified. They also used Caffe @cite_8 to show that the structured image embedding features can be used for classification in a transfer learning setting. In this work, we use this idea to create a network signature that can be used on heterogeneous networks to identify the different behaviors exhibited in different parts of a network.
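A minimal sketch of such an image embedding of an adjacency matrix is shown below. The fixed output size and nearest-neighbour resampling are illustrative assumptions, not the exact procedure of the cited works.

```python
import numpy as np

def adjacency_image(edges, n, size=32):
    # Build the n x n adjacency matrix of an undirected subgraph and
    # resample it to a fixed size x size grayscale "image" via
    # nearest-neighbour indexing, so subgraphs of different sizes map
    # to a common input shape for a CNN.
    A = np.zeros((n, n), dtype=np.float32)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    idx = (np.arange(size) * n) // size  # nearest-neighbour upsampling
    return A[np.ix_(idx, idx)]
```

The resulting arrays can be fed to any image classifier; the node ordering chosen before building the matrix strongly affects the image, which is part of what the cited embedding methods address.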
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_8" ], "mid": [ "2786815098", "2883311563", "2554819193", "2768104274" ], "abstract": [ "We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from multiple datasets are that 1. deep learning using structured image features performs the best compared to graph kernel and classical features based methods; and, 2. pure transfer learning works effectively with minimum interference from the user and is robust against small data.", "Matching images and sentences demands a fine understanding of both modalities. In this paper, we propose a new system to discriminatively embed the image and text to a shared visual-textual space. In this field, most existing works apply the ranking loss to pull the positive image text pairs close and push the negative pairs apart from each other. However, directly deploying the ranking loss is hard for network learning, since it starts from the two heterogeneous features to build inter-modal relationship. To address this problem, we propose the instance loss which explicitly considers the intra-modal data distribution. It is based on an unsupervised assumption that each image text group can be viewed as a class. So the network can learn the fine granularity from every image text group. The experiment shows that the instance loss offers better weight initialization for the ranking loss, so that more discriminative embeddings can be learned. Besides, existing works usually apply the off-the-shelf features, i.e., word2vec and fixed visual feature. 
So in a minor contribution, this paper constructs an end-to-end dual-path convolutional network to learn the image and text representations. End-to-end learning allows the system to directly learn from the data and fully utilize the supervision. On two generic retrieval datasets (Flickr30k and MSCOCO), experiments demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language based person retrieval, we improve the state of the art by a large margin. The code has been made publicly available.", "We study a natural problem: Given a small piece of a large parent network, is it possible to identify the parent network? We approach this problem from two perspectives. First, using several \"sophisticated\" or \"classical\" network features that have been developed over decades of social network study. These features measure aggregate properties of the network and have been found to take on distinctive values for different types of network, at the large scale. By using these classical features within a standard machine learning framework, we show that one can identify large parent networks from small (even 8-node) subgraphs. Second, we present a novel adjacency matrix embedding technique which converts the small piece of the network into an image and, within a deep learning framework, we are able to obtain prediction accuracies upward of 80 , which is comparable to or slightly better than the performance from classical features. Our approach provides a new tool for topology-based prediction which may be of interest in other network settings. Our approach is plug and play, and can be used by non-domain experts. It is an appealing alternative to the often arduous task of creating domain specific features using domain expertise.", "Learning low-dimensional representations of networks has proved effective in a variety of tasks such as node classification, link prediction and network visualization. 
Existing methods can effectively encode different structural properties into the representations, such as neighborhood connectivity patterns, global structural role similarities and other high-order proximities. However, except for objectives to capture network structural properties, most of them suffer from lack of additional constraints for enhancing the robustness of representations. In this paper, we aim to exploit the strengths of generative adversarial networks in capturing latent features, and investigate its contribution in learning stable and robust graph representations. Specifically, we propose an Adversarial Network Embedding (ANE) framework, which leverages the adversarial learning principle to regularize the representation learning. It consists of two components, i.e., a structure preserving component and an adversarial learning component. The former component aims to capture network structural properties, while the latter contributes to learning robust representations by matching the posterior distribution of the latent representations to given priors. As shown by the empirical results, our method is competitive with or superior to state-of-the-art approaches on benchmark network embedding tasks." ] }
1901.09608
2913504012
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
Early work on autonomous sensing of physical phenomena involved ground-based mobile robots @cite_26 @cite_30 @cite_40 . More recently, with the emergence of reasonably robust Unmanned Aerial Vehicle (UAV) platforms, often referred to as drones, these have been used as sensing platforms, with benefits in terms of speed, manoeuvrability, and the ability to deal with hostile terrain unobstructed by objects on the ground @cite_10 @cite_25 . UAVs bring their own challenges, such as reduced on-board power and the difficulty of finding sensors that fit within the form factor. Sensing technology has also continued to develop, e.g., making it possible to mount spectrometers on UAVs @cite_19 . We conduct experiments with a commercial off-the-shelf CO @math sensor, but note that the computational methods presented here are sensor-agnostic, assuming only that the sensor obtains point measurements from a scalar field.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_19", "@cite_40", "@cite_10", "@cite_25" ], "mid": [ "2774492100", "2582222835", "1573725801", "2049492608" ], "abstract": [ "Mobile wireless sensor networks have been extensively deployed for enhancing environmental monitoring and surveillance. The availability of low-cost mobile robots equipped with a variety of sensors makes them promising in target coverage tasks. They are particularly suitable where quick, inexpensive, or nonlasting visual sensing solutions are required. In this paper, we consider the problem of low complexity target tracking to cover and follow moving targets using flying robots. We tackle this problem by clustering targets while estimating the camera location and orientation for each cluster separately through a cover-set coverage method. We also leverage partial knowledge of target mobility to enhance the efficiency of our proposed algorithms. Three computationally efficient approaches are developed: predictive fuzzy , predictive incremental fuzzy , and local incremental fuzzy . The objective is to find a compromise among coverage efficiency, traveled distance, number of drones required, and complexity. The targets move according to one of the following three possible mobility patterns: random waypoint, Manhattan grid, and reference point group mobility patterns. The feasibility of our algorithms and their performance are also tested on a real-world indoor testbed called drone-be-gone , using Parrot AR.Drone quadcopters. The deployment confirms the results obtained with simulations and highlights the suitability of the proposed solutions for real-time applications.", "During last decade the scientific research on Unmanned Aerial Vehicless (UAVs) increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. 
To this extent, visual sensing techniques have been integrated in the control pipeline of the UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review on vision based applications for UAVs focusing mainly on current developments and trends. These applications are sorted in different categories according to the research topics among various research groups. More specifically vision based position-attitude control, pose estimation and mapping, obstacle detection as well as target tracking are the identified components towards autonomous agents. Aerial platforms could reach greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusion multiple sensors is highlighted, while an overview on the challenges addressed and future trends in autonomous agent development will be also provided.", "We develop a computationally efficient control policy for active perception that incorporates explicit models of sensing and mobility to build 3D maps with ground and aerial robots. Like previous work, our policy maximizes an information-theoretic objective function between the discrete occupancy belief distribution (e.g., voxel grid) and future measurements that can be made by mobile sensors. However, our work is unique in three ways. First, we show that by using Cauchy-Schwarz Quadratic Mutual Information (CSQMI), we get significant gains in efficiency. Second, while most previous methods adopt a myopic, gradient-following approach that yields poor convergence properties, our algorithm searches over a set of paths and is less susceptible to local minima. In doing so, we explicitly incorporate models of sensors, and model the dependence (and independence) of measurements over multiple time steps in a path. Third, because we consider models of sensing and mobility, our method naturally applies to both ground and aerial vehicles. 
The paper describes the basic models, the problem formulation and the algorithm, and demonstrates applications via simulation and experimentation.", "Remote sensing by Unmanned Aerial Vehicles (UAVs) is changing the way agriculture operates by increasing the spatial-temporal resolution of data collection. Micro-UAVs have the potential to further improve and enrich the data collected by operating close to the crops, enabling the collection of higher spatio-temporal resolution data. In this paper, we present a UAV-mounted measurement system that utilizes a laser scanner to compute crop heights, a critical indicator of crop health. The system filters, transforms, and analyzes the cluttered range data in real-time to determine the distance to the ground and to the top of the crops. We assess the system in an indoor testbed and in a corn field. Our findings indicate that despite the dense canopy and highly variable sensor readings, we can precisely fly over crops and measure its height to within 5cm of measurements gathered using current measurement technology." ] }
1901.09608
2913504012
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
Models provide numerous advantages in machine learning @cite_11, enabling inference from limited data, and in planning @cite_29, enabling counterfactual reasoning @cite_12 and guided search. However, defining the structure of a model so that inference is efficient, while remaining faithful to complex arrangements of physical causes, tends to be non-trivial.
{ "cite_N": [ "@cite_29", "@cite_12", "@cite_11" ], "mid": [ "2420245003", "2282821441", "2802314446", "2951501516" ], "abstract": [ "Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. 
In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "As machine learning (ML) applications become increasingly prevalent, protecting the confidentiality of ML models becomes paramount for two reasons: (a) models may constitute a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can be used to evade classification by the original model. One way to protect model confidentiality is to limit access to the model only via well-defined prediction APIs. This is common not only in machine-learning-as-a-service (MLaaS) settings where the model is remote, but also in scenarios like autonomous driving where the model is local but direct access to it is protected, for example, by hardware security mechanisms. Nevertheless, prediction APIs still leak information so that it is possible to mount model extraction attacks by an adversary who repeatedly queries the model via the prediction API. In this paper, we describe a new model extraction attack by combining a novel approach for generating synthetic queries together with recent advances in training deep neural networks. 
This attack outperforms state-of-the-art model extraction techniques in terms of transferability of targeted adversarial examples generated using the extracted model (+15-30 percentage points, pp), and in prediction accuracy (+15-20 pp) on two datasets. We then propose the first generic approach to effectively detect model extraction attacks: PRADA. It analyzes how the distribution of consecutive queries to the model evolves over time and raises an alarm when there are abrupt deviations. We show that PRADA can detect all known model extraction attacks with a 100 success rate and no false positives. PRADA is particularly suited for detecting extraction attacks against local models.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). 
We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted." ] }
1901.09608
2913504012
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
The phenomena we consider in this paper involve gas flows. There is a long tradition of modelling such flows, including efficient computational methods aimed at graphics and animation applications @cite_42. In the engineering community, the development of efficient solvers is also driven by the need to simulate phenomena such as fluid-structure interaction, yielding fast, approximate solvers based on position-based dynamics @cite_18 @cite_31. Simulation frameworks have also been developed specifically to ease the development and testing of gas distribution mapping (GDM) and gas source localisation (GSL) algorithms @cite_21.
{ "cite_N": [ "@cite_18", "@cite_42", "@cite_31", "@cite_21" ], "mid": [ "2515505748", "2041166905", "24392662", "2118972993" ], "abstract": [ "In aerodynamics related design, analysis and optimization problems, flow fields are simulated using computational fluid dynamics (CFD) solvers. However, CFD simulation is usually a computationally expensive, memory demanding and time consuming iterative process. These drawbacks of CFD limit opportunities for design space exploration and forbid interactive design. We propose a general and flexible approximation model for real-time prediction of non-uniform steady laminar flow in a 2D or 3D domain based on convolutional neural networks (CNNs). We explored alternatives for the geometry representation and the network architecture of CNNs. We show that convolutional neural networks can estimate the velocity field two orders of magnitude faster than a GPU-accelerated CFD solver and four orders of magnitude faster than a CPU-based CFD solver at a cost of a low error rate. This approach can provide immediate feedback for real-time design iterations at the early stage of design. Compared with existing approximation models in the aerodynamics domain, CNNs enable an efficient estimation for the entire velocity field. Furthermore, designers and engineers can directly apply the CNN approximation model in their design space exploration algorithms without training extra lower-dimensional surrogate models.", "This study derives geometric, variational discretization of continuum theories arising in fluid dynamics, magnetohydrodynamics (MHD), and the dynamics of complex fluids. A central role in these discretizations is played by the geometric formulation of fluid dynamics, which views solutions to the governing equations for perfect fluid flow as geodesics on the group of volume-preserving diffeomorphisms of the fluid domain. 
Inspired by this framework, we construct a finite-dimensional approximation to the diffeomorphism group and its Lie algebra, thereby permitting a variational temporal discretization of geodesics on the spatially discretized diffeomorphism group. The extension to MHD and complex fluid flow is then made through an appeal to the theory of Euler–Poincare systems with advection, which provides a generalization of the variational formulation of ideal fluid flow to fluids with one or more advected parameters. Upon deriving a family of structured integrators for these systems, we test their performance via a numerical implementation of the update schemes on a cartesian grid. Among the hallmarks of these new numerical methods are exact preservation of momenta arising from symmetries, automatic satisfaction of solenoidal constraints on vector fields, good long-term energy behavior, robustness with respect to the spatial and temporal resolution of the discretization, and applicability to irregular meshes.", "In this paper we present a simple and rapid implementation of a fluid dynamics solver for game engines. Our tools can greatly enhance games by providing realistic fluid-like effects such as swirling smoke past a moving character. The potential applications are endless. Our algorithms are based on the physical equations of fluid flow, namely the Navier-Stokes equations. These equations are notoriously hard to solve when strict physical accuracy is of prime importance. Our solvers on the other hand are geared towards visual quality. Our emphasis is on stability and speed, which means that our simulations can be advanced with arbitrary time steps. We also demonstrate that our solvers are easy to code by providing a complete C code implementation in this paper. 
Our algorithms run in real-time for reasonable grid sizes in both two and three dimensions on standard PC hardware, as demonstrated during the presentation of this paper at the conference.", "Abstract Since the seminal work of [Sussman, M, Smereka P, Osher S. A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 1994;114:146–59] on coupling the level set method of [Osher S, Sethian J. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J Comput Phys 1988;79:12–49] to the equations for two-phase incompressible flow, there has been a great deal of interest in this area. That work demonstrated the most powerful aspects of the level set method, i.e. automatic handling of topological changes such as merging and pinching, as well as robust geometric information such as normals and curvature. Interestingly, this work also demonstrated the largest weakness of the level set method, i.e. mass or information loss characteristic of most Eulerian capturing techniques. In fact, [Sussman M, Smereka P, Osher S. A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 1994;114:146–59] introduced a partial differential equation for battling this weakness, without which their work would not have been possible. In this paper, we discuss both historical and most recent works focused on improving the computational accuracy of the level set method focusing in part on applications related to incompressible flow due to both of its popularity and stringent accuracy requirements. Thus, we discuss higher order accurate numerical methods such as Hamilton–Jacobi WENO [Jiang G-S, Peng D. Weighted ENO schemes for Hamilton–Jacobi equations. SIAM J Sci Comput 2000;21:2126–43], methods for maintaining a signed distance function, hybrid methods such as the particle level set method [Enright D, Fedkiw R, Ferziger J, Mitchell I. 
A hybrid particle level set method for improved interface capturing. J Comput Phys 2002;183:83–116] and the coupled level set volume of fluid method [Sussman M, Puckett EG. A coupled level set and volume-of-fluid method for computing 3d and axisymmetric incompressible two-phase flows. J Comput Phys 2000;162:301–37], and adaptive gridding techniques such as the octree approach to free surface flows proposed in [Losasso F, Gibou F, Fedkiw R. Simulating water and smoke with an octree data structure, ACM Trans Graph (SIGGRAPH Proc) 2004;23:457–62]." ] }
1901.09608
2913504012
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
In this paper, we utilise a reasonably accurate simulation of the phenomenon @cite_23, but exploit simplifications inherent to the problem, such as the fact that the dispersion process can be modelled on the 2-d plane along which the point measurements are taken. (We observe that our approach is invariant to some degree of small noise, i.e., to the situation of plain fields and gently rolling hills. Many realistic applications are indeed sited in such terrain, e.g., a petroleum refinery on the periphery of which one might wish to perform emissions monitoring.) Moreover, the process of dispersion is shift invariant @cite_36, so that a single large simulation can be performed online, from which the flow patterns for different source locations can be easily computed.
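The shift-invariance argument can be sketched as follows: a single large simulation is computed for a source at a reference location, and the predicted concentration for any other candidate source location is read off by translating sensor coordinates into that reference frame, with no re-simulation. The Gaussian field below is an illustrative stand-in for the output of a real fluid simulator; all names and parameters are assumptions, not the paper's implementation.

```python
import math

N = 256          # grid side length of the single large simulation
CENTRE = N // 2  # reference source location (grid centre)

def reference_field(ix, iy, spread=20.0):
    """Stand-in for the precomputed simulation: concentration at grid
    cell (ix, iy) for a source placed at the grid centre."""
    d2 = (ix - CENTRE) ** 2 + (iy - CENTRE) ** 2
    return math.exp(-d2 / (2 * spread ** 2))

def predicted_concentrations(source, sensors):
    """Concentration at each sensor for a source at `source`, obtained
    by shifting coordinates rather than running a new simulation."""
    out = []
    for sx, sy in sensors:
        # Translate the sensor into the reference simulation's frame.
        ix = round(CENTRE + (sx - source[0]))
        iy = round(CENTRE + (sy - source[1]))
        if 0 <= ix < N and 0 <= iy < N:
            out.append(reference_field(ix, iy))
        else:
            out.append(0.0)  # sensor falls outside the simulated domain
    return out

sensors = [(100.0, 100.0), (140.0, 120.0)]
conc = predicted_concentrations(source=(100.0, 100.0), sensors=sensors)
```

Because predictions for every candidate source reuse the same field, the expensive simulation cost is paid once rather than per hypothesis.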
{ "cite_N": [ "@cite_36", "@cite_23" ], "mid": [ "2172188317", "2129152507", "2052094314", "2149550213" ], "abstract": [ "In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. 
The performance of our detector is also confirmed by excellent matching resultss the image is described by a set of scale affine invariant descriptors computed on the regions associated with our points.", "Abstract This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop s on gpu hardware using single precision. The simulations use a vortex particle method to solve the Navier–Stokes equations, with a highly parallel fast multipole method ( fmm ) as numerical engine, and match the current record in mesh size for this application, a cube of 4096 3 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the fft algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the fmm -based vortex method achieving 74 parallel efficiency on 4096 processes (one gpu per mpi process, 3 gpu s per node of the tsubame -2.0 system). The fft -based spectral method is able to achieve just 14 parallel efficiency on the same number of mpi processes (using only cpu cores), due to the all-to-all communication pattern of the fft algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.", "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. 
In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure hightransitiontiltsillustration). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.", "A method was recently devised for the recovery of an invariant image from a 3-band colour image. 
The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery." ] }
1901.09608
2913504012
Sensors are routinely mounted on robots to acquire various forms of measurements in spatiotemporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and its effects on the observed distribution of gas. We develop algorithms for offline inference as well as for online path discovery via active sensing. We demonstrate the efficiency, accuracy, and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle mounted with a CO @math sensor to automatically seek out a gas cylinder emitting CO @math via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines.
Another aspect of active sensing is the method for collecting samples so as to maximise a notion of information gain. While the underlying exploration-exploitation trade-offs can be posed formally in decision-theoretic terms, most practical techniques tend to be myopic in their operation. Gaussian processes (le2009trajectory, ouyang2014multi) and the Kernel DM+V/W algorithm @cite_14 address this question. One could also formulate this as optimal design of sequential experiments @cite_13; however, this requires access to analytically defined dynamics models, which may be hard to construct for the specific scenario at hand. Notably, source term estimation was recently addressed with Bayesian estimation implemented using sequential Monte Carlo @cite_38. By using a parameterised Gaussian plume dispersion model, recursive Bayesian updates can be performed that account for uncertainty in wind, dispersion, etc. This was additionally implemented on a UAV @cite_41, performing outdoor localization of gas leaks using a predefined flight pattern and a ground station for computation. In contrast, we formulate active sensing with a fluid simulation in the loop and devise an efficient algorithm for simulation alignment.
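The simulation-alignment idea, i.e. inferring the source position by minimising the discrepancy between observed concentrations and those predicted by a forward model, can be sketched minimally as a grid search. The toy isotropic plume model and all names below are illustrative stand-ins, not the simulator or algorithm used in the paper.

```python
import math

def plume(source, sensor, strength=1.0, spread=15.0):
    """Toy isotropic forward model: concentration at `sensor`
    for a source at `source` (stand-in for a fluid simulation)."""
    d2 = (sensor[0] - source[0]) ** 2 + (sensor[1] - source[1]) ** 2
    return strength * math.exp(-d2 / (2 * spread ** 2))

def locate_source(sensors, observed, candidates):
    """Grid search: return the candidate source minimising the
    sum of squared discrepancies with the observations."""
    def discrepancy(s):
        return sum((plume(s, p) - o) ** 2
                   for p, o in zip(sensors, observed))
    return min(candidates, key=discrepancy)

# Synthetic ground truth at (30, 40); noiseless measurements at four sensors.
true_source = (30.0, 40.0)
sensors = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]
observed = [plume(true_source, p) for p in sensors]

# Candidate source locations on a coarse 10 m grid.
grid = [(x, y) for x in range(0, 51, 10) for y in range(0, 51, 10)]
estimate = locate_source(sensors, observed, grid)
```

In the active-sensing setting, the same discrepancy objective would be re-evaluated as new measurements arrive along the discovered path, rather than over a fixed batch as shown here.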
{ "cite_N": [ "@cite_41", "@cite_38", "@cite_14", "@cite_13" ], "mid": [ "1573725801", "2900643257", "2163932785", "2962707519" ], "abstract": [ "We develop a computationally efficient control policy for active perception that incorporates explicit models of sensing and mobility to build 3D maps with ground and aerial robots. Like previous work, our policy maximizes an information-theoretic objective function between the discrete occupancy belief distribution (e.g., voxel grid) and future measurements that can be made by mobile sensors. However, our work is unique in three ways. First, we show that by using Cauchy-Schwarz Quadratic Mutual Information (CSQMI), we get significant gains in efficiency. Second, while most previous methods adopt a myopic, gradient-following approach that yields poor convergence properties, our algorithm searches over a set of paths and is less susceptible to local minima. In doing so, we explicitly incorporate models of sensors, and model the dependence (and independence) of measurements over multiple time steps in a path. Third, because we consider models of sensing and mobility, our method naturally applies to both ground and aerial vehicles. The paper describes the basic models, the problem formulation and the algorithm, and demonstrates applications via simulation and experimentation.", "Gaining information about an unknown gas source is a task of great importance with applications in several areas including: responding to gas leaks or suspicious smells, quantifying sources of emissions, or in an emergency response to an industrial accident or act of terrorism. In this paper, a method to estimate the source term of a gaseous release using measurements of concentration obtained from an unmanned aerial vehicle (UAV) is described. 
The source term parameters estimated include the three dimensional location of the release, its emission rate, and other important variables needed to forecast the spread of the gas using an atmospheric transport and dispersion model. The parameters of the source are estimated by fusing concentration observations from a gas detector on-board the aircraft, with meteorological data and an appropriate model of dispersion. Two models are compared in this paper, both derived from analytical solutions to the advection diffusion equation. Bayes’ theorem, implemented using a sequential Monte Carlo algorithm, is used to estimate the source parameters in order to take into account the large uncertainties in the observations and formulated models. The system is verified with novel, outdoor, fully automated experiments, where observations from the UAV are used to estimate the parameters of a diffusive source. The estimation performance of the algorithm is assessed subject to various flight path configurations and wind speeds. Observations and lessons learned during these unique experiments are discussed and areas for future research are identified.", "Received 28 May 2002; revised 18 September 2002; accepted 24 September 2002; published 14 February 2003. [1] When an inverse problem is solved to estimate an unknown function such as the hydraulic conductivity in an aquifer or the contamination history at a site, one constraint is that the unknown function is known to be everywhere nonnegative. In this work, we develop a statistically rigorous method for enforcing function nonnegativity in Bayesian inverse problems. The proposed method behaves similarly to a Gaussian process with a linear variogram (i.e., unrestricted Brownian motion) for parameter values significantly greater than zero. The method uses the method of images to define a prior probability density function based on reflected Brownian motion that implicitly enforces nonnegativity. 
This work focuses on problems where the unknown is a function of a single variable (e.g., time). A Markov chain Monte Carlo (MCMC) method, specifically, a highly efficient Gibbs sampler, is implemented to generate conditional realizations of the unknown function. The new method is applied to the estimation of the trichloroethylene (TCE) and perchloroethylene (PCE) contamination history in an aquifer at Dover Air Force Base, Delaware, based on concentration profiles obtained from an underlying aquitard. INDEX TERMS: 1831 Hydrology: Groundwater quality; 1869 Hydrology: Stochastic processes; 3260 Mathematical Geophysics: Inverse theory; KEYWORDS: stochastic inverse modeling, contaminant source identification, inference under constraints, Markov chain Monte Carlo (MCMC), Gibbs sampling, Bayesian inference", "A key problem of robotic environmental sensing and monitoring is that of active sensing: How can a team of robots plan the most informative observation paths to minimize the uncertainty in modeling and predicting an environmental phenomenon? This paper presents two principled approaches to efficient information-theoretic path planning based on entropy and mutual information criteria for in situ active sensing of an important broad class of widely-occurring environmental phenomena called anisotropic fields. Our proposed algorithms are novel in addressing a trade-off between active sensing performance and time efficiency. An important practical consequence is that our algorithms can exploit the spatial correlation structure of Gaussian process-based anisotropic fields to improve time efficiency while preserving near-optimal active sensing performance. We analyze the time complexity of our algorithms and prove analytically that they scale better than state-of-the-art algorithms with increasing planning horizon length. 
We provide theoretical guarantees on the active sensing performance of our algorithms for a class of exploration tasks called transect sampling, which, in particular, can be improved with longer planning time and or lower spatial correlation along the transect. Empirical evaluation on real-world anisotropic field data shows that our algorithms can perform better or at least as well as the state-of-the-art algorithms while often incurring a few orders of magnitude less computational time, even when the field conditions are less favorable." ] }
1901.09839
2911535260
While the success of deep neural networks (DNNs) is well-established across a variety of domains, our ability to explain and interpret these methods is limited. Unlike previously proposed local methods which try to explain particular classification decisions, we focus on global interpretability and ask a universally applicable question: given a trained model, which features are the most important? In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable dependence into feature ranking. Our methodological contributions in this paper are two-fold. First, we propose an effect size analogue for DNNs that is appropriate for applications with highly collinear predictors (ubiquitous in computer vision). Second, we extend the recently proposed "RelATive cEntrality" (RATE) measure (, 2019) to the Bayesian deep learning setting. RATE applies an information theoretic criterion to the posterior distribution of effect sizes to assess feature significance. We apply our framework to three broad application areas: computer vision, natural language processing, and social science.
One viable approach for achieving global interpretability is to train more conventional statistical methods to mimic the predictive behavior of a DNN. This imitation model is then retrospectively used to explain the predictions that a DNN would make. For example, using a decision tree @cite_18 or falling rule list @cite_24 can yield straightforward characterizations of predictive outcomes. Unfortunately, these simple models can struggle to match the accuracy of DNNs. A random forest, on the other hand, is much more capable of matching the predictive power of neural networks. Here, measures of feature importance can be computed by permuting the values within each input variable and examining the resulting effect on test accuracy or Gini impurity @cite_23 . The ability to establish variable importance in random forests is a significant reason for their popularity in fields such as the life sciences @cite_28 --- thus, providing motivation for developing analogous approaches for DNNs.
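The permutation-based importance measure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the synthetic dataset and all hyperparameters are assumptions chosen for the example.

```python
# Sketch of permutation feature importance for a random forest:
# permute one input variable at a time and measure the drop in
# test accuracy. Dataset and hyperparameters are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
baseline = rf.score(X_te, y_te)

importances = []
rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    # Shuffling column j breaks its association with the labels
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - rf.score(X_perm, y_te))
```

Features whose permutation causes the largest accuracy drop are ranked as most important; permuting a pure-noise feature should leave the score nearly unchanged.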
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_28", "@cite_23" ], "mid": [ "2907176385", "2745742138", "2808523546", "2589209256" ], "abstract": [ "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other multiplicative noise based variational distributions has not been investigated in depth. We evaluated Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We tested the calibration of the probabilistic predictions of Bayesian convolutional neural networks (CNNs) on MNIST and CIFAR-10. Sampling at prediction time increased the calibration of the DNNs' probabalistic predictions. Sampling weights, whether Gaussian or Bernoulli, led to more robust representation of uncertainty compared to sampling of units. However, using either Gaussian or Bernoulli dropout led to increased test set classification accuracy. Based on these findings we used both Bernoulli dropout and Gaussian dropconnect concurrently, which we show approximates the use of a spike-and-slab variational distribution without increasing the number of learned parameters. We found that spike-and-slab sampling had higher test set performance than Gaussian dropconnect and more robustly represented its uncertainty compared to Bernoulli dropout.", "Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). 
An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions: 1) We propose a metric to evaluate the consistency level of neurons in a network quantitatively. 2) We find that the learned features of neurons are ambiguous by leveraging adversarial examples. 3) We propose to improve the consistency of neurons on adversarial example subset by an adversarial training algorithm with a consistent loss.", "This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs with each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, the label refinement using forced alignment and the sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing quite confusing classes due to the large vocabulary of Chinese characters, NN-based classifier should output 19900 HMM states as the classification units via a high-resolution modeling within each character. 
On the ICDAR 2013 competition task of CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24 by making a good trade-off between the computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM can achieve a best published CER of 3.53 .", "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. 
We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures." ] }
1901.09590
2914592219
Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER.
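The Tucker-decomposition scoring function underlying TuckER can be written as the core tensor contracted with the subject, relation, and object embeddings along its three modes. A minimal sketch, with all dimensions and random initializations being illustrative assumptions rather than the paper's settings:

```python
# Sketch of a TuckER-style score: phi(s, r, o) = W x_1 e_s x_2 w_r x_3 e_o,
# where W is a shared core tensor. Sizes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r = 5, 3           # entity / relation embedding dimensions
n_ent, n_rel = 7, 4       # number of entities / relations

E = rng.standard_normal((n_ent, d_e))      # entity embeddings
R = rng.standard_normal((n_rel, d_r))      # relation embeddings
W = rng.standard_normal((d_e, d_r, d_e))   # shared core tensor

def score(s, r, o):
    # Contract W with e_s (mode 1), w_r (mode 2), e_o (mode 3)
    return float(np.einsum('i,irj,r,j->', E[s], W, R[r], E[o]))
```

Because the core tensor W is shared across all relations, each relation only adds a d_r-dimensional vector of parameters, which is the source of the parameter efficiency contrasted with RESCAL below.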
RESCAL An early linear model, RESCAL @cite_18 , optimizes a scoring function containing a bilinear product between vector embeddings for each subject and object entity and a full rank matrix for each relation. Although a very expressive and powerful model, RESCAL is prone to overfitting due to its large number of parameters: each relation requires a full matrix, so the parameter count grows quadratically with the embedding dimension and linearly with the number of relations in a knowledge graph.
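The bilinear scoring function above can be sketched in a few lines. This is a hedged illustration, assuming small toy dimensions and random embeddings, not RESCAL's training procedure:

```python
# Sketch of RESCAL's bilinear score: phi(s, r, o) = e_s^T M_r e_o,
# with one full d x d matrix M_r per relation. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, n_ent, n_rel = 4, 6, 3
E = rng.standard_normal((n_ent, d))      # entity embeddings
M = rng.standard_normal((n_rel, d, d))   # one full matrix per relation

def rescal_score(s, r, o):
    return float(E[s] @ M[r] @ E[o])

# Relation parameters total n_rel * d**2 -- quadratic in the embedding
# dimension and linear in the number of relations, hence the tendency
# to overfit as d grows.
```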
{ "cite_N": [ "@cite_18" ], "mid": [ "2951077644", "2101043704", "2024051019", "1969415786" ], "abstract": [ "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "We design a new distribution over poly(r e-1) x n matrices S so that for any fixed n x d matrix A of rank r, with probability at least 9 10, SAx2 = (1 pm e)Ax2 simultaneously for all x ∈ Rd. Such a matrix S is called a subspace embedding. Furthermore, SA can be computed in O(nnz(A)) + O(r2e-2) time, where nnz(A) is the number of non-zero entries of A. This improves over all previous subspace embeddings, which required at least Ω(nd log d) time to achieve this property. We call our matrices S sparse embedding matrices. 
Using our sparse embedding matrices, we obtain the fastest known algorithms for overconstrained least-squares regression, low-rank approximation, approximating all leverage scores, and lp-regression: to output an x' for which Ax'-b2 ≤ (1+e)minx Ax-b2 for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A)) + O(d3e-2) time, and another in O(nnz(A)log(1 e)) + O(d3log(1 e)) time. (Here O(f) = f ⋅ logO(1)(f).) to obtain a decomposition of an n x n matrix A into a product of an n x k matrix L, a k x k diagonal matrix D, and a n x k matrix W, for which F A - L D W ≤ (1+e)F A-Ak , where Ak is the best rank-k approximation, our algorithm runs in O(nnz(A)) + O(nk2 e-4log n + k3e-5log2n) time. to output an approximation to all leverage scores of an n x d input matrix A simultaneously, with constant relative error, our algorithms run in O(nnz(A) log n) + O(r3) time. to output an x' for which Ax'-bp ≤ (1+e)minx Ax-bp for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A) log n) + poly(r e-1) time, for any constant 1 ≤ p", "In this paper, we propose a rank minimization method to fuse the predicted confidence scores of multiple models, each of which is obtained based on a certain kind of feature. Specifically, we convert each confidence score vector obtained from one model into a pairwise relationship matrix, in which each entry characterizes the comparative relationship of scores of two test samples. Our hypothesis is that the relative score relations are consistent among component models up to certain sparse deviations, despite the large variations that may exist in the absolute values of the raw scores. Then we formulate the score fusion problem as seeking a shared rank-2 pairwise relationship matrix based on which each original score matrix from individual model can be decomposed into the common rank-2 matrix and sparse deviation errors. 
A robust score vector is then extracted to fit the recovered low rank score relation matrix. We formulate the problem as a nuclear norm and l 1 norm optimization objective function and employ the Augmented Lagrange Multiplier (ALM) method for the optimization. Our method is isotonic (i.e., scale invariant) to the numeric scales of the scores originated from different models. We experimentally show that the proposed method achieves significant performance gains on various tasks including object categorization and video event detection.", "Variable selection in the linear regression model takes many apparent faces from both frequentist and Bayesian standpoints. In this paper we introduce a variable selection method referred to as a rescaled spike and slab model. We study the importance of prior hierarchical specifications and draw connections to frequentist generalized ridge regression estimation. Specifically, we study the usefulness of continuous bimodal priors to model hypervariance parameters, and the effect scaling has on the posterior mean through its relationship to penalization. Several model selection strategies, some frequentist and some Bayesian in nature, are developed and studied theoretically. We demonstrate the importance of selective shrinkage for effective variable selection in terms of risk misclassification, and show this is achieved using the posterior from a rescaled spike and slab model. We also show how to verify a procedure's ability to reduce model uncertainty in finite samples using a specialized forward selection strategy. Using this tool, we illustrate the effectiveness of rescaled spike and slab models in reducing model uncertainty." ] }