Dataset schema (field : type : observed length range):
  arxiv_id  : string   : 7 to 11 characters
  title     : string   : 7 to 243 characters
  abstract  : string   : 3 to 2.79k characters
  link      : string   : 21 to 49 characters
  authors   : sequence : 1 to 451 entries
  updated   : string   : 20 characters
  published : string   : 20 characters

Each record below lists these seven fields in order, one per line.
2006.05712
Listen to What You Want: Neural Network-based Universal Sound Selector
Being able to control the acoustic events (AEs) to which we want to listen would allow the development of more controllable hearable devices. This paper addresses the AE sound selection (or removal) problem, which we define as the extraction (or suppression) of all sounds belonging to one or multiple desired AE classes. Although this problem could be addressed by source separation followed by AE classification, that two-stage approach is sub-optimal. Moreover, source separation usually requires knowing the maximum number of sources, which may not be practical when dealing with AEs. In this paper, we instead propose a universal sound selection neural network that directly selects AE sounds from a mixture given user-specified target AE classes. The proposed framework can be explicitly optimized to simultaneously select sounds from multiple desired AE classes, independently of the number of sources in the mixture. We show experimentally that the proposed method achieves promising AE sound selection performance and generalizes to mixtures with numbers of sources unseen during training.
http://arxiv.org/pdf/2006.05712v1
[ "Tsubasa Ochiai", "Marc Delcroix", "Yuma Koizumi", "Hiroaki Ito", "Keisuke Kinoshita", "Shoko Araki" ]
2020-06-10T08:06:02Z
2020-06-10T08:06:02Z
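A minimal sketch of the conditioning idea described in the abstract above (2006.05712): a mask estimator that receives a mixture spectrogram together with a multi-hot vector of desired AE classes. The layer types and sizes are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a time-frequency mask conditioned on target AE
# classes. GRU width, class count, and sigmoid masking are assumptions.
import torch
import torch.nn as nn

class SoundSelector(nn.Module):
    def __init__(self, n_freq=257, n_classes=10, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_freq + n_classes, hidden, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, spec, classes):
        # spec: (batch, time, freq); classes: (batch, n_classes) multi-hot
        cond = classes.unsqueeze(1).expand(-1, spec.size(1), -1)
        h, _ = self.rnn(torch.cat([spec, cond], dim=-1))
        return self.mask(h) * spec  # spectrogram of the selected AE sounds
```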
2003.00203
Contextual Policy Transfer in Reinforcement Learning Domains via Deep Mixtures-of-Experts
In reinforcement learning, agents that consider the context, or current state, when selecting source policies for transfer have been shown to outperform context-free approaches. However, none of the existing approaches transfer knowledge contextually from model-based learners to a model-free learner. This could be useful, for instance, when source policies are intentionally learned on diverse simulations with plentiful data but transferred to a real-world setting with limited data. In this paper, we assume that estimated source task dynamics and policies are known, and that source and target tasks share common sub-goals but differ in dynamics. We introduce a novel deep mixture-of-experts formulation for learning state-dependent beliefs over which source task dynamics match the target dynamics, using state trajectories collected from the target task. The mixture model is easy to interpret, demonstrates robustness to estimation errors in the dynamics, and is compatible with most learning algorithms. We then show how this model can be incorporated into standard policy reuse frameworks, and demonstrate its effectiveness on benchmarks from OpenAI-Gym.
http://arxiv.org/pdf/2003.00203v2
[ "Michael Gimelfarb", "Scott Sanner", "Chi-Guhn Lee" ]
2020-06-10T08:11:44Z
2020-02-29T07:58:36Z
2006.05720
Extrapolation for Large-batch Training in Deep Learning
Deep learning networks are typically trained by Stochastic Gradient Descent (SGD) methods that iteratively improve the model parameters by estimating a gradient on a very small fraction of the training data. A major roadblock when increasing the batch size to a substantial fraction of the training data to improve training time is the persistent degradation in performance (the generalization gap). To address this issue, recent work proposes adding small perturbations to the model parameters when computing the stochastic gradients, and reports improved generalization performance due to smoothing effects. However, this approach is poorly understood; it often requires model-specific noise and fine-tuning. To alleviate these drawbacks, we propose instead to use computationally efficient extrapolation (extragradient) to stabilize the optimization trajectory while still benefiting from smoothing to avoid sharp minima. This principled approach is well grounded from an optimization perspective, and we show that a host of variations can be covered in a unified framework that we propose. We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer. We demonstrate that in a variety of experiments the scheme allows scaling to much larger batch sizes than before while reaching or surpassing SOTA accuracy.
http://arxiv.org/pdf/2006.05720v1
[ "Tao Lin", "Lingjing Kong", "Sebastian U. Stich", "Martin Jaggi" ]
2020-06-10T08:22:41Z
2020-06-10T08:22:41Z
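A hedged sketch of the extrapolation (extragradient) step described in the abstract above (2006.05720): the gradient is evaluated at a lookahead point and then applied at the current iterate. The step sizes `lr` and `gamma` are illustrative, not the paper's settings.

```python
import numpy as np

def extragradient_step(w, grad_fn, lr=0.1, gamma=0.1):
    """One extrapolation step: look ahead, then update from the original point."""
    w_look = w - gamma * grad_fn(w)   # extrapolated (lookahead) iterate
    return w - lr * grad_fn(w_look)   # update using the lookahead gradient

# Toy check on f(w) = 0.5 * ||w||^2, whose gradient is w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = extragradient_step(w, lambda v: v)
print(w)  # close to the minimum at the origin
```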
2006.05722
Interferometric Graph Transform: a Deep Unsupervised Graph Representation
We propose the Interferometric Graph Transform (IGT), a new class of deep unsupervised graph convolutional neural networks for building graph representations. Our first contribution is a generic, complex-valued spectral graph architecture obtained from a generalization of the Euclidean Fourier transform. We show that our learned representation consists of both discriminative and invariant features, thanks to a novel greedy concave objective. From our experiments, we conclude that our learning procedure exploits the topology of the spectral domain, which is normally a weakness of spectral methods, and in particular our method can recover an analytic operator for vision tasks. We test our algorithm on various challenging tasks such as image classification (MNIST, CIFAR-10), community detection (Authorship, Facebook graph) and action recognition from 3D skeleton videos (SBU, NTU), exhibiting a new state of the art in unsupervised spectral graph settings.
http://arxiv.org/pdf/2006.05722v1
[ "Edouard Oyallon" ]
2020-06-10T08:27:53Z
2020-06-10T08:27:53Z
2006.05725
Bayesian Experience Reuse for Learning from Multiple Demonstrators
Learning from demonstrations (LfD) improves the exploration efficiency of a learning agent by incorporating demonstrations from experts. However, demonstration data can often come from multiple experts with conflicting goals, making it difficult to incorporate safely and effectively in online settings. We address this problem in the static and dynamic optimization settings by modelling the uncertainty in source and target task functions using normal-inverse-gamma priors, whose corresponding posteriors are, respectively, learned from demonstrations and target data using Bayesian neural networks with shared features. We use this learned belief to derive a quadratic programming problem whose solution yields a probability distribution over the expert models. Finally, we propose Bayesian Experience Reuse (BERS) to sample demonstrations in accordance with this distribution and reuse them directly in new tasks. We demonstrate the effectiveness of this approach for static optimization of smooth functions, and transfer learning in a high-dimensional supply chain problem with cost uncertainty.
http://arxiv.org/pdf/2006.05725v1
[ "Michael Gimelfarb", "Scott Sanner", "Chi-Guhn Lee" ]
2020-06-10T08:32:39Z
2020-06-10T08:32:39Z
2006.04666
Misinformation Has High Perplexity
Debunking misinformation is an important and time-critical task as there could be adverse consequences when misinformation is not quashed promptly. However, the usual supervised approach to debunking via misinformation classification requires human-annotated data and is not suited to the fast time-frame of newly emerging events such as the COVID-19 outbreak. In this paper, we postulate that misinformation itself has higher perplexity compared to truthful statements, and propose to leverage the perplexity to debunk false claims in an unsupervised manner. First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims. Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time. We construct two new COVID-19-related test sets, one is scientific, and another is political in content, and empirically verify that our system performs favorably compared to existing systems. We are releasing these datasets publicly to encourage more research in debunking misinformation on COVID-19 and other topics.
http://arxiv.org/pdf/2006.04666v2
[ "Nayeon Lee", "Yejin Bang", "Andrea Madotto", "Pascale Fung" ]
2020-06-10T08:49:30Z
2020-06-08T15:13:44Z
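A minimal sketch of the perplexity-scoring step described in the abstract above (2006.04666), assuming the Hugging Face `transformers` package is available; the paper's evidence-retrieval and priming stages are omitted here.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """Perplexity of `text` under GPT-2 (lower suggests more 'expected' text)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

# The paper's hypothesis: false claims tend to score higher than truthful ones.
print(perplexity("The claim to be checked goes here."))
```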
2006.05752
Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization
Distributed optimization is vital for solving large-scale machine learning problems. A widely shared feature of distributed optimization techniques is the requirement that all nodes complete their assigned tasks in each computational epoch before the system can proceed to the next epoch. In such settings, slow nodes, called stragglers, can greatly slow progress. To mitigate the impact of stragglers, we propose an online distributed optimization method called Anytime Minibatch. In this approach, all nodes are given a fixed time to compute the gradients of as many data samples as possible. The result is a variable per-node minibatch size. Workers then get a fixed communication time to average their minibatch gradients via several rounds of consensus, which are then used to update primal variables via dual averaging. Anytime Minibatch prevents stragglers from holding up the system without wasting the work that stragglers can complete. We present a convergence analysis and analyze wall-time performance. Our numerical results show that our approach is up to 1.5 times faster on Amazon EC2 and up to five times faster when there is greater variability in compute node performance.
http://arxiv.org/pdf/2006.05752v1
[ "Nuwan Ferdinand", "Haider Al-Lawati", "Stark C. Draper", "Matthew Nokleby" ]
2020-06-10T09:53:02Z
2020-06-10T09:53:02Z
2006.09977
A novel sentence embedding based topic detection method for micro-blog
Topic detection is a challenging task, especially without knowing the exact number of topics. In this paper, we present a novel neural-network-based approach to detect topics in micro-blogging datasets. We use an unsupervised neural sentence embedding model to map the blogs to an embedding space. Our model is a weighted power mean word embedding model, with the weights calculated by an attention mechanism. Experimental results show that our embedding method performs better than baselines in sentence clustering. In addition, we propose an improved clustering algorithm referred to as relationship-aware DBSCAN (RADBSCAN). It can discover topics from a micro-blogging dataset, with the number of topics determined by the characteristics of the dataset itself. Moreover, to address parameter sensitivity, we use blog forwarding relationships as a bridge between two independent clusters. Finally, we validate our approach on a dataset from Sina micro-blog. The results show that we can detect all the topics successfully and extract keywords from each topic.
http://arxiv.org/pdf/2006.09977v1
[ "Cong Wan", "Shan Jiang", "Cuirong Wang", "Cong Wang", "Changming Xu", "Xianxia Chen", "Ying Yuan" ]
2020-06-10T09:58:57Z
2020-06-10T09:58:57Z
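RADBSCAN itself is not reproduced here; the following is a sketch of the standard DBSCAN step it extends, clustering sentence embeddings with scikit-learn. The embedding dimension and DBSCAN parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
embeddings = rng.random((200, 64))   # stand-in for learned sentence embeddings

# eps and min_samples are the sensitive parameters that the paper's blog
# forwarding relationships are meant to compensate for.
labels = DBSCAN(eps=0.5, min_samples=5, metric="cosine").fit_predict(embeddings)
print(sorted(set(labels)))           # -1 marks noise points
```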
2006.05757
Data science on industrial data -- Today's challenges in brown field applications
Much research is done on data analytics and machine learning. In industrial processes, large amounts of data are available, and many researchers are trying to work with this data. In practice, one finds many pitfalls restraining the application of modern technologies, especially in brown-field applications. With this paper we want to show the state of the art and what to expect when working with stock machines in the field. A major focus of this paper is data collection, which can be more cumbersome than most people might expect. Data quality for machine learning applications is also a challenge once one leaves the laboratory. Here one has to expect a lack of semantic description of the data as well as very little ground truth being available for training and verification of machine learning models. A final challenge is IT security and passing data through firewalls.
http://arxiv.org/abs/2006.05757v1
[ "Tilman Klaeger", "Sebastian Gottschall", "Lukas Oehm" ]
2020-06-10T10:05:16Z
2020-06-10T10:05:16Z
2002.08326
Balancing Efficiency and Flexibility for DNN Acceleration via Temporal GPU-Systolic Array Integration
Research interest in specialized hardware accelerators for deep neural networks (DNNs) has spiked recently owing to their superior performance and efficiency. However, today's DNN accelerators primarily focus on accelerating specific "kernels" such as convolution and matrix multiplication, which are vital but only part of an end-to-end DNN-enabled application. Meaningful speedups over the entire application often require supporting computations that are, while massively parallel, ill-suited to DNN accelerators. Integrating a general-purpose processor such as a CPU or a GPU incurs significant data movement overhead and leads to resource under-utilization on the DNN accelerators. We propose Simultaneous Multi-mode Architecture (SMA), a novel architecture design and execution model that offers general-purpose programmability on DNN accelerators in order to accelerate end-to-end applications. The key to SMA is the temporal integration of the systolic execution model with the GPU-like SIMD execution model. SMA exploits the common components shared between the systolic-array accelerator and the GPU, and provides lightweight reconfiguration capability to switch between the two modes in situ. SMA achieves up to 63% performance improvement while consuming 23% less energy than the baseline Volta architecture with TensorCore.
http://arxiv.org/pdf/2002.08326v2
[ "Cong Guo", "Yangjie Zhou", "Jingwen Leng", "Yuhao Zhu", "Zidong Du", "Quan Chen", "Chao Li", "Bin Yao", "Minyi Guo" ]
2020-06-10T10:27:55Z
2020-02-18T17:44:20Z
1908.09345
Almost Tune-Free Variance Reduction
The variance reduction class of algorithms, including the representative SVRG and SARAH, has well-documented merits for empirical risk minimization problems. However, these methods require grid search to tune parameters (step size and the number of iterations per inner loop) for optimal performance. This work introduces `almost tune-free' SVRG and SARAH schemes equipped with: i) Barzilai-Borwein (BB) step sizes; ii) averaging; and iii) inner loop lengths adjusted to the BB step sizes. In particular, SVRG, SARAH, and their BB variants are first reexamined through an `estimate sequence' lens to enable new averaging methods that tighten their convergence rates theoretically and improve their performance empirically when the step size or the inner loop length is chosen large. Then a simple yet effective means of adjusting the number of iterations per inner loop is developed to enhance the merits of the proposed averaging schemes and BB step sizes. Numerical tests corroborate the proposed methods.
http://arxiv.org/pdf/1908.09345v2
[ "Bingcong Li", "Lingda Wang", "Georgios B. Giannakis" ]
2020-06-10T12:14:42Z
2019-08-25T15:24:04Z
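A brief sketch of the Barzilai-Borwein (BB) step size at the heart of the abstract above (1908.09345): the step size is computed from successive iterates and gradients instead of being grid-searched. This is the textbook BB1 rule, not the paper's full SVRG/SARAH integration.

```python
import numpy as np

def bb_step_size(w_prev, w_curr, g_prev, g_curr):
    """BB1 step size from iterate and gradient differences (assumes s @ y > 0)."""
    s = w_curr - w_prev   # iterate difference
    y = g_curr - g_prev   # gradient difference
    return float(s @ s) / float(s @ y)
```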
1903.11508
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems
Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios. We consider this as a new type of adversarial attack in NLP, a setting to which humans are very robust, as our experiments with both simple and more difficult visual input perturbations demonstrate. We then investigate the impact of visual adversarial attacks on current NLP systems on character-, word-, and sentence-level tasks, showing that both neural and non-neural models are, in contrast to humans, extremely sensitive to such attacks, suffering performance decreases of up to 82%. We then explore three shielding methods---visual character embeddings, adversarial training, and rule-based recovery---which substantially improve the robustness of the models. However, the shielding methods still fall behind performances achieved in non-attack scenarios, which demonstrates the difficulty of dealing with visual attacks.
http://arxiv.org/pdf/1903.11508v2
[ "Steffen Eger", "Gözde Gül Şahin", "Andreas Rücklé", "Ji-Ung Lee", "Claudia Schulz", "Mohsen Mesgar", "Krishnkant Swarnkar", "Edwin Simpson", "Iryna Gurevych" ]
2020-06-10T12:20:04Z
2019-03-27T16:01:18Z
1911.03127
AI Aided Noise Processing of Spintronic Based IoT Sensor for Magnetocardiography Application
As we are about to embark upon the highly hyped "Society 5.0", powered by the Internet of Things (IoT), traditional ways to monitor human heart signals for tracking cardio-vascular conditions are challenging, particularly in remote healthcare settings. There are no suitable IoT solutions that combine low power consumption, portability, and non-intrusiveness while providing information comparable to conventional Electrocardiography (ECG). In this paper, we propose an IoT device utilizing a spintronic ultra-sensitive sensor that measures the magnetic fields produced by cardio-vascular electrical activity, i.e., Magnetocardiography (MCG). We then treat the low-frequency noise generated by the sensors, which is also a challenge for most other sensors dealing with low-frequency bio-magnetic signals. Instead of relying on generic signal processing techniques such as averaging or filtering, we employ deep learning trained on bio-magnetic signals. Using an existing dataset of ECG records, MCG labels are synthetically constructed. A unique deep learning architecture combining a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU) is trained on the labeled data via a striding window, and is able to capture and eliminate the noise features. Simulation results are reported to evaluate the effectiveness of the proposed method, which demonstrates encouraging performance.
http://arxiv.org/pdf/1911.03127v2
[ "Attayeb Mohsen", "Muftah Al-Mahdawi", "Mostafa M. Fouda", "Mikihiko Oogane", "Yasuo Ando", "Zubair Md Fadlullah" ]
2020-06-10T14:26:03Z
2019-11-08T08:45:54Z
2006.05866
Heterogeneous Graph Attention Networks for Early Detection of Rumors on Twitter
With the rapid development of mobile Internet technology and the widespread use of mobile devices, it has become much easier for people to express their opinions on social media. The openness and convenience of social media platforms provide free expression for people but also cause new social problems. The wide spread of false rumors on social media can cause public panic and damage personal reputations, which makes automatic rumor detection particularly necessary. The majority of existing methods for rumor detection focus on mining effective features from text contents, user profiles, and patterns of propagation. Nevertheless, these methods do not take full advantage of the global semantic relations of the text contents, which characterize the semantic commonality of rumors and are a key factor for detecting rumors. In this paper, we construct a tweet-word-user heterogeneous graph based on the text contents and the source tweet propagations of rumors. A meta-path based heterogeneous graph attention network framework is proposed to capture the global semantic relations of text contents, together with the global structure information of source tweet propagations, for rumor detection. Experiments on real-world Twitter data demonstrate the superiority of the proposed approach, which also shows a comparable ability to detect rumors at a very early stage.
http://arxiv.org/pdf/2006.05866v1
[ "Qi Huang", "Junshuai Yu", "Jia Wu", "Bin Wang" ]
2020-06-10T14:49:08Z
2020-06-10T14:49:08Z
2002.03469
Projected Stein Variational Gradient Descent
The curse of dimensionality is a longstanding challenge in Bayesian inference in high dimensions. In this work, we propose a projected Stein variational gradient descent (pSVGD) method to overcome this challenge by exploiting the fundamental property of intrinsic low dimensionality of the data-informed subspace stemming from the ill-posedness of such problems. We adaptively construct the subspace using a gradient information matrix of the log-likelihood, and apply pSVGD to the much lower-dimensional coefficients of the parameter projection. The method is demonstrated to be more accurate and efficient than SVGD. It is also shown to be more scalable with respect to the number of parameters, samples, data points, and processor cores via experiments with parameter dimensions ranging from the hundreds to the tens of thousands.
http://arxiv.org/pdf/2002.03469v2
[ "Peng Chen", "Omar Ghattas" ]
2020-06-10T15:00:24Z
2020-02-09T23:17:30Z
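A compact sketch of a vanilla SVGD update, the method the abstract above (2002.03469) builds on; pSVGD additionally projects onto a data-informed low-dimensional subspace, which is omitted here. The RBF bandwidth `h` and step size are placeholders.

```python
import numpy as np

def svgd_step(X, grad_logp, lr=0.1, h=1.0):
    """One SVGD update for particles X of shape (n, d) toward the target density."""
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))  # RBF kernel matrix
    gradK = -diff / h ** 2 * K[:, :, None]           # grad of K in its first argument
    phi = (K @ grad_logp(X) + gradK.sum(0)) / X.shape[0]
    return X + lr * phi

# Example: particles drift toward a standard normal target (grad log p(x) = -x)
X = np.random.randn(50, 2) * 3
for _ in range(200):
    X = svgd_step(X, lambda X: -X)
```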
2006.05879
Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support. We prove an upper bound on the number of calls to the generative model needed for MDP-GapE to identify a near-optimal action with high probability. This problem-dependent sample complexity result is expressed in terms of the sub-optimality gaps of the state-action pairs that are visited during exploration. Our experiments reveal that MDP-GapE is also effective in practice, in contrast with other algorithms with sample complexity guarantees in the fixed-confidence setting, which are mostly theoretical.
http://arxiv.org/pdf/2006.05879v1
[ "Anders Jonsson", "Emilie Kaufmann", "Pierre Ménard", "Omar Darwiche Domingues", "Edouard Leurent", "Michal Valko" ]
2020-06-10T15:05:51Z
2020-06-10T15:05:51Z
2006.05884
AdaSense: Adaptive Low-Power Sensing and Activity Recognition for Wearable Devices
Wearable devices have strict power and memory limitations. As a result, there is a need to optimize power consumption on those devices without sacrificing accuracy. This paper presents AdaSense: a sensing, feature extraction and classification co-optimized framework for Human Activity Recognition. The proposed techniques reduce power consumption by dynamically switching among different sensor configurations as a function of the user's activity. The framework selects configurations that represent the Pareto frontier of the accuracy-energy trade-off. AdaSense also uses low-overhead processing and classification methodologies. The introduced approach achieves a 69% reduction in sensor power consumption with less than a 1.5% decrease in activity recognition accuracy.
http://arxiv.org/pdf/2006.05884v1
[ "Marina Neseem", "Jon Nelson", "Sherief Reda" ]
2020-06-10T15:17:11Z
2020-06-10T15:17:11Z
2006.01731
Data-Driven Methods to Monitor, Model, Forecast and Control Covid-19 Pandemic: Leveraging Data Science, Epidemiology and Control Theory
This document analyzes the role of data-driven methodologies in the Covid-19 pandemic. We provide a SWOT analysis and a roadmap that goes from access to data sources to the final decision-making step. We aim to review the available methodologies while anticipating the difficulties and challenges in the development of data-driven strategies to combat the Covid-19 pandemic. A 3M-analysis is presented: Monitoring, Modelling and Making decisions. The focus is on the potential of well-known data-driven schemes to address different challenges raised by the pandemic: (i) monitoring and forecasting the spread of the epidemic; (ii) assessing the effectiveness of government decisions; (iii) making timely decisions. Each step of the roadmap is detailed through a review of consolidated theoretical results and their potential application in the Covid-19 context. When possible, we provide examples of their application to past or present epidemics. We do not provide an exhaustive enumeration of methodologies, algorithms and applications. We do try to serve as a bridge between the different disciplines required to provide a holistic approach to the epidemic: data science, epidemiology, control theory, etc. That is, we highlight effective data-driven methodologies that have been shown to be successful in other contexts and that have potential application in the different steps of the proposed roadmap. To make this document more functional and adapted to the specifics of each discipline, we encourage researchers and practitioners to provide feedback. We will update this document regularly.
http://arxiv.org/pdf/2006.01731v2
[ "Teodoro Alamo", "D. G. Reina", "Pablo Millán" ]
2020-06-10T15:25:14Z
2020-06-01T12:56:43Z
1910.07772
Teaching Vehicles to Anticipate: A Systematic Study on Probabilistic Behavior Prediction Using Large Data Sets
By observing their environment as well as other traffic participants, humans are able to drive road vehicles safely. Vehicle passengers, however, perceive a notable difference between non-experienced and experienced drivers. In particular, they may get the impression that the latter anticipate what will happen in the next few moments and account for these foresights in their driving behavior. To make the driving style of automated vehicles comparable to that of human drivers with respect to comfort and perceived safety, the aforementioned anticipation skills need to become a built-in feature of self-driving vehicles. This article provides a systematic comparison of methods and strategies to generate this anticipation for self-driving cars using machine learning techniques. To implement and test these algorithms we use a large data set collected over more than 30000 km of highway driving, containing approximately 40000 real-world driving situations. We further show that it is possible to classify driving maneuvers upcoming within the next 5 s with an Area Under the ROC Curve (AUC) above 0.92 for all defined maneuver classes. This enables us to predict the lateral position with a prediction horizon of 5 s with a median lateral error of less than 0.21 m.
http://arxiv.org/abs/1910.07772v4
[ "Florian Wirthmüller", "Julian Schlechtriemen", "Jochen Hipp", "Manfred Reichert" ]
2020-06-10T15:43:01Z
2019-10-17T08:42:40Z
1905.10428
LdSM: Logarithm-depth Streaming Multi-label Decision Trees
We consider multi-label classification where the goal is to annotate each data point with the most relevant $\textit{subset}$ of labels from an extremely large label set. Efficient annotation can be achieved with balanced tree predictors, i.e. trees with depth logarithmic in the label complexity, whose leaves correspond to labels. Designing a prediction mechanism with such trees for real data applications is non-trivial, as it needs to accommodate sending examples to multiple leaves while at the same time sustaining high prediction accuracy. In this paper we develop the LdSM algorithm for the construction and training of multi-label decision trees, where in every node of the tree we optimize a novel objective function that favors balanced splits, maintains high class purity of child nodes, and allows sending examples in multiple directions, with a penalty that prevents tree over-growth. Each node of the tree is trained once the previous node is completed, leading to a streaming approach to training. We analyze the proposed objective theoretically and show that minimizing it leads to pure and balanced data splits. Furthermore, we show a boosting theorem that captures its connection to the multi-label classification error. Experimental results on benchmark data sets demonstrate that our approach achieves high prediction accuracy and low prediction time, and position LdSM as a competitive tool among existing state-of-the-art approaches.
http://arxiv.org/pdf/1905.10428v5
[ "Maryam Majzoubi", "Anna Choromanska" ]
2020-06-10T16:06:30Z
2019-05-24T20:01:27Z
2006.05923
Cross-Sensor Adversarial Domain Adaptation of Landsat-8 and Proba-V images for Cloud Detection
The number of Earth observation satellites carrying optical sensors with similar characteristics is constantly growing. Despite their similarities and the potential synergies among them, derived satellite products are often developed for each sensor independently. Differences in retrieved radiances lead to significant drops in accuracy, which hampers knowledge and information sharing across sensors. This is particularly harmful for machine learning algorithms, since gathering new ground truth data to train models for each sensor is costly and requires experienced manpower. In this work, we propose a domain adaptation transformation to reduce the statistical differences between images of two satellite sensors in order to boost the performance of transfer learning models. The proposed methodology is based on the Cycle Consistent Generative Adversarial Domain Adaptation (CyCADA) framework that trains the transformation model in an unpaired manner. In particular, the Landsat-8 and Proba-V satellites, which present different but compatible spatio-spectral characteristics, are used to illustrate the method. The obtained transformation significantly reduces differences between the image datasets while preserving the spatial and spectral information of adapted images, and is hence useful for any general-purpose cross-sensor application. In addition, the training of the proposed adversarial domain adaptation model can be modified to improve the performance in a specific remote sensing application, such as cloud detection, by including a dedicated term in the cost function. Results show that, when the proposed transformation is applied, cloud detection models trained on Landsat-8 data increase cloud detection accuracy on Proba-V.
http://arxiv.org/pdf/2006.05923v1
[ "Gonzalo Mateo-García", "Valero Laparra", "Dan López-Puigdollers", "Luis Gómez-Chova" ]
2020-06-10T16:16:01Z
2020-06-10T16:16:01Z
2002.08196
Federated Learning in the Sky: Joint Power Allocation and Scheduling with UAV Swarms
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks ranging from coordinated trajectory planning to cooperative target recognition. However, due to the lack of continuous connections between the UAV swarm and ground base stations (BSs), using centralized ML will be challenging, particularly when dealing with a large volume of data. In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs. Each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network. To identify how wireless factors, like fading, transmission delay, and UAV antenna angle deviations resulting from wind and mechanical vibrations, impact the performance of FL, a rigorous convergence analysis for FL is performed. Then, a joint power allocation and scheduling design is proposed to optimize the convergence rate of FL while taking into account the energy consumption during convergence and the delay requirement imposed by the swarm's control system. Simulation results validate the effectiveness of the FL convergence analysis and show that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.
http://arxiv.org/pdf/2002.08196v2
[ "Tengchan Zeng", "Omid Semiari", "Mohammad Mozaffari", "Mingzhe Chen", "Walid Saad", "Mehdi Bennis" ]
2020-06-10T16:19:18Z
2020-02-19T14:04:01Z
2006.05935
Learning to Play Table Tennis From Scratch using Muscular Robots
Dynamic tasks like table tennis are relatively easy to learn for humans but pose significant challenges to robots. Such tasks require accurate control of fast movements and precise timing in the presence of imprecise state estimation of the flying ball and the robot. Reinforcement Learning (RL) has shown promise in learning complex control tasks from data. However, applying step-based RL to dynamic tasks on real systems is safety-critical, as RL requires exploring and failing safely for millions of time steps in high-speed regimes. In this paper, we demonstrate that safe learning of table tennis using model-free Reinforcement Learning can be achieved by using robot arms driven by pneumatic artificial muscles (PAMs). The softness and back-drivability of PAMs prevent the system from leaving the safe region of its state space. In this manner, RL empowers the robot to return and smash real balls at 5 m/s and 12 m/s on average, respectively, to a desired landing point. Our setup allows the agent to learn this safety-critical task (i) without safety constraints in the algorithm, (ii) while maximizing the speed of returned balls directly in the reward function, (iii) using a stochastic policy that acts directly on the low-level controls of the real system, (iv) training for thousands of trials, and (v) from scratch without any prior knowledge. Additionally, we present HYSR, a practical hybrid sim-and-real training scheme that avoids playing real balls during training by randomly replaying recorded ball trajectories in simulation and applying the actions to the real robot. This work is the first to (a) fail-safe learn a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls. Videos and datasets are available at muscularTT.embodied.ml.
http://arxiv.org/pdf/2006.05935v1
[ "Dieter Büchler", "Simon Guist", "Roberto Calandra", "Vincent Berenz", "Bernhard Schölkopf", "Jan Peters" ]
2020-06-10T16:43:27Z
2020-06-10T16:43:27Z
2006.05939
Is the Skip Connection Provable to Reform the Neural Network Loss Landscape?
The residual network is now one of the most effective structures in deep learning, which utilizes skip connections to ``guarantee'' that the performance will not get worse. However, the non-convexity of the neural network makes it unclear whether the skip connections do provably improve the learning ability, since the nonlinearity may create many local minima. In some previous works \cite{freeman2016topology}, it is shown that despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number $m$ of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection, and theoretically prove that the skip connection network inherits the good properties of the two-layer network, and that skip connections can help to control the connectedness of the sub-level sets, such that any local minima worse than the global minima of some two-layer ReLU network will be very ``shallow''. The ``depth'' of these local minima is at most $O(m^{(\eta-1)/n})$, where $n$ is the input dimension and $\eta<1$. This provides a theoretical explanation for the effectiveness of the skip connection in deep learning.
http://arxiv.org/pdf/2006.05939v1
[ "Lifu Wang", "Bo Shen", "Ning Zhao", "Zhiyuan Zhang" ]
2020-06-10T16:46:19Z
2020-06-10T16:46:19Z
2006.05976
Composite Logconcave Sampling with a Restricted Gaussian Oracle
We consider sampling from composite densities on $\mathbb{R}^d$ of the form $d\pi(x) \propto \exp(-f(x) - g(x))dx$ for well-conditioned $f$ and convex (but possibly non-smooth) $g$, a family generalizing restrictions to a convex set, through the abstraction of a restricted Gaussian oracle. For $f$ with condition number $\kappa$, our algorithm runs in $O\left(\kappa^2 d \log^2\tfrac{\kappa d}{\epsilon}\right)$ iterations, each querying a gradient of $f$ and a restricted Gaussian oracle, to achieve total variation distance $\epsilon$. The restricted Gaussian oracle, which draws samples from a distribution whose negative log-likelihood sums a quadratic and $g$, has been previously studied and is a natural extension of the proximal oracle used in composite optimization. Our algorithm is conceptually simple and obtains stronger provable guarantees and greater generality than existing methods for composite sampling. We conduct experiments showing our algorithm vastly improves upon the hit-and-run algorithm for sampling the restriction of a (non-diagonal) Gaussian to the positive orthant.
http://arxiv.org/pdf/2006.05976v1
[ "Ruoqi Shen", "Kevin Tian", "Yin Tat Lee" ]
2020-06-10T17:43:55Z
2020-06-10T17:43:55Z
2006.05990
What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study
In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancies between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such ``choices'' in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.
http://arxiv.org/pdf/2006.05990v1
[ "Marcin Andrychowicz", "Anton Raichuk", "Piotr Stańczyk", "Manu Orsini", "Sertan Girgin", "Raphael Marinier", "Léonard Hussenot", "Matthieu Geist", "Olivier Pietquin", "Marcin Michalski", "Sylvain Gelly", "Olivier Bachem" ]
2020-06-10T17:59:03Z
2020-06-10T17:59:03Z
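One concrete example of the kind of low-level "choice" the study above (2006.05990) investigates, sketched under the assumption that per-batch advantage normalization is representative of the implementation details it toggles:

```python
import numpy as np

def normalize_advantages(adv, eps=1e-8):
    """Per-batch advantage normalization, a common on-policy implementation choice."""
    return (adv - adv.mean()) / (adv.std() + eps)
```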
1906.02629
When Does Label Smoothing Help?
The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation and speech recognition. Despite its widespread use, label smoothing is still poorly understood. Here we show empirically that in addition to improving generalization, label smoothing improves model calibration which can significantly improve beam-search. However, we also observe that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective. To explain these observations, we visualize how label smoothing changes the representations learned by the penultimate layer of the network. We show that label smoothing encourages the representations of training examples from the same class to group in tight clusters. This results in loss of information in the logits about resemblances between instances of different classes, which is necessary for distillation, but does not hurt generalization or calibration of the model's predictions.
http://arxiv.org/pdf/1906.02629v3
[ "Rafael Müller", "Simon Kornblith", "Geoffrey Hinton" ]
2020-06-10T18:18:17Z
2019-06-06T15:03:11Z
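A minimal sketch of the label-smoothing operation analyzed in the abstract above (1906.02629): hard targets are mixed with the uniform distribution over labels. The smoothing weight `alpha` is a typical choice, not prescribed by the paper.

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.1):
    """Mix one-hot targets with the uniform distribution over classes."""
    n_classes = one_hot.shape[-1]
    return (1 - alpha) * one_hot + alpha / n_classes

y = np.eye(5)[[2]]       # one-hot target for class 2 of 5
print(smooth_labels(y))  # [[0.02 0.02 0.92 0.02 0.02]]
```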
1910.04938
Regret Analysis of Bandit Problems with Causal Background Knowledge
We study how to learn optimal interventions sequentially given causal information represented as a causal graph along with associated conditional distributions. Causal modeling is useful in real-world problems like online advertisement, where complex causal mechanisms underlie the relationship between interventions and outcomes. We propose two algorithms, causal upper confidence bound (C-UCB) and causal Thompson Sampling (C-TS), that enjoy improved cumulative regret bounds compared with algorithms that do not use causal information. We thus resolve an open problem posed by \cite{lattimore2016causal}. Further, we extend C-UCB and C-TS to the linear bandit setting and propose causal linear UCB (CL-UCB) and causal linear TS (CL-TS) algorithms. These algorithms enjoy a cumulative regret bound that only scales with the feature dimension. Our experiments show the benefit of using causal information. For example, we observe that even with a few hundred iterations, the regret of causal algorithms is less than that of standard algorithms by a factor of three. We also show that under certain causal structures, our algorithms scale better than the standard bandit algorithms as the number of interventions increases.
http://arxiv.org/pdf/1910.04938v3
[ "Yangyi Lu", "Amirhossein Meisami", "Ambuj Tewari", "Zhenyu Yan" ]
2020-06-10T18:31:45Z
2019-10-11T02:00:32Z
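For orientation, a sketch of the standard UCB arm-selection rule that C-UCB (abstract above, 1910.04938) augments with causal side-information; the exploration constant `c` is illustrative.

```python
import numpy as np

def ucb_choose(counts, means, t, c=2.0):
    """Pick the arm maximizing empirical mean plus exploration bonus at round t."""
    # In practice each arm is pulled once first, so counts > 0.
    bonus = np.sqrt(c * np.log(t) / np.maximum(counts, 1))
    return int(np.argmax(means + bonus))
```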
2002.03936
Subclass Distillation
After a large "teacher" neural network has been trained on labeled data, the probabilities that the teacher assigns to incorrect classes reveal a lot of information about the way in which the teacher generalizes. By training a small "student" model to match these probabilities, it is possible to transfer most of the generalization ability of the teacher to the student, often producing a much better small model than directly training the student on the training data. The transfer works best when there are many possible classes because more is then revealed about the function learned by the teacher, but in cases where there are only a few possible classes we show that we can improve the transfer by forcing the teacher to divide each class into many subclasses that it invents during the supervised training. The student is then trained to match the subclass probabilities. For datasets where there are known, natural subclasses we demonstrate that the teacher learns similar subclasses and these improve distillation. For clickthrough datasets where the subclasses are unknown we demonstrate that subclass distillation allows the student to learn faster and better.
http://arxiv.org/pdf/2002.03936v2
[ "Rafael Müller", "Simon Kornblith", "Geoffrey Hinton" ]
2020-06-10T18:32:14Z
2020-02-10T16:45:30Z
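A sketch of the temperature-softened distillation loss that subclass distillation (abstract above, 2002.03936) applies over invented subclasses; `T` is the usual distillation temperature, and the exact loss weighting is an assumption.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```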
1904.02664
Empirical Bayes Regret Minimization
Most bandit algorithm designs are purely theoretical. Therefore, they have strong regret guarantees, but also are often too conservative in practice. In this work, we pioneer the idea of algorithm design by minimizing the empirical Bayes regret, the average regret over problem instances sampled from a known distribution. We focus on a tractable instance of this problem, the confidence interval and posterior width tuning, and propose an efficient algorithm for solving it. The tuning algorithm is analyzed and evaluated in multi-armed, linear, and generalized linear bandits. We report several-fold reductions in Bayes regret for state-of-the-art bandit algorithms, simply by optimizing over a small sample from a distribution.
http://arxiv.org/pdf/1904.02664v4
[ "Chih-Wei Hsu", "Branislav Kveton", "Ofer Meshi", "Martin Mladenov", "Csaba Szepesvari" ]
2020-06-10T18:47:04Z
2019-04-04T17:00:02Z
2006.06033
Learning normalizing flows from Entropy-Kantorovich potentials
We approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which continuous normalizing flows are cast as gradients of scalar potential functions. This formulation allows us to train a dual objective comprised only of the scalar potential functions, and removes the burden of explicitly computing normalizing flows during training. After training, the normalizing flow is easily recovered from the potential functions.
http://arxiv.org/pdf/2006.06033v1
[ "Chris Finlay", "Augusto Gerolin", "Adam M Oberman", "Aram-Alexandre Pooladian" ]
2020-06-10T18:58:26Z
2020-06-10T18:58:26Z
2006.08472
Physics informed deep learning for computational elastodynamics without labeled data
Numerical methods such as the finite element method have flourished in the past decades for modeling solid mechanics problems via solving governing partial differential equations (PDEs). A salient aspect that distinguishes these numerical methods is how they approximate the physical fields of interest. Physics-informed deep learning is a novel approach recently developed for modeling PDE solutions, and shows promise for solving computational mechanics problems without using any labeled data. The philosophy behind it is to approximate the quantity of interest (e.g., PDE solution variables) by a deep neural network (DNN) and embed the physical law to regularize the network. To this end, training the network is equivalent to minimizing a well-designed loss function that contains the PDE residuals and initial/boundary conditions (I/BCs). In this paper, we present a physics-informed neural network (PINN) with mixed-variable output to model elastodynamics problems without resort to labeled data, in which the I/BCs are imposed in a "hard" manner. In particular, both the displacement and stress components are taken as the DNN output, inspired by hybrid finite element analysis, which largely improves the accuracy and trainability of the network. Since the conventional PINN framework augments all the residual loss components in a "soft" manner with Lagrange multipliers, the weakly imposed I/BCs cannot be well satisfied, especially when complex I/BCs are present. To overcome this issue, a composite scheme of DNNs is established based on multiple single DNNs, such that the I/BCs can be satisfied forcibly in a "hard" manner. The proposed PINN framework is demonstrated on several numerical elasticity examples with different I/BCs, including both static and dynamic problems as well as wave propagation in truncated domains. Results show the promise of PINN in the context of computational mechanics applications.
http://arxiv.org/pdf/2006.08472v1
[ "Chengping Rao", "Hao Sun", "Yang Liu" ]
2020-06-10T19:05:08Z
2020-06-10T19:05:08Z
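A generic sketch of the PINN residual idea underlying the abstract above (2006.08472), using the 1D wave equation u_tt = c^2 u_xx as a stand-in for the paper's elastodynamics equations; the network size and the PDE itself are placeholders.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pde_residual(xt, c=1.0):
    """Residual of u_tt - c^2 u_xx at collocation points xt = (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t.sum(), xt, create_graph=True)[0][:, 1:2]
    return u_tt - c ** 2 * u_xx  # driven to zero by the physics loss

loss = pde_residual(torch.rand(128, 2)).pow(2).mean()  # physics term of the loss
```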
2006.06037
On the Maximum Mutual Information Capacity of Neural Architectures
We derive the closed-form expression of the maximum mutual information - the maximum value of $I(X;Z)$ obtainable via training - for a broad family of neural network architectures. The quantity is essential to several branches of machine learning theory and practice. Quantitatively, we show that the maximum mutual information for these families all stem from generalizations of a single catch-all formula. Qualitatively, we show that the maximum mutual information of an architecture is most strongly influenced by the width of the smallest layer of the network - the "information bottleneck" in a different sense of the phrase, and by any statistical invariances captured by the architecture.
http://arxiv.org/pdf/2006.06037v1
[ "Brandon Foggo", "Nanpeng Yu" ]
2020-06-10T19:20:12Z
2020-06-10T19:20:12Z
2006.07137
STONNE: A Detailed Architectural Simulator for Flexible Neural Network Accelerators
The design of specialized architectures for accelerating the inference procedure of Deep Neural Networks (DNNs) is a booming area of research nowadays. First-generation rigid proposals have been rapidly replaced by more advanced flexible accelerator architectures able to efficiently support a variety of layer types and dimensions. As the complexity of the designs grows, it is more and more appealing for researchers to have cycle-accurate simulation tools at their disposal to allow for fast and accurate design-space exploration, and rapid quantification of the efficacy of architectural enhancements during the early stages of a design. To this end, we present STONNE (Simulation TOol of Neural Network Engines), a cycle-accurate, highly-modular and highly-extensible simulation framework that enables end-to-end evaluation of flexible accelerator architectures running complete contemporary DNN models. We use STONNE to model the recently proposed MAERI architecture and show how it can closely approach the performance results of the publicly available BSV-coded MAERI implementation. Then, we conduct a comprehensive evaluation and demonstrate that the folding strategy implemented for MAERI results in very low compute unit utilization (25% on average across 5 DNN models) which in the end translates into poor performance.
http://arxiv.org/pdf/2006.07137v1
[ "Francisco Muñoz-Martínez", "José L. Abellán", "Manuel E. Acacio", "Tushar Krishna" ]
2020-06-10T19:20:52Z
2020-06-10T19:20:52Z
1907.09569
MemNet: Memory-Efficiency Guided Neural Architecture Search with Augment-Trim learning
Recent studies on automatic neural architecture search have demonstrated significant performance, competitive with or even better than hand-crafted neural architectures. However, most existing architecture search methods tend to use residual or parallel structures and concatenation blocks between shallow and deep features to construct large networks. This requires large amounts of memory for storing both weights and feature maps. This is challenging for mobile and embedded devices, since they may not have enough memory to perform inference with the designed large network model. To close this gap, we propose MemNet, an augment-trim learning-based neural network search framework that optimizes not only performance but also memory requirements. Specifically, it employs a memory-consumption-based ranking score which forces an upper bound on memory consumption for navigating the search process. Experimental results show that, compared with state-of-the-art efficient design methods, MemNet can find an architecture that achieves competitive accuracy while saving an average of 24.17% of the total memory needed.
http://arxiv.org/pdf/1907.09569v2
[ "Peiye Liu", "Bo Wu", "Huadong Ma", "Mingoo Seok" ]
2020-06-10T20:12:57Z
2019-07-22T20:49:53Z
2006.06057
Scalable Partial Explainability in Neural Networks via Flexible Activation Functions
Achieving transparency in black-box deep learning algorithms is still an open challenge. The high-dimensional features and decisions given by deep neural networks (NNs) require new algorithms and methods to expose their mechanisms. Current state-of-the-art NN interpretation methods (e.g. saliency maps, DeepLIFT, LIME, etc.) focus more on the direct relationship between NN outputs and inputs than on the NN structure and operations themselves. In current deep NN operations, there is uncertainty over the exact role played by neurons with fixed activation functions. In this paper, we achieve a partially explainable learning model by symbolically explaining the role of activation functions (AFs) under a scalable topology. This is carried out by modeling the AFs as adaptive Gaussian Processes (GPs), which sit within a novel scalable NN topology based on the Kolmogorov-Arnold Superposition Theorem (KST). In this scalable NN architecture, the AFs are generated by GP interpolation between control points and can thus be tuned during the back-propagation procedure via gradient descent. The control points act as the core enabler of both local and global adjustability of the AFs, where the GP interpolation constrains the intrinsic autocorrelation to avoid over-fitting. We show that there exists a trade-off between the NN's expressive power and interpretation complexity under linear KST topology scaling. To demonstrate this, we perform a case study on a binary classification dataset of banknote authentication. By quantitatively and qualitatively investigating the mapping relationship between inputs and outputs, our explainable model can provide interpretation over each of the one-dimensional attributes. These early results suggest that our model has the potential to act as the final interpretation layer for deep neural networks.
http://arxiv.org/pdf/2006.06057v1
[ "Schyler C. Sun", "Chen Li", "Zhuangkun Wei", "Antonios Tsourdos", "Weisi Guo" ]
2020-06-10T20:30:15Z
2020-06-10T20:30:15Z
2006.06059
Joint Training of Variational Auto-Encoder and Latent Energy-Based Model
This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM). The joint training of the VAE and the latent EBM is based on an objective function that consists of three Kullback-Leibler divergences between three joint distributions on the latent vector and the image; the objective function has an elegant symmetric and anti-symmetric form of divergence triangle that seamlessly integrates variational and adversarial learning. In this joint training scheme, the latent EBM serves as a critic of the generator model, while the generator model and the inference model in the VAE serve as the approximate synthesis sampler and inference sampler of the latent EBM. Our experiments show that the joint training greatly improves the synthesis quality of the VAE. It also enables learning of an energy function that is capable of detecting out-of-sample examples for anomaly detection.
http://arxiv.org/pdf/2006.06059v1
[ "Tian Han", "Erik Nijkamp", "Linqi Zhou", "Bo Pang", "Song-Chun Zhu", "Ying Nian Wu" ]
2020-06-10T20:32:25Z
2020-06-10T20:32:25Z
2006.06061
Deterministic Gaussian Averaged Neural Networks
We present a deterministic method to compute the Gaussian average of neural networks used in regression and classification. Our method is based on an equivalence between training with a particular regularized loss, and the expected values of Gaussian averages. We use this equivalence to certify models which perform well on clean data but are not robust to adversarial perturbations. In terms of certified accuracy and adversarial robustness, our method is comparable to known stochastic methods such as randomized smoothing, but requires only a single model evaluation during inference.
http://arxiv.org/pdf/2006.06061v1
[ "Ryan Campbell", "Chris Finlay", "Adam M Oberman" ]
2020-06-10T20:53:31Z
2020-06-10T20:53:31Z
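For contrast with the deterministic method in the abstract above (2006.06061), a sketch of the stochastic baseline it replaces: a Monte Carlo estimate of the Gaussian average of a network's output. `sigma` and the sample count are placeholders.

```python
import torch

def mc_gaussian_average(f, x, sigma=0.25, n_samples=64):
    """Monte Carlo estimate of E_z[f(x + sigma * z)] with z ~ N(0, I)."""
    noise = sigma * torch.randn((n_samples,) + x.shape)
    return f(x + noise).mean(dim=0)  # average prediction under Gaussian noise
```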
2006.09288
Uncovering the Underlying Physics of Degrading System Behavior Through a Deep Neural Network Framework: The Case of Remaining Useful Life Prognosis
Deep learning (DL) has become an essential tool in prognosis and health management (PHM), commonly used as a regression algorithm for the prognosis of a system's behavior. One particular metric of interest is the remaining useful life (RUL) estimated using monitoring sensor data. Most of these deep learning applications treat the algorithms as black-box functions, giving little to no control of the data interpretation. This becomes an issue if the models break the governing laws of physics or other natural sciences when no constraints are imposed. The latest research efforts have focused on applying complex DL models to achieve a low prediction error rather than studying how the models interpret the behavior of the data and the system itself. In this paper, we propose an open-box approach using a deep neural network framework to explore the physics of degradation through partial differential equations (PDEs). The framework has three stages, and it aims to discover a latent variable and corresponding PDE to represent the health state of the system. Models are trained as a supervised regression and designed to output the RUL as well as a latent variable map that can be used and interpreted as the system's health indicator.
http://arxiv.org/pdf/2006.09288v1
[ "Sergio Cofre-Martel", "Enrique Lopez Droguett", "Mohammad Modarres" ]
2020-06-10T21:05:59Z
2020-06-10T21:05:59Z
2006.06071
Affective Movement Generation using Laban Effort and Shape and Hidden Markov Models
Body movements are an important communication medium through which affective states can be discerned. Movements that convey affect can also give machines life-like attributes and help to create a more engaging human-machine interaction. This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling. The LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements. Given a desired motion path on which a target emotion is to be overlaid, the proposed approach searches a labeled dataset in the LMA Effort and Shape space for similar movements to the desired motion path that convey the target emotion. An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that is a modulated version of the desired motion path that conveys the target emotion. The extent of modulation can be varied, trading-off between kinematic and affective constraints in the generated movement. The proposed approach is tested using a full-body movement dataset. The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study. The target emotions were correctly recognized from the generated movements at a rate of 72% using the recognition model. Furthermore, participants in the user study were able to correctly perceive the target emotions from a sample of generated movements, although some cases of confusion were also observed.
http://arxiv.org/pdf/2006.06071v1
[ "Ali Samadani", "Rob Gorbet", "Dana Kulic" ]
2020-06-10T21:24:26Z
2020-06-10T21:24:26Z
2005.02987
DenoiSeg: Joint Denoising and Segmentation
Microscopy image analysis often requires the segmentation of objects, but training data for this task is typically scarce and hard to obtain. Here we propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations. We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations. The reason for the success of our method is that segmentation can profit from denoising, especially when performed jointly within the same network. The network becomes a denoising expert by seeing all available raw data, while co-learning to segment, even if only a few segmentation labels are available. This hypothesis is additionally fueled by our observation that the best segmentation results on high quality (very low noise) raw data are obtained when moderate amounts of synthetic noise are added. This renders the denoising-task non-trivial and unleashes the desired co-learning effect. We believe that DenoiSeg offers a viable way to circumvent the tremendous hunger for high quality training data and effectively enables few-shot learning of dense segmentations.
http://arxiv.org/pdf/2005.02987v2
[ "Tim-Oliver Buchholz", "Mangal Prakash", "Alexander Krull", "Florian Jug" ]
2020-06-10T21:58:18Z
2020-05-06T17:42:54Z
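A minimal sketch of the joint objective described in the DenoiSeg abstract above: a Noise2Void-style masked denoising term combined with a sparsely supervised 3-class segmentation term in one network. All names, shapes, the mask convention, and the mixing weight alpha are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def denoiseg_loss(net, noisy, mask, labels, alpha=0.5):
    # net: maps (B,1,H,W) noisy images to 4 channels; channel 0 is the
    # denoised image, channels 1-3 are logits for the 3 segmentation classes.
    # mask: float {0,1} tensor marking the blind-spot pixels (B,1,H,W).
    # labels: long tensor (B,H,W); -1 where no annotation is available.
    out = net(noisy)
    denoised, seg_logits = out[:, :1], out[:, 1:]
    # Self-supervised denoising: MSE only on the masked (blind-spot) pixels.
    denoise_loss = ((denoised - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    # Supervised 3-class segmentation on the few annotated pixels only.
    per_pixel = F.cross_entropy(seg_logits, labels.clamp(min=0), reduction='none')
    annotated = (labels >= 0).float()
    seg_loss = (per_pixel * annotated).sum() / annotated.sum().clamp(min=1)
    return alpha * denoise_loss + (1 - alpha) * seg_loss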
2002.08320
Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop 2020
The workshop will focus on the application of artificial intelligence to problems in cyber security. The AICS 2020 emphasis will be on human-machine teaming within the context of cyber security problems, specifically exploring collaboration between human operators and AI technologies. The workshop will address applicable areas of AI, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interactions. Further, the workshop will focus on cyber security application areas, with a particular emphasis on the characterization and deployment of human-machine teaming.
http://arxiv.org/pdf/2002.08320v2
[ "Dennis Ross", "Arunesh Sinha", "Diane Staheli", "Bill Streilein" ]
2020-06-10T22:03:20Z
2020-02-07T18:12:00Z
2006.06090
Robustified Multivariate Regression and Classification Using Distributionally Robust Optimization under the Wasserstein Metric
We develop Distributionally Robust Optimization (DRO) formulations for Multivariate Linear Regression (MLR) and Multiclass Logistic Regression (MLG) when both the covariates and responses/labels may be contaminated by outliers. The DRO framework uses a probabilistic ambiguity set defined as a ball of distributions that are close to the empirical distribution of the training set in the sense of the Wasserstein metric. We relax the DRO formulation into a regularized learning problem whose regularizer is a norm of the coefficient matrix. We establish out-of-sample performance guarantees for the solutions to our model, offering insights on the role of the regularizer in controlling the prediction error. Experimental results show that our approach improves the predictive error by 7% -- 37% for MLR, and a metric of robustness by 100% for MLG.
http://arxiv.org/pdf/2006.06090v1
[ "Ruidi Chen", "Ioannis Ch. Paschalidis" ]
2020-06-10T22:16:50Z
2020-06-10T22:16:50Z
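A small sketch of the relaxation described in the abstract above: the Wasserstein DRO formulation for multivariate linear regression is relaxed into empirical risk plus a norm penalty on the coefficient matrix. The Frobenius-norm choice, radius eps, and subgradient-descent solver are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def dro_mlr(X, Y, eps=0.1, lr=1e-2, iters=2000):
    # X: (n, d) covariates, Y: (n, m) responses; B: (d, m) coefficients.
    n, d = X.shape
    B = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ B - Y) / n           # least-squares gradient
        norm = np.linalg.norm(B)               # Frobenius norm of B
        # Subgradient of the DRO-induced regularizer eps * ||B||_F.
        grad += eps * (B / norm if norm > 0 else 0)
        B -= lr * grad
    return B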
2006.06094
Robust Grouped Variable Selection Using Distributionally Robust Optimization
We propose a Distributionally Robust Optimization (DRO) formulation with a Wasserstein-based uncertainty set for selecting grouped variables under perturbations on the data for both linear regression and classification problems. The resulting model offers robustness explanations for Grouped Least Absolute Shrinkage and Selection Operator (GLASSO) algorithms and highlights the connection between robustness and regularization. We prove probabilistic bounds on the out-of-sample loss and the estimation bias, and establish the grouping effect of our estimator, showing that coefficients in the same group converge to the same value as the sample correlation between covariates approaches 1. Based on this result, we propose to use the spectral clustering algorithm with the Gaussian similarity function to perform grouping on the predictors, which makes our approach applicable without knowing the grouping structure a priori. We compare our approach to an array of alternatives and provide extensive numerical results on both synthetic data and a large real dataset of surgery-related medical records, showing that our formulation produces an interpretable and parsimonious model that encourages sparsity at a group level and is able to achieve better prediction and estimation performance in the presence of outliers.
http://arxiv.org/pdf/2006.06094v1
[ "Ruidi Chen", "Ioannis Ch. Paschalidis" ]
2020-06-10T22:32:52Z
2020-06-10T22:32:52Z
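A sketch of the grouping step described in the abstract above: spectral clustering of the predictors with a Gaussian similarity derived from sample correlations, so the group structure need not be known a priori. The exact similarity construction, bandwidth gamma, and number of groups are illustrative assumptions.

import numpy as np
from sklearn.cluster import SpectralClustering

def infer_groups(X, n_groups=3, gamma=1.0):
    # Similarity between predictors based on their sample correlations:
    # highly correlated columns (|corr| near 1) get similarity near 1.
    corr = np.corrcoef(X, rowvar=False)
    affinity = np.exp(-gamma * (1.0 - np.abs(corr)) ** 2)
    sc = SpectralClustering(n_clusters=n_groups, affinity='precomputed')
    return sc.fit_predict(affinity)   # one group label per predictor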
2006.14698
Entanglement-Embedded Recurrent Network Architecture: Tensorized Latent State Propagation and Chaos Forecasting
Chaotic time series forecasting has been far less understood despite its tremendous potential in theory and real-world applications. Traditional statistical/ML methods are inefficient at capturing chaos in nonlinear dynamical systems, especially when the time difference $\Delta t$ between consecutive steps is so large that a trivial, ergodic local minimum would most likely be reached instead. Here, we introduce a new long-short-term-memory (LSTM)-based recurrent architecture by tensorizing the cell-state-to-state propagation therein, keeping the long-term memory feature of LSTM while simultaneously enhancing the learning of short-term nonlinear complexity. We stress that the global minima of chaos can be most efficiently reached by tensorization where all nonlinear terms, up to some polynomial order, are treated explicitly and weighted equally. The efficiency and generality of our architecture are systematically tested and confirmed by theoretical analysis and experimental results. In our design, we have explicitly used two different many-body entanglement structures---matrix product states (MPS) and the multiscale entanglement renormalization ansatz (MERA)---as physics-inspired tensor decomposition techniques, from which we find that MERA generally performs better than MPS, hence conjecturing that the learnability of chaos is determined not only by the number of free parameters but also by the tensor complexity---recognized as how entanglement entropy scales with varying matricization of the tensor.
http://arxiv.org/pdf/2006.14698v1
[ "Xiangyi Meng", "Tong Yang" ]
2020-06-10T23:03:33Z
2020-06-10T23:03:33Z
2006.06134
Kalman Filter Based Multiple Person Head Tracking
For multi-target tracking, target representation plays a crucial role in performance. State-of-the-art approaches rely on deep learning-based visual representations that give optimal performance at the cost of high computational complexity. In this paper, we come up with a simple yet effective target representation for human tracking. Our inspiration comes from the fact that the human body goes through severe deformation and inter/intra occlusion over the passage of time. So, instead of tracking the whole body, a relatively rigid organ is selected for tracking the human over an extended period of time. Hence, we followed the tracking-by-detection paradigm and generated the target hypotheses of only the spatial locations of heads in every frame. After the localization of head locations, a Kalman filter with a constant velocity motion model is instantiated for each target that follows the temporal evolution of the targets in the scene. For associating the targets in consecutive frames, combinatorial optimization is used that associates the corresponding targets in a greedy fashion. Qualitative results are evaluated on four challenging video surveillance datasets and promising results have been achieved.
http://arxiv.org/pdf/2006.06134v1
[ "Mohib Ullah", "Maqsood Mahmud", "Habib Ullah", "Kashif Ahmad", "Ali Shariq Imran", "Faouzi Alaya Cheikh" ]
2020-06-11T00:54:45Z
2020-06-11T00:54:45Z
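A compact sketch of the tracking loop described in the abstract above: one constant-velocity Kalman filter per detected head, with greedy nearest-neighbour association between consecutive frames. The noise covariances, unit time step, and plain Euclidean gating are illustrative assumptions.

import numpy as np

DT = 1.0
F = np.array([[1, 0, DT, 0], [0, 1, 0, DT], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
Q, R = np.eye(4) * 1e-2, np.eye(2) * 1.0            # assumed noise levels

class HeadTrack:
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])  # [px, py, vx, vy]
        self.P = np.eye(4)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        return self.x[:2]

    def update(self, z):
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def greedy_associate(tracks, detections):
    # Pair each predicted head with its closest unused detection.
    preds = [t.predict() for t in tracks]
    used, pairs = set(), []
    for i, p in enumerate(preds):
        dists = [(np.linalg.norm(p - np.asarray(d, float)), j)
                 for j, d in enumerate(detections) if j not in used]
        if dists:
            _, j = min(dists)
            used.add(j)
            pairs.append((i, j))
            tracks[i].update(np.asarray(detections[j], float))
    return pairs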
2006.06135
Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation
We consider the question of learning the $Q$-function in a sample efficient manner for reinforcement learning with continuous state and action spaces under a generative model. If the $Q$-function is Lipschitz continuous, then the minimal sample complexity for estimating an $\epsilon$-optimal $Q$-function is known to scale as $\Omega(\frac{1}{\epsilon^{d_1+d_2+2}})$ per classical non-parametric learning theory, where $d_1$ and $d_2$ denote the dimensions of the state and action spaces respectively. The $Q$-function, when viewed as a kernel, induces a Hilbert-Schmidt operator and hence possesses a square-summable spectrum. This motivates us to consider a parametric class of $Q$-functions parameterized by their "rank" $r$, which contains all Lipschitz $Q$-functions as $r \to \infty$. As our key contribution, we develop a simple, iterative learning algorithm that finds an $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold. Thus, this provides an exponential improvement in sample complexity. To enable our result, we develop a novel Matrix Estimation algorithm that faithfully estimates an unknown low-rank matrix in the $\ell_\infty$ sense even in the presence of arbitrary bounded noise, which might be of interest in its own right. Empirical results on several stochastic control tasks confirm the efficacy of our "low-rank" algorithms.
http://arxiv.org/pdf/2006.06135v1
[ "Devavrat Shah", "Dogyoon Song", "Zhi Xu", "Yuzhe Yang" ]
2020-06-11T00:55:35Z
2020-06-11T00:55:35Z
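The paper above develops its own Matrix Estimation algorithm with $\ell_\infty$ guarantees; as a rough illustration of the low-rank step only, the sketch below projects a noisily sampled Q matrix (states x actions) onto rank r with a generic truncated SVD. It is a stand-in under stated assumptions, not the authors' estimator.

import numpy as np

def lowrank_q_estimate(Q_samples, observed_mask, r):
    # Q_samples: (S, A) matrix of sampled Q values; observed_mask: boolean
    # (S, A) marking which (state, action) pairs were actually queried.
    # Fill unobserved entries with the observed mean, then project to rank r.
    fill = Q_samples[observed_mask].mean()
    Q = np.where(observed_mask, Q_samples, fill)
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]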
2006.06136
Weighted Lasso Estimates for Sparse Logistic Regression: Non-asymptotic Properties with Measurement Error
For high-dimensional systems where classification performance is of interest, $\ell_{1}$-penalized logistic regression has become important and popular. However, Lasso estimates can be problematic when the penalties on different coefficients are all the same and not related to the data. We propose two types of weighted Lasso estimates, with weights depending on the covariates via the McDiarmid inequality. Given sample size $n$ and dimension of covariates $p$, the finite sample behavior of our proposed methods with a diverging number of predictors is illustrated by non-asymptotic oracle inequalities such as the $\ell_{1}$-estimation error and squared prediction error of the unknown parameters. We compare the performance of our methods with former weighted estimates on simulated data, then apply these methods to real data analysis.
http://arxiv.org/pdf/2006.06136v1
[ "Huamei Huang", "Yujing Gao", "Huiming Zhang", "Bo Li" ]
2020-06-11T00:58:14Z
2020-06-11T00:58:14Z
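A sketch of a covariate-dependent weighted Lasso for logistic regression, fitted by proximal gradient descent, as a concrete instance of the idea described above. The particular weight rule (inverse column scale) is an illustrative assumption, not the paper's McDiarmid-based construction.

import numpy as np

def soft_threshold(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def weighted_lasso_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    # X: (n, p) covariates, y: (n,) binary labels in {0, 1}.
    n, p = X.shape
    w = 1.0 / (X.std(axis=0) + 1e-8)   # per-coefficient penalty weights
    beta = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (prob - y) / n
        # Proximal step: soft-threshold each coefficient by its own weight.
        beta = soft_threshold(beta - lr * grad, lr * lam * w)
    return beta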
2006.05043
Learning Navigation Costs from Demonstration with Semantic Observations
This paper focuses on inverse reinforcement learning (IRL) for autonomous robot navigation using semantic observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic class probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the representation parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. The error is optimized using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. We show that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of cars, sidewalks and road lanes.
http://arxiv.org/pdf/2006.05043v2
[ "Tianyu Wang", "Vikas Dhiman", "Nikolay Atanasov" ]
2020-06-11T01:17:56Z
2020-06-09T04:35:57Z
2003.00617
Approximate Cross-validation: Guarantees for Model Assessment and Selection
Cross-validation (CV) is a popular approach for assessing and selecting predictive models. However, when the number of folds is large, CV suffers from the need to repeatedly refit a learning procedure on a large number of training datasets. Recent work in empirical risk minimization (ERM) approximates the expensive refitting with a single Newton step warm-started from the full training set optimizer. While this can greatly reduce runtime, several open questions remain, including whether these approximations lead to faithful model selection and whether they are suitable for non-smooth objectives. We address these questions with three main contributions: (i) we provide uniform non-asymptotic, deterministic model assessment guarantees for approximate CV; (ii) we show that (roughly) the same conditions also guarantee model selection performance comparable to CV; (iii) we provide a proximal Newton extension of the approximate CV framework for non-smooth prediction problems and develop improved assessment guarantees for problems such as $\ell_1$-regularized ERM.
http://arxiv.org/pdf/2003.00617v2
[ "Ashia Wilson", "Maximilian Kasy", "Lester Mackey" ]
2020-06-11T02:03:47Z
2020-03-02T00:30:00Z
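A sketch of the single-Newton-step approximation described above, specialized to leave-one-out CV for ridge-regularized logistic regression: each held-out solution is approximated by one step from the full-data optimizer theta_hat, reusing the full-data Hessian. The ridge value and this particular correction form are illustrative assumptions.

import numpy as np

def approx_loo(theta_hat, X, y, lam=1e-2):
    # theta_hat: minimizer of the full regularized logistic loss.
    n, p = X.shape
    mu = 1.0 / (1.0 + np.exp(-X @ theta_hat))
    W = mu * (1.0 - mu)
    H = X.T @ (W[:, None] * X) / n + lam * np.eye(p)   # full-data Hessian
    H_inv = np.linalg.inv(H)
    thetas = []
    for i in range(n):
        g_i = (mu[i] - y[i]) * X[i]        # gradient of the dropped point's loss
        # Removing point i shifts the total gradient by -g_i / n;
        # one warm-started Newton step corrects theta_hat accordingly.
        thetas.append(theta_hat + H_inv @ g_i / n)
    return np.array(thetas)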
2004.14547
DSAC: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement Learning
In this paper, we present a new reinforcement learning (RL) algorithm called Distributional Soft Actor Critic (DSAC), which exploits the distributional information of accumulated rewards to achieve better performance. Seamlessly integrating SAC (which uses entropy to encourage exploration) with a principled distributional view of the underlying objective, DSAC takes into consideration the randomness in both action and rewards, and beats the state-of-the-art baselines in several continuous control benchmarks. Moreover, with the distributional information of rewards, we propose a unified framework for risk-sensitive learning, one that goes beyond maximizing only expected accumulated rewards. Under this framework we discuss three specific risk-related metrics: percentile, mean-variance and distorted expectation. Our extensive experiments demonstrate that with distribution modeling in RL, the agent performs better for both risk-averse and risk-seeking control tasks.
http://arxiv.org/pdf/2004.14547v2
[ "Xiaoteng Ma", "Li Xia", "Zhengyuan Zhou", "Jun Yang", "Qianchuan Zhao" ]
2020-06-11T02:08:35Z
2020-04-30T02:23:15Z
2006.04518
More Information Supervised Probabilistic Deep Face Embedding Learning
Research using margin-based comparison losses demonstrates the effectiveness of penalizing the distance between face features and their corresponding class centers. Despite their popularity and excellent performance, these methods do not explicitly encourage generic embedding learning for an open set recognition problem. In this paper, we analyse the margin-based softmax loss from a probability view. With this perspective, we propose two general principles: 1) monotonic decreasing and 2) margin probability penalty, for designing new margin loss functions. Unlike methods optimized with a single comparison metric, we provide a new perspective that treats open set face recognition as a problem of information transmission, where the generalization capability of the face embedding is gained with more clean information. An auto-encoder architecture called Linear-Auto-TS-Encoder (LATSE) is proposed to corroborate this finding. Extensive experiments on several benchmarks demonstrate that LATSE helps the face embedding gain more generalization capability and boosts single model performance, with an open training dataset, to more than $99\%$ on the MegaFace test.
http://arxiv.org/pdf/2006.04518v2
[ "Ying Huang", "Shangfeng Qiu", "Wenwei Zhang", "Xianghui Luo", "Jinzhuo Wang" ]
2020-06-11T02:25:56Z
2020-06-08T12:33:32Z
2006.06156
Image Deconvolution via Noise-Tolerant Self-Supervised Inversion
We propose a general framework for solving inverse problems in the presence of noise that requires no signal prior, no noise estimate, and no clean training data. We only require that the forward model be available and that the noise be statistically independent across measurement dimensions. We build upon the theory of $\mathcal{J}$-invariant functions (Batson & Royer 2019, arXiv:1901.11365) and show how self-supervised denoising \emph{à la} Noise2Self is a special case of learning a noise-tolerant pseudo-inverse of the identity. We demonstrate our approach by showing how a convolutional neural network can be taught in a self-supervised manner to deconvolve images and surpass in image quality classical inversion schemes such as Lucy-Richardson deconvolution.
http://arxiv.org/pdf/2006.06156v1
[ "Hirofumi Kobayashi", "Ahmet Can Solak", "Joshua Batson", "Loic A. Royer" ]
2020-06-11T02:27:23Z
2020-06-11T02:27:23Z
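A minimal sketch of the self-supervised inversion idea described above: train a network f so that blur(f(masked y)) predicts y on the held-out (masked) pixels, using only the known forward model and no clean data. The kernel shape, masking rate, and local-mean infill are illustrative assumptions.

import torch
import torch.nn.functional as F

def blur(x, kernel):
    # Known forward model A: convolution with a fixed PSF.
    # x: (B,1,H,W); kernel: (1,1,k,k) with odd k.
    pad = kernel.shape[-1] // 2
    return F.conv2d(x, kernel, padding=pad)

def j_invariant_inversion_loss(net, y, kernel, mask_rate=0.05):
    mask = (torch.rand_like(y) < mask_rate).float()
    # Replace masked pixels with local means so the net cannot copy them.
    y_in = y * (1 - mask) + F.avg_pool2d(y, 3, 1, 1) * mask
    pred = blur(net(y_in), kernel)        # push the estimate through A
    # Score only on the masked pixels (the J-invariance trick).
    return ((pred - y) ** 2 * mask).sum() / mask.sum().clamp(min=1)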
1910.02743
Neural network integral representations with the ReLU activation function
In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function. We assume that the outer weights admit a finite $L_1$-norm with respect to Lebesgue measure on the sphere. For univariate target functions we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least $L_1$-norm neural network representation of a given function.
http://arxiv.org/pdf/1910.02743v3
[ "Armenak Petrosyan", "Anton Dereventsov", "Clayton Webster" ]
2020-06-11T03:13:22Z
2019-10-07T12:00:37Z
2006.06173
Borrowing From the Future: Addressing Double Sampling in Model-free Control
In model-free reinforcement learning, the temporal difference method and its variants become unstable when combined with nonlinear function approximations. Bellman residual minimization with stochastic gradient descent (SGD) is more stable, but it suffers from the double sampling problem: given the current state, two independent samples for the next state are required, but often only one sample is available. Recently, the authors of [Zhu et al, 2020] introduced the borrowing from the future (BFF) algorithm to address this issue for the prediction problem. The main idea is to borrow extra randomness from the future to approximately re-sample the next state when the underlying dynamics of the problem are sufficiently smooth. This paper extends the BFF algorithm to action-value function based model-free control. We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions. We confirm our theoretical findings with numerical simulations.
http://arxiv.org/pdf/2006.06173v1
[ "Yuhua Zhu", "Zach Izzo", "Lexing Ying" ]
2020-06-11T03:50:37Z
2020-06-11T03:50:37Z
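A sketch of the borrowing-from-the-future (BFF) construction described above, for Bellman residual minimization with a differentiable value function V(s; theta): the unavailable second draw of the next state is surrogated by shifting the observed transition forward, s_tilde = s_t + (s_{t+2} - s_{t+1}). All function names and the SGD form are illustrative assumptions.

import numpy as np

def bff_sgd_step(V, grad_V, theta, s0, s1, s2, r, gamma=0.99, lr=1e-2):
    # Residual uses the genuinely observed next state s1.
    delta = r + gamma * V(s1, theta) - V(s0, theta)
    # "Borrow" the next increment to fake an independent re-sample of s1.
    s1_tilde = s0 + (s2 - s1)
    # Gradient factor is evaluated on the surrogate next state, so the
    # product below mimics an unbiased double-sample gradient.
    g = gamma * grad_V(s1_tilde, theta) - grad_V(s0, theta)
    return theta - lr * delta * g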
2006.06183
G5: A Universal GRAPH-BERT for Graph-to-Graph Transfer and Apocalypse Learning
The recent GRAPH-BERT model introduces a new approach to learning graph representations based merely on the attention mechanism. GRAPH-BERT provides an opportunity for transferring pre-trained models and learned graph representations across different tasks within the same graph dataset. In this paper, we further investigate the graph-to-graph transfer of a universal GRAPH-BERT for graph representation learning across different graph datasets; our proposed model is referred to as G5 for simplicity. Many challenges exist in learning G5 to adapt the distinct input and output configurations for each graph data source, as well as the differences in information distributions. G5 introduces a pluggable model architecture: (a) each data source is pre-processed with a unique input representation learning component; (b) each output application task also has a specific functional component; and (c) all such diverse input and output components are conjoined with a universal GRAPH-BERT core component via an input size unification layer and an output representation fusion layer, respectively. The G5 model removes the last obstacle for cross-graph representation learning and transfer. For graph sources with very sparse training data, the G5 model pre-trained on other graphs can still be utilized for representation learning with necessary fine-tuning. What's more, the architecture of G5 also allows us to learn a supervised functional classifier for data sources without any training data at all; this problem is named the Apocalypse Learning task in this paper. Two different label reasoning strategies, i.e., Cross-Source Classification Consistency Maximization (CCCM) and Cross-Source Dynamic Routing (CDR), are introduced in this paper to address the problem.
http://arxiv.org/pdf/2006.06183v1
[ "Jiawei Zhang" ]
2020-06-11T04:19:18Z
2020-06-11T04:19:18Z
2006.06185
JIT-Masker: Efficient Online Distillation for Background Matting
We design a real-time portrait matting pipeline for everyday use, particularly for "virtual backgrounds" in video conferences. Existing segmentation and matting methods prioritize accuracy and quality over throughput and efficiency, and our pipeline enables trading off a controllable amount of accuracy for better throughput by leveraging online distillation on the input video stream. We construct our own dataset of simulated video calls in various scenarios, and show that our approach delivers a 5x speedup over a saliency detection based pipeline in a non-GPU accelerated setting while delivering higher quality results. We demonstrate that an online distillation approach can feasibly work as part of a general, consumer level product as a "virtual background" tool. Our public implementation is at https://github.com/josephch405/jit-masker.
http://arxiv.org/pdf/2006.06185v1
[ "Jo Chuang", "Qian Dong" ]
2020-06-11T04:28:09Z
2020-06-11T04:28:09Z
2006.14002
Bi-Level Graph Neural Networks for Drug-Drug Interaction Prediction
We introduce Bi-GNN for modeling biological link prediction tasks such as drug-drug interaction (DDI) and protein-protein interaction (PPI). Taking drug-drug interaction as an example, existing methods using machine learning either only utilize the link structure between drugs without using the graph representation of each drug molecule, or only leverage the individual drug compound structures without using graph structure for the higher-level DDI graph. The key idea of our method is to fundamentally view the data as a bi-level graph, where the highest level graph represents the interaction between biological entities (interaction graph), and each biological entity itself is further expanded to its intrinsic graph representation (representation graphs), where the graph is either flat like a drug compound or hierarchical like a protein with amino acid level graph, secondary structure, tertiary structure, etc. Our model not only allows the usage of information from both the high-level interaction graph and the low-level representation graphs, but also offers a baseline for future research opportunities to address the bi-level nature of the data.
http://arxiv.org/pdf/2006.14002v1
[ "Yunsheng Bai", "Ken Gu", "Yizhou Sun", "Wei Wang" ]
2020-06-11T04:49:26Z
2020-06-11T04:49:26Z
2002.03519
Self-Attentive Associative Memory
Heretofore, neural networks with external memory have been restricted to a single memory with lossy representations of memory interactions. A rich representation of relationships between memory pieces calls for a high-order and segregated relational memory. In this paper, we propose to separate the storage of individual experiences (item memory) from their occurring relationships (relational memory). The idea is implemented through a novel Self-attentive Associative Memory (SAM) operator. Founded upon the outer product, SAM forms a set of associative memories that represent the hypothetical high-order relationships between arbitrary pairs of memory elements, through which a relational memory is constructed from an item memory. The two memories are wired into a single sequential model capable of both memorization and relational reasoning. We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks, from challenging synthetic problems to practical testbeds such as geometry, graph, reinforcement learning, and question answering.
http://arxiv.org/pdf/2002.03519v3
[ "Hung Le", "Truyen Tran", "Svetha Venkatesh" ]
2020-06-11T04:56:52Z
2020-02-10T03:27:48Z
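A toy reading of the outer-product construction described above: an item memory M induces pairwise associations via outer products of attended item views, which are pooled into a relational memory. The attention form, pooling, and dimensions are illustrative assumptions, not the published SAM operator.

import torch

def toy_associative_memory(M):
    # M: (n, d) item memory, one row per stored item.
    A = torch.softmax(M @ M.t() / M.shape[1] ** 0.5, dim=-1)  # item-item attention
    left, right = A @ M, M                # two views of the items
    # Outer products form hypothetical pairwise associations: (n, d, d).
    R = torch.einsum('ni,nj->nij', left, right)
    return R.sum(dim=0)                   # pooled relational memory (d, d)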
2006.06196
An Edge Information and Mask Shrinking Based Image Inpainting Approach
In the image inpainting task, the ability to repair both high-frequency and low-frequency information in the missing regions has a substantial influence on the quality of the restored image. However, existing inpainting methods usually fail to consider high-frequency and low-frequency information simultaneously. To solve this problem, this paper proposes an edge information and mask shrinking based image inpainting approach, which consists of two models. The first model is an edge generation model used to generate complete edge information from the damaged image, and the second model is an image completion model used to fix the missing regions with the generated edge information and the valid contents of the damaged image. The mask shrinking strategy is employed in the image completion model to track the areas to be repaired. The proposed approach is evaluated qualitatively and quantitatively on the Places2 dataset. The results show our approach outperforms state-of-the-art methods.
http://arxiv.org/pdf/2006.06196v1
[ "Huali Xu", "Xiangdong Su", "Meng Wang", "Xiang Hao", "Guanglai Gao" ]
2020-06-11T05:15:52Z
2020-06-11T05:15:52Z
1905.09882
Scale Invariant Power Iteration
Power iteration has been generalized to solve many interesting problems in machine learning and statistics. Despite its striking success, theoretical understanding of when and how such an algorithm enjoys good convergence properties is limited. In this work, we introduce a new class of optimization problems called scale invariant problems and prove that they can be efficiently solved by scale invariant power iteration (SCI-PI) with a generalized convergence guarantee of power iteration. By deriving that a stationary point is an eigenvector of the Hessian evaluated at the point, we show that scale invariant problems indeed resemble the leading eigenvector problem near a local optimum. Also, based on a novel reformulation, we geometrically derive SCI-PI, which has the general form of power iteration. The convergence analysis shows that SCI-PI attains local linear convergence with a rate proportional to the top two eigenvalues of the Hessian at the optimum. Moreover, we discuss some extended settings of scale invariant problems and provide similar convergence results for them. In numerical experiments, we introduce applications to independent component analysis, Gaussian mixtures, and non-negative matrix factorization. Experimental results demonstrate that SCI-PI is competitive with state-of-the-art benchmark algorithms and often yields better solutions.
http://arxiv.org/pdf/1905.09882v2
[ "Cheolmin Kim", "Youngseok Kim", "Diego Klabjan" ]
2020-06-11T05:25:50Z
2019-05-23T19:24:52Z
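A minimal sketch of the iteration described above: SCI-PI repeats x <- grad f(x) / ||grad f(x)||, which for f(x) = x^T A x / 2 reduces to classical power iteration. The leading-eigenvector example problem is ours, chosen only to make the reduction concrete.

import numpy as np

def sci_pi(grad_f, x0, iters=200):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_f(x)
        x = g / np.linalg.norm(g)   # normalized gradient step
    return x

# Example: leading eigenvector of a PSD matrix via SCI-PI.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T                          # make it symmetric PSD
v = sci_pi(lambda x: A @ x, rng.standard_normal(5))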
2006.06218
Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time
Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called "reservoirs." To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. These proposed methods are analyzed based on information processing capacity, which is a performance measure of RC proposed by Dambre et al. (2012). In addition, we evaluate the effectiveness of the proposed methods on time-series prediction tasks: the generalized Henon-map and NARMA. On these tasks, we found that the proposed methods were able to reduce the size of the reservoir up to one tenth without a substantial increase in regression error. Because the applications of the proposed methods are not limited to a specific network structure of the reservoir, the proposed methods could further improve the energy efficiency of RC-based systems, such as FPGAs and photonic systems.
http://arxiv.org/abs/2006.06218v1
[ "Yusuke Sakemi", "Kai Morino", "Timothée Leleu", "Kazuyuki Aihara" ]
2020-06-11T06:11:03Z
2020-06-11T06:11:03Z
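A sketch of the model-size reduction idea described above: a small echo state reservoir whose current state is concatenated with its state k steps in the past before the linear readout, so a reservoir of half the size can feed the readout the same number of features. The sizes, spectral-radius scaling, and delay k are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, k = 50, 3                                    # reservoir size, time shift
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W))) # echo-state spectral scaling
w_in = rng.standard_normal(N)

def reservoir_features(u):
    # Drive the reservoir with the scalar input sequence u.
    states, x = [], np.zeros(N)
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        states.append(x.copy())
    S = np.array(states)
    # Readout sees [x_t, x_{t-k}]: past states reused as extra features.
    return np.hstack([S[k:], S[:-k]])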
1709.10152
A Simple and Fast Algorithm for L1-norm Kernel PCA
We present an algorithm for L1-norm kernel PCA and provide a convergence analysis for it. While an optimal solution of L2-norm kernel PCA can be obtained through matrix decomposition, finding that of L1-norm kernel PCA is not trivial due to its non-convexity and non-smoothness. We provide a novel reformulation through which an equivalent, geometrically interpretable problem is obtained. Based on the geometric interpretation of the reformulated problem, we present a fixed-point type algorithm that iteratively computes a binary weight for each observation. As the algorithm requires only inner products of data vectors, it is computationally efficient and the kernel trick is applicable. In the convergence analysis, we show that the algorithm converges to a local optimal solution in a finite number of steps. Moreover, we provide a rate of convergence analysis, which has never been done for any L1-norm PCA algorithm, proving that the sequence of objective values converges at a linear rate. In numerical experiments, we show that the algorithm is robust in the presence of entry-wise perturbations and computationally scalable, especially in a large-scale setting. Lastly, we introduce an application to outlier detection where the model based on the proposed algorithm outperforms the benchmark algorithms.
http://arxiv.org/abs/1709.10152v2
[ "Cheolmin Kim", "Diego Klabjan" ]
2020-06-11T06:14:27Z
2017-09-28T20:03:14Z
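A sketch of the fixed-point iteration described above for L1-norm kernel PCA: each step recomputes a binary weight (a sign) per observation from inner products only, so the kernel trick applies throughout. The uniform initialization is an illustrative assumption.

import numpy as np

def l1_kernel_pca_direction(K, iters=100):
    # K: (n, n) centered kernel matrix; c: expansion coefficients of the
    # direction w = sum_i c_i phi(x_i) in feature space.
    n = K.shape[0]
    c = np.ones(n) / n
    for _ in range(iters):
        s = np.sign(K @ c)              # binary weight per observation
        s[s == 0] = 1.0
        c = s / np.sqrt(s @ K @ s)      # renormalize: ||w|| = 1 in feature space
    return c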
1901.05049
Bonseyes AI Pipeline -- bringing AI to you. End-to-end integration of data, algorithms and deployment tools
The next generation of embedded Information and Communication Technology (ICT) systems are collaborative systems able to perform autonomous tasks. The remarkable expansion of the embedded ICT market, together with the rise and breakthroughs of Artificial Intelligence (AI), have put the focus on the Edge as it stands as one of the keys for the next technological revolution: the seamless integration of AI in our daily life. However, training and deployment of custom AI solutions on embedded devices require a fine-grained integration of data, algorithms, and tools to achieve high accuracy. Such integration requires a high level of expertise that becomes a real bottleneck for small and medium enterprises wanting to deploy AI solutions on the Edge, which, ultimately, slows down the adoption of AI in daily-life applications. In this work, we present a modular AI pipeline as an integrating framework to bring data, algorithms, and deployment tools together. By removing the integration barriers and lowering the required expertise, we can interconnect the different stages of tools and provide a modular end-to-end development of AI products for embedded devices. Our AI pipeline consists of four modular main steps: i) data ingestion, ii) model training, iii) deployment optimization and, iv) the IoT hub integration. To show the effectiveness of our pipeline, we provide examples of different AI applications during each of the steps. Besides, we integrate our deployment framework, LPDNN, into the AI pipeline and present its lightweight architecture and deployment capabilities for embedded devices. Finally, we demonstrate the results of the AI pipeline by showing the deployment of several AI applications such as keyword spotting, image classification and object detection on a set of well-known embedded platforms, where LPDNN consistently outperforms all other popular deployment frameworks.
http://arxiv.org/abs/1901.05049v3
[ "Miguel de Prado", "Jing Su", "Rabia Saeed", "Lorenzo Keller", "Noelia Vallez", "Andrew Anderson", "David Gregg", "Luca Benini", "Tim Llewellynn", "Nabil Ouerhani", "Rozenn Dahyot and", "Nuria Pazos" ]
2020-06-11T06:41:02Z
2019-01-15T21:27:28Z
1906.08469
Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets
Following detection and tracking of traffic actors, prediction of their future motion is the next critical component of a self-driving vehicle (SDV) technology, allowing the SDV to operate safely and efficiently in its environment. This is particularly important when it comes to vulnerable road users (VRUs), such as pedestrians and bicyclists. These actors need to be handled with special care due to an increased risk of injury, as well as the fact that their behavior is less predictable than that of motorized actors. To address this issue, in the current study we present a deep learning-based method for predicting VRU movement, where we rasterize high-definition maps and actor's surroundings into a bird's-eye view image used as an input to deep convolutional networks. In addition, we propose a fast architecture suitable for real-time inference, and perform an ablation study of various rasterization approaches to find the optimal choice for accurate prediction. The results strongly indicate benefits of using the proposed approach for motion prediction of VRUs, both in terms of accuracy and latency.
http://arxiv.org/pdf/1906.08469v2
[ "Fang-Chieh Chou", "Tsung-Han Lin", "Henggang Cui", "Vladan Radosavljevic", "Thi Nguyen", "Tzu-Kuo Huang", "Matthew Niedoba", "Jeff Schneider", "Nemanja Djuric" ]
2020-06-11T06:54:12Z
2019-06-20T07:16:16Z
2006.05806
Bandit Samplers for Training Graph Neural Networks
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs). However, due to the intractable computation of the optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the embeddings of the neighbors or learned weights involved in the optimal sampling distribution are changing during training and not known a priori, but only partially observed when sampled, thus making the derivation of optimal variance-reduced samplers non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversarial bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploitation). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We show the efficiency and effectiveness of our approach on multiple datasets.
http://arxiv.org/pdf/2006.05806v2
[ "Ziqi Liu", "Zhengwei Wu", "Zhiqiang Zhang", "Jun Zhou", "Shuang Yang", "Le Song", "Yuan Qi" ]
2020-06-11T07:39:31Z
2020-06-10T12:48:37Z
2006.06240
A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection
Linear Programming (LP) is an important decoding technique for binary linear codes. However, the advantages of LP decoding, such as a low error floor and strong theoretical guarantees, come at the cost of high computational complexity and poor performance in the low signal-to-noise ratio (SNR) region. In this letter, we adopt the penalty dual decomposition (PDD) framework and propose a PDD algorithm to address the fundamental polytope based maximum likelihood (ML) decoding problem. Furthermore, we propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm, i.e., check polytope projection (CPP). Inspired by the fact that a multi-layer perceptron (MLP) can theoretically approximate any nonlinear mapping function, we present a specially designed neural CPP (NCPP) algorithm to decrease the decoding latency. Simulation results demonstrate the effectiveness of the proposed algorithms.
http://arxiv.org/pdf/2006.06240v1
[ "Yi Wei", "Ming-Min Zhao", "Min-Jian Zhao", "Ming Lei" ]
2020-06-11T07:57:15Z
2020-06-11T07:57:15Z
2005.06420
The Unstoppable Rise of Computational Linguistics in Deep Learning
In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding.
http://arxiv.org/pdf/2005.06420v3
[ "James Henderson" ]
2020-06-11T07:58:28Z
2020-05-13T16:51:02Z
2004.07427
Asymmetrical Vertical Federated Learning
Federated learning is a distributed machine learning method that aims to preserve the privacy of sample features and labels. In a federated learning system, ID-based sample alignment approaches are usually applied, with few efforts made on the protection of ID privacy. In real-life applications, however, the confidentiality of sample IDs, which are the strongest row identifiers, is also drawing much attention from many participants. To relax these concerns about ID privacy, this paper formally proposes the notion of asymmetrical vertical federated learning and illustrates a way to protect sample IDs. The standard private set intersection protocol is adapted to achieve the asymmetrical ID alignment phase in an asymmetrical vertical federated learning system. Correspondingly, a Pohlig-Hellman realization of the adapted protocol is provided. This paper also presents a genuine with dummy approach to achieving asymmetrical federated model training. To illustrate its application, a federated logistic regression algorithm is provided as an example. Experiments are also made to validate the feasibility of this approach.
http://arxiv.org/pdf/2004.07427v3
[ "Yang Liu", "Xiong Zhang", "Libin Wang" ]
2020-06-11T08:20:06Z
2020-04-16T02:53:48Z
2006.05163
ConfNet2Seq: Full Length Answer Generation from Spoken Questions
Conversational and task-oriented dialogue systems aim to interact with the user using natural responses through multi-modal interfaces, such as text or speech. These desired responses are in the form of full-length natural answers generated over facts retrieved from a knowledge source. While the task of generating natural answers to questions from an answer span has been widely studied, there has been little research on natural sentence generation over spoken content. We propose a novel system to generate full length natural language answers from spoken questions and factoid answers. The spoken sequence is compactly represented as a confusion network extracted from a pre-trained Automatic Speech Recognizer. To the best of our knowledge, this is the first attempt towards generating full-length natural answers from a graph input (confusion network). We release a large-scale dataset of 259,788 samples of spoken questions, their factoid answers and corresponding full-length textual answers. Following our proposed approach, we achieve performance comparable with the best ASR hypothesis.
http://arxiv.org/abs/2006.05163v2
[ "Vaishali Pal", "Manish Shrivastava", "Laurent Besacier" ]
2020-06-11T08:39:41Z
2020-06-09T10:04:49Z
1909.00116
Statistical Inferences of Linear Forms for Noisy Matrix Completion
We introduce a flexible framework for making inferences about general linear forms of a large matrix based on noisy observations of a subset of its entries. In particular, under mild regularity conditions, we develop a universal procedure to construct asymptotically normal estimators of its linear forms through double-sample debiasing and low-rank projection whenever an entry-wise consistent estimator of the matrix is available. These estimators allow us to subsequently construct confidence intervals for and test hypotheses about the linear forms. Our proposal was motivated by a careful perturbation analysis of the empirical singular spaces under the noisy matrix completion model which might be of independent interest. The practical merits of our proposed inference procedure are demonstrated on both simulated and real-world data examples.
http://arxiv.org/pdf/1909.00116v2
[ "Dong Xia", "Ming Yuan" ]
2020-06-11T08:49:01Z
2019-08-31T03:30:07Z
2006.06261
XiaoiceSing: A High-Quality and Integrated Singing Voice Synthesis System
This paper presents XiaoiceSing, a high-quality singing voice synthesis system which employs an integrated network for spectrum, F0 and duration modeling. We follow the main architecture of FastSpeech while proposing some singing-specific designs: 1) besides the phoneme ID and position encoding, features from the musical score (e.g. note pitch and length) are also added; 2) to attenuate off-key issues, we add a residual connection in F0 prediction; 3) in addition to the duration loss of each phoneme, the duration of all the phonemes in a musical note is accumulated to calculate a syllable duration loss for rhythm enhancement. Experiment results show that XiaoiceSing outperforms the baseline system of convolutional neural networks by 1.44 MOS on sound quality, 1.18 on pronunciation accuracy and 1.38 on naturalness respectively. In two A/B tests, the proposed F0 and duration modeling methods achieve 97.3% and 84.3% preference rates over the baseline respectively, which demonstrates the overwhelming advantages of XiaoiceSing.
http://arxiv.org/pdf/2006.06261v1
[ "Peiling Lu", "Jie Wu", "Jian Luan", "Xu Tan", "Li Zhou" ]
2020-06-11T09:09:59Z
2020-06-11T09:09:59Z
2006.06277
W-net: Simultaneous segmentation of multi-anatomical retinal structures using a multi-task deep neural network
Segmentation of multiple anatomical structures is of great importance in medical image analysis. In this study, we propose a $\mathcal{W}$-net to simultaneously segment both the optic disc (OD) and the exudates in retinal images based on the multi-task learning (MTL) scheme. We introduce a class-balanced loss and a multi-task weighted loss to alleviate the class imbalance problem and to improve the robustness and generalization property of the $\mathcal{W}$-net. We demonstrate the effectiveness of our approach by applying five-fold cross-validation experiments on two public datasets, e_ophtha_EX and DiaRetDb1. We achieve F1-scores of 94.76% and 95.73% for OD segmentation, and 92.80% and 94.14% for exudates segmentation. To further prove the generalization property of the proposed method, we apply the trained model on the DRIONS-DB dataset for OD segmentation and on the MESSIDOR dataset for exudate segmentation. Our results demonstrate that by choosing the optimal weights of each task, the MTL based $\mathcal{W}$-net outperforms separate models trained individually on each task. Code and pre-trained models will be available at: \url{https://github.com/FundusResearch/MTL_for_OD_and_exudates.git}.
http://arxiv.org/pdf/2006.06277v1
[ "Hongwei Zhao", "Chengtao Peng", "Lei Liu", "Bin Li" ]
2020-06-11T09:33:33Z
2020-06-11T09:33:33Z
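A sketch of the weighted multi-task objective described above: a class-balanced binary cross-entropy per task (OD and exudates), combined through a task weight. The inverse-pixel-frequency weighting rule and the task weight w_od are illustrative assumptions, not the paper's exact losses.

import torch
import torch.nn.functional as F

def wnet_loss(od_logits, ex_logits, od_target, ex_target, w_od=0.5):
    def balanced_ce(logits, target):
        # Weight positive pixels by their rarity to counter class imbalance.
        pos = target.float().mean().clamp(1e-6, 1 - 1e-6)
        weight = torch.where(target.bool(), (1 - pos) / pos,
                             torch.ones_like(target, dtype=torch.float))
        return F.binary_cross_entropy_with_logits(
            logits, target.float(), weight=weight)
    # Multi-task weighted combination of the two segmentation tasks.
    return w_od * balanced_ce(od_logits, od_target) + \
           (1 - w_od) * balanced_ce(ex_logits, ex_target)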
2006.06282
A Novel Meta-Heuristic Optimization Algorithm Inspired by the Spread of Viruses
According to the no-free-lunch theorem, there is no single meta-heuristic algorithm that can optimally solve all optimization problems. This motivates many researchers to continuously develop new optimization algorithms. In this paper, a novel nature-inspired meta-heuristic optimization algorithm called virus spread optimization (VSO) is proposed. VSO loosely mimics the spread of viruses among hosts, and can be effectively applied to solving many challenging and continuous optimization problems. We devise a new representation scheme and viral operations that are radically different from previously proposed virus-based optimization algorithms. First, the viral RNA of each host in VSO denotes a potential solution for which different viral operations help to diversify the searching strategies in order to largely enhance the solution quality. In addition, an imported infection mechanism, inheriting the searched optima from another colony, is introduced to help avoid premature convergence of any potential solution when solving complex problems. VSO has an excellent capability to conduct adaptive neighborhood searches around the discovered optima for achieving better solutions. Furthermore, with a flexible infection mechanism, VSO can quickly escape from local optima. To clearly demonstrate both its effectiveness and efficiency, VSO is critically evaluated on a series of well-known benchmark functions. Moreover, its applicability is validated through two real-world examples, including financial portfolio optimization and the optimization of hyper-parameters of support vector machines for classification problems. The results show that VSO has attained superior performance in terms of solution fitness, convergence rate, scalability, reliability, and flexibility when compared to the results of conventional as well as state-of-the-art meta-heuristic optimization algorithms.
http://arxiv.org/pdf/2006.06282v1
[ "Zhixi Li", "Vincent Tam" ]
2020-06-11T09:35:28Z
2020-06-11T09:35:28Z
2006.05779
Self-Supervised Reinforcement Learning for Recommender Systems
In session-based or sequential recommendation, it is important to consider a number of factors like long-term user engagement and multiple types of user-item interactions such as clicks, purchases, etc. The current state-of-the-art supervised approaches fail to model them appropriately. Casting the sequential recommendation task as a reinforcement learning (RL) problem is a promising direction. A major component of RL approaches is to train the agent through interactions with the environment. However, it is often problematic to train a recommender in an on-line fashion due to the requirement to expose users to irrelevant recommendations. As a result, learning the policy from logged implicit feedback is of vital importance, which is challenging due to the pure off-policy setting and lack of negative rewards (feedback). In this paper, we propose self-supervised reinforcement learning for sequential recommendation tasks. Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL. The RL part acts as a regularizer to drive the supervised layer to focus on specific rewards (e.g., recommending items which may lead to purchases rather than clicks) while the self-supervised layer with cross-entropy loss provides strong gradient signals for parameter updates. Based on such an approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC). We integrate the proposed frameworks with four state-of-the-art recommendation models. Experimental results on two real-world datasets demonstrate the effectiveness of our approach.
http://arxiv.org/pdf/2006.05779v2
[ "Xin Xin", "Alexandros Karatzoglou", "Ioannis Arapakis", "Joemon M. Jose" ]
2020-06-11T09:36:45Z
2020-06-10T11:18:57Z
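A sketch of the two-headed SQN idea described above: one backbone state embedding feeds both a cross-entropy (self-supervised) head over the next item and a Q-learning head whose one-step targets use the logged rewards. The head interfaces, reward shape, and discount are illustrative assumptions.

import torch
import torch.nn.functional as F

def sqn_loss(state_emb, next_state_emb, ce_head, q_head, next_item, reward,
             gamma=0.5):
    # state_emb, next_state_emb: (B, h) backbone outputs for s_t, s_{t+1}.
    # next_item: (B,) long ids of the logged next items; reward: (B,) floats.
    logits = ce_head(state_emb)                    # self-supervised layer
    ce = F.cross_entropy(logits, next_item)
    q = q_head(state_emb)                          # RL layer: (B, n_items)
    with torch.no_grad():                          # one-step Q-learning target
        target = reward + gamma * q_head(next_state_emb).max(dim=1).values
    q_sa = q.gather(1, next_item.unsqueeze(1)).squeeze(1)
    return ce + F.mse_loss(q_sa, target)           # joint objective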
2004.10963
Metric-Learning-Assisted Domain Adaptation
Domain alignment (DA) has been widely used in unsupervised domain adaptation. Many existing DA methods assume that a low source risk, together with the alignment of distributions of source and target, means a low target risk. In this paper, we show that this does not always hold. We thus propose a novel metric-learning-assisted domain adaptation (MLA-DA) method, which employs a novel triplet loss for helping better feature alignment. We explore the relationship between the second largest probability of a target sample's prediction and its distance to the decision boundary. Based on the relationship, we propose a novel mechanism to adaptively adjust the margin in the triplet loss according to target predictions. Experimental results show that the use of proposed triplet loss can achieve clearly better results. We also demonstrate the performance improvement of MLA-DA on all four standard benchmarks compared with the state-of-the-art unsupervised domain adaptation methods. Furthermore, MLA-DA shows stable performance in robust experiments.
http://arxiv.org/pdf/2004.10963v3
[ "Yueming Yin", "Zhen Yang", "Haifeng Hu", "Xiaofu Wu" ]
2020-06-11T09:41:08Z
2020-04-23T04:20:02Z
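A sketch of the adaptive-margin triplet loss described in the MLA-DA abstract above: the per-sample margin grows with the second-largest class probability of the target prediction, used here as a proxy for proximity to the decision boundary. The scaling constant and the exact margin rule are illustrative assumptions.

import torch
import torch.nn.functional as F

def adaptive_triplet_loss(anchor, positive, negative, logits, scale=1.0):
    # logits: (B, C) classifier outputs for the target samples.
    p = torch.softmax(logits, dim=1)
    second = p.topk(2, dim=1).values[:, 1]   # second-largest probability
    margin = scale * second                   # adaptive per-sample margin
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()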
2006.06293
Multiplicative noise and heavy tails in stochastic optimization
Although stochastic optimization is central to modern machine learning, the precise mechanisms underlying its success, and in particular, the precise role of the stochasticity, still remain unclear. Modelling stochastic optimization algorithms as discrete random recurrence relations, we show that multiplicative noise, as it commonly arises due to variance in local rates of convergence, results in heavy-tailed stationary behaviour in the parameters. A detailed analysis is conducted for SGD applied to a simple linear regression problem, followed by theoretical results for a much larger class of models (including non-linear and non-convex) and optimizers (including momentum, Adam, and stochastic Newton), demonstrating that our qualitative results hold much more generally. In each case, we describe dependence on key factors, including step size, batch size, and data variability, all of which exhibit similar qualitative behavior to recent empirical results on state-of-the-art neural network models from computer vision and natural language processing. Furthermore, we empirically demonstrate how multiplicative noise and heavy-tailed structure improve capacity for basin hopping and exploration of non-convex loss surfaces, over commonly-considered stochastic dynamics with only additive noise and light-tailed structure.
http://arxiv.org/pdf/2006.06293v1
[ "Liam Hodgkinson", "Michael W. Mahoney" ]
2020-06-11T09:58:01Z
2020-06-11T09:58:01Z
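A quick simulation of the mechanism described above: the random linear recurrence x_{k+1} = a_k x_k + b_k (multiplicative plus additive noise) develops a heavy-tailed stationary distribution when E[log a_k] < 0 but a_k > 1 occurs with positive probability. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(100_000)                              # many independent chains
for _ in range(2_000):
    a = np.exp(rng.normal(-0.1, 0.5, x.shape))     # contractive on average
    b = rng.normal(0.0, 1.0, x.shape)              # additive noise
    x = a * x + b
# Heavy tails: extreme quantiles far exceed what a Gaussian would predict.
print(np.quantile(np.abs(x), [0.5, 0.99, 0.9999]))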
2006.06323
Surveys without Questions: A Reinforcement Learning Approach
The 'old world' instrument, the survey, remains a tool of choice for firms to obtain ratings of the satisfaction and experience that customers realize while interacting online with firms. While avenues for surveys have evolved from emails and links to pop-ups while browsing, the deficiencies persist. These include: reliance on the ratings of very few respondents to infer about all customers' online interactions; failing to capture a customer's interactions over time, since the rating is a one-time snapshot; and an inability to tie customers' ratings back to specific interactions, because the ratings provided relate to all interactions. To overcome these deficiencies we extract proxy ratings from clickstream data, typically collected for every customer's online interactions, by developing an approach based on Reinforcement Learning (RL). We introduce a new way to interpret values generated by the value function of RL as proxy ratings. Our approach does not need any survey data for training. Yet, on validation against actual survey data, proxy ratings yield reasonable performance results. Additionally, we offer a new way to draw insights from values of the value function, which allows associating specific interactions with their proxy ratings. We introduce two new metrics to represent ratings - one customer-level and the other aggregate-level, for click actions across customers. Both are defined around the proportion of all pairwise, successive actions that show an increase in proxy ratings. The intuitive customer-level metric enables gauging the dynamics of ratings over time and is a better predictor of purchase than customer ratings from surveys. The aggregate-level metric allows pinpointing actions that help or hurt experience. In sum, proxy ratings computed unobtrusively from clickstream, for every action, for each customer, and for every session can offer an interpretable and more insightful alternative to surveys.
http://arxiv.org/abs/2006.06323v1
[ "Atanu R Sinha", "Deepali Jain", "Nikhil Sheoran", "Sopan Khosla", "Reshmi Sasidharan" ]
2020-06-11T10:41:07Z
2020-06-11T10:41:07Z
2001.03415
Multi-Agent Interactions Modeling with Correlated Policies
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}.
http://arxiv.org/pdf/2001.03415v3
[ "Minghuan Liu", "Ming Zhou", "Weinan Zhang", "Yuzheng Zhuang", "Jun Wang", "Wulong Liu", "Yong Yu" ]
2020-06-11T11:22:24Z
2020-01-04T17:31:53Z
2006.06346
Latent Transformations for Discrete-Data Normalising Flows
Normalising flows (NFs) for discrete data are challenging because parameterising bijective transformations of discrete variables requires predicting discrete/integer parameters. Having a neural network architecture predict discrete parameters requires a non-differentiable activation function (e.g., the step function), which precludes gradient-based learning. To circumvent this non-differentiability, previous work has employed biased proxy gradients, such as the straight-through estimator. We present an unbiased alternative where, rather than deterministically parameterising one transformation, we predict a distribution over latent transformations. With stochastic transformations, the marginal likelihood of the data is differentiable and gradient-based learning is possible via score function estimation. To test the viability of discrete-data NFs we investigate performance on binary MNIST. We observe great challenges with both deterministic proxy gradients and unbiased score function estimation. Whereas the former often fails to learn even a shallow transformation, the variance of the latter could not be sufficiently controlled to admit deeper NFs.
http://arxiv.org/pdf/2006.06346v1
[ "Rob Hesselink", "Wilker Aziz" ]
2020-06-11T11:41:28Z
2020-06-11T11:41:28Z
2003.11958
StrokeCoder: Path-Based Image Generation from Single Examples using Transformers
This paper demonstrates how a Transformer Neural Network can be used to learn a Generative Model from a single path-based example image. We further show how a data set can be generated from the example image and how the model can be used to generate a large set of deviated images, which still represent the original image's style and concept.
http://arxiv.org/pdf/2003.11958v2
[ "Sabine Wieluch", "Friedhelm Schwenker" ]
2020-06-11T11:51:53Z
2020-03-26T14:55:16Z
2006.06385
TensorFlow with user friendly Graphical Framework for object detection API
TensorFlow is an open-source framework for deep learning dataflow and contains application programming interfaces (APIs) for voice analysis, natural language processing, and computer vision. In particular, the TensorFlow object detection API in the computer vision field has been widely applied in agriculture, engineering, and medicine, but the barrier to entry remains high for amateurs and beginners in the information technology (IT) field because the framework is operated through a command-line interface (CLI) and code. Therefore, this work aims to develop a user-friendly Graphical Framework for the object detection API on TensorFlow, called the TensorFlow Graphical Framework (TF-GraF). TF-GraF provides independent virtual environments according to user accounts on the server side and, additionally, execution of data preprocessing, training, and evaluation without the CLI on the client side. Furthermore, hyperparameter setting, real-time observation of the training process, object visualization of test images, and metric evaluations on test data can also be operated via TF-GraF. In particular, TF-GraF supports flexible model selection among SSD, Faster-RCNN, RFCN, and Mask-RCNN, including convolutional neural networks (Inceptions and ResNets), through a GUI environment. Consequently, TF-GraF allows anyone, even without any previous knowledge of deep learning frameworks, to design, train and deploy machine intelligence models without coding. Since TF-GraF takes care of setting and configuration, it allows anyone to use deep learning technology for their projects without spending time installing complex software and environments.
http://arxiv.org/pdf/2006.06385v1
[ "Heemoon Yoon", "Sang-Hee Lee", "Mira Park" ]
2020-06-11T13:00:02Z
2020-06-11T13:00:02Z
2006.06392
Interpreting CNN for Low Complexity Learned Sub-pixel Motion Compensation in Video Coding
Deep learning has shown great potential in image and video compression tasks. However, it brings bit savings at the cost of significant increases in coding complexity, which limits its potential for implementation within practical applications. In this paper, a novel neural network-based tool is presented which improves the interpolation of reference samples needed for fractional-precision motion compensation. Contrary to previous efforts, the proposed approach focuses on complexity reduction achieved by interpreting the interpolation filters learned by the networks. When the approach is implemented in the Versatile Video Coding (VVC) test model, up to 4.5% BD-rate saving for individual sequences is achieved compared with the baseline VVC, while the complexity of the learned interpolation is significantly reduced compared to applying the full neural network.
http://arxiv.org/abs/2006.06392v1
[ "Luka Murn", "Saverio Blasi", "Alan F. Smeaton", "Noel E. O'Connor", "Marta Mrak" ]
2020-06-11T13:10:20Z
2020-06-11T13:10:20Z
2006.06418
On mistakes we made in prior Computational Psychiatry Data driven approach projects and how they jeopardize translation of those findings in clinical practice
After comparing the performance of seven machine learning models on depression detection tasks to show that the choice of features is essential, we compare our methods and results with the published work of other researchers. Finally, we summarize best practices so that this classification approach can be translated to clinical practice with high accuracy and better acceptance.
http://arxiv.org/pdf/2006.06418v1
[ "Milena Čukić Radenković", "David Pokrajac", "Victoria Lopez" ]
2020-06-11T13:30:24Z
2020-06-11T13:30:24Z
2006.06443
Convolutional neural networks compression with low rank and sparse tensor decompositions
Convolutional neural networks show outstanding results in a variety of computer vision tasks. However, neural network architecture design usually faces a trade-off between model performance and computational/memory complexity. For some real-world applications, it is crucial to develop models that are fast and light enough to run on edge systems and mobile devices, yet many modern architectures that demonstrate good performance do not satisfy inference-time and storage requirements. Thus arises the problem of neural network compression: obtaining a smaller and faster model that is on par with the initial one. In this work, we consider a neural network compression method based on tensor decompositions. Namely, we propose to approximate the convolutional layer weights with a tensor that can be represented as a sum of low-rank and sparse components. The motivation for such an approximation is the assumption that the low-rank and sparse terms eliminate two different types of redundancy and thus yield a better compression rate. An efficient CPU implementation of the proposed method has been developed. Our algorithm demonstrates up to 3.5x CPU layer speedup and 11x layer size reduction when compressing the ResNet50 architecture for the image classification task.
http://arxiv.org/pdf/2006.06443v1
[ "Pavel Kaloshin" ]
2020-06-11T13:53:18Z
2020-06-11T13:53:18Z
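To make the decomposition idea above concrete, here is a minimal numpy sketch that approximates a flattened convolution weight as a sum of a low-rank and a sparse matrix by alternating a truncated SVD with hard thresholding (a GoDec-style heuristic). The rank, sparsity level, and alternating scheme are illustrative assumptions; the paper works with tensor decompositions and an optimized CPU implementation.

```python
import numpy as np

# W ~= L + S with rank(L) <= r and S mostly zero: alternate a truncated SVD
# (projection onto low-rank matrices) with hard thresholding (projection onto
# k-sparse matrices).
def lowrank_sparse(W, rank=8, sparsity=0.05, iters=10):
    S = np.zeros_like(W)
    k = int(sparsity * W.size)              # number of nonzeros kept in S
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = W - L                           # residual modeled as sparse
        thresh = np.partition(np.abs(R).ravel(), -k)[-k]
        S = R * (np.abs(R) >= thresh)       # keep the k largest residuals
    return L, S

# Example: a 3x3 conv with 64 input and 128 output channels, flattened.
W = np.random.randn(128, 64 * 3 * 3).astype(np.float32)
L, S = lowrank_sparse(W)
print(np.linalg.norm(W - L - S) / np.linalg.norm(W))  # relative error
```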
2004.02594
Data Manipulation: Towards Effective Instance Learning for Neural Dialogue Generation via Learning to Augment and Reweight
Current state-of-the-art neural dialogue models learn from human conversations following the data-driven paradigm. As such, a reliable training corpus is the crux of building a robust and well-behaved dialogue model. However, due to the open-ended nature of human conversations, the quality of user-generated training data varies greatly, and effective training samples are typically insufficient while noisy samples frequently appear. This impedes the learning of those data-driven neural dialogue models. Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples. In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously. In particular, the data manipulation model selectively augments the training samples and assigns an importance weight to each instance to reform the training data. Note that the proposed data manipulation framework is fully data-driven and learnable. It not only manipulates training samples to optimize the dialogue generation model, but also learns to improve its manipulation skills through gradient descent with validation samples. Extensive experiments show that our framework can improve the dialogue generation performance with respect to various automatic evaluation metrics and human judgments.
http://arxiv.org/pdf/2004.02594v5
[ "Hengyi Cai", "Hongshen Chen", "Yonghao Song", "Cheng Zhang", "Xiaofang Zhao", "Dawei Yin" ]
2020-06-11T14:01:55Z
2020-04-06T12:14:09Z
2006.05609
Learning With Differential Privacy
The leakage of data can have severe consequences at the personal level if the data contains sensitive information. Common prevention methods such as encryption-decryption, endpoint protection, and intrusion detection systems remain prone to leakage. Differential privacy comes to the rescue with a proper promise of protection against leakage: it uses a randomized response technique at the time of data collection, which promises strong privacy with good utility. Differential privacy allows one to access the forest of the data by describing patterns of groups without disclosing any individual trees. The current adoption of differential privacy by leading tech companies and academia motivates us to explore the topic in detail. We discuss the different aspects of differential privacy, its application to privacy protection and information leakage, a comparison of current research approaches in this field, its utility in the real world, and the associated trade-offs.
http://arxiv.org/pdf/2006.05609v2
[ "Poushali Sengupta", "Sudipta Paul", "Subhankar Mishra" ]
2020-06-11T14:11:44Z
2020-06-10T02:04:13Z
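The randomized response technique mentioned in the abstract above is easy to demonstrate. The sketch below uses the classic coin-flip mechanism for a binary attribute, which is ln(3)-differentially private, and shows how the population frequency is recovered by inverting the known bias; the survey setup is purely illustrative.

```python
import random

# Classic randomized response: each respondent flips a fair coin; on heads
# they answer truthfully, on tails they answer uniformly at random. The
# ratio P(yes|true)/P(yes|false) = 0.75/0.25 = 3, hence epsilon = ln(3).
def randomized_response(truth: bool) -> bool:
    if random.random() < 0.5:          # heads: answer honestly
        return truth
    return random.random() < 0.5       # tails: answer at random

def estimate_frequency(answers):
    # E[yes] = 0.5 * p_true + 0.25, so invert the bias.
    p_yes = sum(answers) / len(answers)
    return 2 * p_yes - 0.5

data = [random.random() < 0.3 for _ in range(100_000)]   # 30% true rate
noisy = [randomized_response(t) for t in data]
print(estimate_frequency(noisy))   # ~0.30 despite per-respondent privacy
```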
2006.06465
DNF-Net: A Neural Architecture for Tabular Data
A challenging open question in deep learning is how to handle tabular data. Unlike domains such as image and natural language processing, where deep architectures prevail, there is still no widely accepted neural architecture that dominates tabular data. As a step toward bridging this gap, we present DNF-Net, a novel generic architecture whose inductive bias elicits models whose structure corresponds to logical Boolean formulas in disjunctive normal form (DNF) over affine soft-threshold decision terms. In addition, DNF-Net promotes localized decisions that are taken over small subsets of the features. We present an extensive empirical study showing that DNF-Nets significantly and consistently outperform fully connected networks (FCNs) on tabular data. With relatively few hyperparameters, DNF-Nets open the door to practical end-to-end handling of tabular data using neural networks. We present ablation studies that justify the design choices of DNF-Net, including its three inductive bias elements, namely Boolean formulation, locality, and feature selection.
http://arxiv.org/pdf/2006.06465v1
[ "Ami Abutbul", "Gal Elidan", "Liran Katzir", "Ran El-Yaniv" ]
2020-06-11T14:21:45Z
2020-06-11T14:21:45Z
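One way to realize "disjunctions over soft conjunctions of affine threshold terms" is sketched below with tanh-based soft AND/OR gates and a random literal-to-conjunction mask standing in for learned feature/literal selection. This is an illustrative reading of the abstract, not the authors' exact DNF-Net formulation.

```python
import torch
import torch.nn as nn

# Soft literals are affine thresholds tanh(Wx + b); a conjunction saturates
# towards +1 only when all its literals are near +1 (shifted-sum soft AND);
# the output is a soft OR over conjunctions.
class SoftDNF(nn.Module):
    def __init__(self, in_dim, n_literals, n_conjunctions):
        super().__init__()
        self.literals = nn.Linear(in_dim, n_literals)
        # Fixed random literal assignment, a stand-in for learned selection.
        self.register_buffer(
            "mask", (torch.rand(n_conjunctions, n_literals) < 0.3).float())

    def forward(self, x):
        L = torch.tanh(self.literals(x))              # literals in (-1, 1)
        fan_in = self.mask.sum(dim=1)                 # literals per conjunction
        conj = torch.tanh(L @ self.mask.t() - fan_in + 1.5)        # soft AND
        return torch.tanh(conj.sum(dim=1) + conj.shape[1] - 1.5)   # soft OR

block = SoftDNF(in_dim=10, n_literals=32, n_conjunctions=8)
out = block(torch.randn(4, 10))   # values in (-1, 1); +1 ~ formula satisfied
```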
2006.06467
Learning Halfspaces with Tsybakov Noise
We study the efficient PAC learnability of halfspaces in the presence of Tsybakov noise. In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary. This noise model significantly generalizes the Massart noise model by allowing the flipping probabilities to be arbitrarily close to $1/2$ for a fraction of the samples. Our main result is the first non-trivial PAC learning algorithm for this problem under a broad family of structured distributions -- satisfying certain concentration and (anti-)anti-concentration properties -- including log-concave distributions. Specifically, we give an algorithm that achieves misclassification error $epsilon$ with respect to the true halfspace, with quasi-polynomial runtime dependence on $1/epsilon$. The only previous upper bound for this problem -- even for the special case of log-concave distributions -- was doubly exponential in $1/epsilon$ (and follows via the naive reduction to agnostic learning). Our approach relies on a novel computationally efficient procedure to certify whether a candidate solution is near-optimal, based on semi-definite programming. We use this certificate procedure as a black box and turn it into an efficient learning algorithm by searching over the space of halfspaces via online convex optimization.
http://arxiv.org/pdf/2006.06467v1
[ "Ilias Diakonikolas", "Vasilis Kontonis", "Christos Tzamos", "Nikos Zarifis" ]
2020-06-11T14:25:02Z
2020-06-11T14:25:02Z
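For intuition about the noise model above, the sketch below generates halfspace labels whose flip probability approaches 1/2 near the decision boundary and decays with the margin. The exponential-in-margin choice of eta(x) is one illustrative instance; in the actual model an adversary may pick any eta(x) satisfying the Tsybakov tail condition.

```python
import numpy as np

# Labels for sign(<w, x>) under a margin-dependent flip probability eta(x)
# that is ~1/2 at the boundary and decays away from it. Gaussian data is a
# simple log-concave marginal.
rng = np.random.default_rng(0)
d, n = 5, 10_000
w = rng.normal(size=d); w /= np.linalg.norm(w)

X = rng.normal(size=(n, d))
margin = np.abs(X @ w)
y_clean = np.sign(X @ w)

eta = 0.5 * np.exp(-margin)            # ~1/2 at the boundary, decaying away
flip = rng.random(n) < eta
y = np.where(flip, -y_clean, y_clean)

print(f"average flip probability:   {eta.mean():.3f}")
print(f"fraction of labels flipped: {flip.mean():.3f}")
```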
1910.01526
Gated Linear Networks
This paper presents a new family of backpropagation-free neural architectures, Gated Linear Networks (GLNs). What distinguishes GLNs from contemporary neural networks is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target, forgoing the ability to learn feature representations in favor of rapid online learning. Individual neurons can model nonlinear functions via the use of data-dependent gating in conjunction with online convex optimization. We show that this architecture gives rise to universal learning capabilities in the limit, with effective model capacity increasing as a function of network size in a manner comparable with deep ReLU networks. Furthermore, we demonstrate that the GLN learning mechanism possesses extraordinary resilience to catastrophic forgetting, performing comparably to an MLP with dropout and Elastic Weight Consolidation on standard benchmarks. These desirable theoretical and empirical properties position GLNs as a complementary technique to contemporary offline deep learning methods.
http://arxiv.org/pdf/1910.01526v2
[ "Joel Veness", "Tor Lattimore", "David Budden", "Avishkar Bhoopchand", "Christopher Mattern", "Agnieszka Grabska-Barwinska", "Eren Sezener", "Jianan Wang", "Peter Toth", "Simon Schmitt", "Marcus Hutter" ]
2020-06-11T14:34:55Z
2019-09-30T18:02:26Z
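A single neuron of the kind the abstract describes can be sketched as follows: it geometrically mixes input probabilities with weights chosen by a data-dependent gate (here, signs against a few fixed random hyperplanes), and each weight set is trained online on the log loss of its own prediction. Sizes, the gating choice, and the learning rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def logit(p): return np.log(p / (1 - p))

class GLNNeuron:
    def __init__(self, n_inputs, side_dim, n_hyperplanes=3, lr=0.05):
        self.hyper = np.random.randn(n_hyperplanes, side_dim)  # fixed gates
        self.W = np.ones((2 ** n_hyperplanes, n_inputs)) / n_inputs
        self.lr = lr

    def context(self, z):
        # Gate index: which side of each hyperplane the side info z falls on.
        bits = (self.hyper @ z > 0).astype(int)
        return int("".join(map(str, bits)), 2)

    def predict(self, p_in, z):
        self.c = self.context(z)
        self.x = logit(np.clip(p_in, 1e-4, 1 - 1e-4))  # geometric mixing
        return sigmoid(self.W[self.c] @ self.x)

    def update(self, p_out, target):
        # d(logloss)/dW = (p_out - target) * x for geometric mixing.
        self.W[self.c] -= self.lr * (p_out - target) * self.x

# Online usage: predict, observe the target, update the active weight set.
neuron = GLNNeuron(n_inputs=4, side_dim=10)
z, p_in = np.random.randn(10), np.random.rand(4)
p = neuron.predict(p_in, z)
neuron.update(p, target=1.0)
```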
2003.02278
Foreground model recognition through Neural Networks for CMB B-mode observations
In this work we present a Neural Network (NN) algorithm for the identification of the appropriate parametrization of diffuse polarized Galactic emissions in the context of Cosmic Microwave Background (CMB) $B$-mode multi-frequency observations. In particular, we have focused our analysis on the low frequency foregrounds relevant for polarization observations: namely Galactic Synchrotron and Anomalous Microwave Emission (AME). We have implemented and tested our approach on a set of simulated maps corresponding to the frequency coverage and sensitivity of future satellite and low frequency ground based probes. The NN efficiency in recognizing the right parametrization of foreground emission in different sky regions reaches an accuracy of about $90%$. We have compared this performance with the $chi^{2}$ information following parametric foreground estimation using multi-frequency fitting, and quantified the gain provided by an NN approach. Our results show the relevance of model recognition in CMB $B$-mode observations, and highlight the value of dedicated procedures for this purpose.
http://arxiv.org/abs/2003.02278v2
[ "Farida Farsian", "Nicoletta Krachmalnicoff", "Carlo Baccigalupi" ]
2020-06-11T14:42:51Z
2020-03-04T19:00:02Z
2004.05803
Adversarial Likelihood-Free Inference on Black-Box Generator
Generative Adversarial Networks (GANs) can be viewed as implicit estimators of a data distribution, and this perspective motivates using the adversarial concept for true input parameter estimation of black-box generators. While previous work on likelihood-free inference introduces an implicit proposal distribution on the generator input, this paper analyzes theoretical limitations of the proposal distribution approach. On top of that, we introduce a new algorithm, Adversarial Likelihood-Free Inference (ALFI), to mitigate the analyzed limitations, so that ALFI is able to find the posterior distribution over the input parameters of black-box generative models. We experimented with ALFI on diverse simulation models as well as pre-trained statistical models, and found that ALFI achieves the best parameter estimation accuracy under a limited simulation budget.
http://arxiv.org/pdf/2004.05803v2
[ "Dongjun Kim", "Weonyoung Joo", "Seungjae Shin", "Kyungwoo Song", "Il-Chul Moon" ]
2020-06-11T14:50:27Z
2020-04-13T07:37:56Z
2006.06493
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks
In this work, we develop efficient disruptions of black-box image translation deepfake generation systems. We are the first to demonstrate black-box deepfake generation disruption by presenting image translation formulations of attacks initially proposed for classification models. Nevertheless, a naive adaptation of classification black-box attacks results in a prohibitive number of queries for image translation systems in the real world. We present a frustratingly simple yet highly effective algorithm, Leaking Universal Perturbations (LUP), that significantly reduces the number of queries needed to attack an image. LUP consists of two phases: (1) a short leaking phase where we attack the network using traditional black-box attacks and gather information on successful attacks on a small dataset, and (2) an exploitation phase where we leverage said information to subsequently attack the network with improved efficiency. Our attack reduces the total number of queries necessary to attack GANimation and StarGAN by 30%.
http://arxiv.org/pdf/2006.06493v1
[ "Nataniel Ruiz", "Sarah Adel Bargal", "Stan Sclaroff" ]
2020-06-11T15:02:27Z
2020-06-11T15:02:27Z
1812.11183
Reproducible evaluation of diffusion MRI features for automatic classification of patients with Alzheimer's disease
Diffusion MRI is the modality of choice to study alterations of white matter. In past years, various works have used diffusion MRI for automatic classification of AD. However, classification performance obtained with different approaches is difficult to compare, and these studies are also difficult to reproduce. In the present paper, we first extend a previously proposed framework to diffusion MRI data for AD classification. Specifically, we add conversion of diffusion MRI ADNI data into the BIDS standard and pipelines for diffusion MRI preprocessing and feature extraction. We then apply the framework to compare different components. First, feature selection (FS) has a positive impact on classification results: the highest balanced accuracy (BA) improved from 0.76 to 0.82 for the task CN vs AD. Second, voxel-wise features generally give better performance than regional features. Fractional anisotropy (FA) and mean diffusivity (MD) provided comparable results for voxel-wise features. Moreover, we observe that the poor performance obtained in tasks involving MCI was potentially caused by the small data samples rather than by data imbalance. Furthermore, no substantial classification difference exists across degrees of smoothing and registration methods. Besides, we demonstrate that using non-nested validation of FS leads to unreliable and over-optimistic results: a 0.05 up to 0.40 relative increase in BA. Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software package (www.clinica.run) and the paper-specific code is available at: https://github.com/aramis-lab/AD-ML.
http://arxiv.org/pdf/1812.11183v4
[ "Junhao Wen", "Jorge Samper-Gonzalez", "Simona Bottani", "Alexandre Routier", "Ninon Burgos", "Thomas Jacquemont", "Sabrina Fontanella", "Stanley Durrleman", "Stephane Epelbaum", "Anne Bertrand", "Olivier Colliot" ]
2020-06-11T15:07:45Z
2018-12-28T17:11:28Z
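The warning above about non-nested validation of feature selection generalizes beyond neuroimaging, and a scikit-learn sketch makes it concrete: fitting the selector on all data before cross-validation leaks test information, while placing it inside the CV pipeline does not. The synthetic data and classifier below are illustrative stand-ins for the paper's diffusion MRI features and models.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Few samples, many features: the regime where selection leakage bites.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=10,
                           random_state=0)

# Biased: feature selection fitted on all data, including future test folds.
X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)
biased = cross_val_score(LogisticRegression(max_iter=2000), X_sel, y, cv=5).mean()

# Unbiased: feature selection refitted inside each training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=50),
                     LogisticRegression(max_iter=2000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"non-nested FS accuracy: {biased:.2f}")   # optimistic
print(f"nested FS accuracy:     {honest:.2f}")   # realistic
```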
1811.06580
Subspace Clustering through Sub-Clusters
The problem of dimension reduction is of increasing importance in modern data analysis. In this paper, we consider modeling a collection of points in a high-dimensional space as a union of low-dimensional subspaces. In particular, we propose a highly scalable sampling-based algorithm that clusters the entire data set by first spectrally clustering a small random sample and then classifying or labeling the remaining out-of-sample points. The key idea is that this random subset borrows information across the entire data set and that the problem of clustering points can be replaced with the more efficient and robust problem of "clustering sub-clusters". We provide theoretical guarantees for our procedure. The numerical results indicate that we outperform other state-of-the-art subspace clustering algorithms with respect to accuracy and speed.
http://arxiv.org/pdf/1811.06580v2
[ "Weiwei Li", "Jan Hannig", "Sayan Mukherjee" ]
2020-06-11T15:18:06Z
2018-11-15T20:15:53Z
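The sample-then-extend strategy above can be sketched in a few lines: spectrally cluster a small random sample, fit a low-dimensional subspace to each sub-cluster, and label out-of-sample points by smallest reconstruction residual. The spectral-clustering affinity, PCA subspaces, and sample size below are illustrative choices, not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.decomposition import PCA

def sample_and_extend(X, n_clusters, subspace_dim=3, sample_size=200, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=sample_size, replace=False)
    # Step 1: spectral clustering on the small random sample only.
    sample_labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors",
        random_state=seed).fit_predict(X[idx])
    # Step 2: one low-dimensional subspace (PCA) per sub-cluster.
    pcas = [PCA(subspace_dim).fit(X[idx][sample_labels == k])
            for k in range(n_clusters)]
    # Step 3: assign every point to the subspace with smallest residual.
    residuals = np.stack([
        np.linalg.norm(X - p.inverse_transform(p.transform(X)), axis=1)
        for p in pcas], axis=1)
    return residuals.argmin(axis=1)

# Example: three random 3-dimensional subspaces in R^20.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(400, 3)) @ rng.normal(size=(3, 20))
               for _ in range(3)])
labels = sample_and_extend(X, n_clusters=3)
```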
2006.05138
Sparse Dynamic Distribution Decomposition: Efficient Integration of Trajectory and Snapshot Time Series Data
Dynamic Distribution Decomposition (DDD) was introduced in Taylor-King et al. (PLOS Comp Biol, 2020) as a variation on Dynamic Mode Decomposition. In brief, by using basis functions over a continuous state space, DDD allows for the fitting of continuous-time Markov chains over these basis functions and, as a result, continuously maps between distributions. The number of parameters in DDD scales with the square of the number of basis functions; we reformulate the problem and restrict the method to compact basis functions, which leads to the inference of sparse matrices only, hence reducing the number of parameters. Finally, we demonstrate how DDD is suitable for integrating both trajectory time series (paired between subsequent time points) and snapshot time series (unpaired time points). Methods capable of integrating both scenarios are particularly relevant for the analysis of biomedical data, whereby studies observe populations at fixed time points (snapshots) and individual patient journeys with repeated follow-ups (trajectories).
http://arxiv.org/pdf/2006.05138v2
[ "Jake P. Taylor-King", "Cristian Regep", "Jyothish Soman", "Flawnson Tong", "Catalina Cangea", "Charlie Roberts" ]
2020-06-11T15:25:30Z
2020-06-09T09:28:52Z
2006.09184
The Number of Confirmed Cases of Covid-19 by using Machine Learning: Methods and Challenges
Covid-19 is one of the biggest health challenges the world has ever faced. Public health policy makers need reliable predictions of future confirmed cases to plan medical facilities. Machine learning methods learn from historical data and make predictions about future events, and they have been used to predict the number of confirmed cases of Covid-19. In this paper, we present a detailed review of these research papers. We present a taxonomy that groups them into four categories. We further present the challenges in this field and provide suggestions to machine learning practitioners for improving the performance of machine learning methods for the prediction of confirmed cases of Covid-19.
http://arxiv.org/pdf/2006.09184v1
[ "Amir Ahmada", "Sunita Garhwal", "Santosh Kumar Ray", "Gagan Kumar", "Sharaf J. Malebary", "Omar Mohammed Omar Barukab" ]
2020-06-11T15:34:59Z
2020-06-11T15:34:59Z
2006.06526
Recurrent Neural Networks for Handover Management in Next-Generation Self-Organized Networks
In this paper, we discuss a handover management scheme for Next Generation Self-Organized Networks. We propose to extract experience from full protocol stack data to make smart handover decisions in a multi-cell scenario where users move and face deep outage zones. Traditional handover schemes have the drawback of taking into account only the signal strength of the serving and target cells before the handover. However, we believe that the expected Quality of Experience (QoE) resulting from the choice of target cell should be the driving principle of the handover decision. In particular, we propose two models: one based on a multi-layer many-to-one LSTM architecture, and one based on a multi-layer LSTM AutoEncoder (AE) in conjunction with a MultiLayer Perceptron (MLP) neural network. We show that, using experience extracted from data, we can improve the number of users finalizing their download by 18% and reduce the time to download with respect to a standard event-based handover benchmark scheme. Moreover, for the sake of generalization, we test the LSTM AutoEncoder in a different scenario, where it maintains its performance improvement with only slight degradation compared to the original scenario.
http://arxiv.org/pdf/2006.06526v1
[ "Zoraze Ali", "Marco Miozzo", "Lorenza Giupponi", "Paolo Dini", "Stojan Denic", "Stavroula Vassaki" ]
2020-06-11T15:41:12Z
2020-06-11T15:41:12Z
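The first of the two proposed models, a multi-layer many-to-one LSTM, has a straightforward shape that the sketch below illustrates: a sequence of per-step protocol-stack measurements is encoded and the final hidden state scores candidate target cells. Feature, cell, and layer counts are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Many-to-one LSTM: consume a measurement sequence, emit one decision.
class HandoverLSTM(nn.Module):
    def __init__(self, n_features=16, n_cells=7, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_cells)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits over candidate cells

model = HandoverLSTM()
logits = model(torch.randn(8, 50, 16))     # 8 users, 50 time steps
target_cell = logits.argmax(dim=1)         # chosen handover target per user
```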
2005.05868
Recurrent and Spiking Modeling of Sparse Surgical Kinematics
Robot-assisted minimally invasive surgery is improving surgeon performance and patient outcomes. This innovation is also turning what has been a subjective practice into motion sequences that can be precisely measured. A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots. In these studies, models are typically trained on benchmark datasets for representative surgical tasks to assess surgeon skill levels. While they have shown that novices and experts can be accurately classified, it is not clear whether machine learning can separate highly proficient surgeons from one another, especially without video data. In this study, we explore the possibility of using only kinematic data to predict surgeons of similar skill levels. We focus on a new dataset created from surgical exercises on a simulation device for skill training. A simple, efficient encoding scheme was devised to encode kinematic sequences so that they were amenable to edge learning. We report that it is possible to identify surgical fellows receiving near-perfect scores in the simulation exercises based on their motion characteristics alone. Further, our model could be converted to a spiking neural network to train and infer on the Nengo simulation framework with no loss in accuracy. Overall, this study suggests that building neuromorphic models from sparse motion features may be a useful strategy for identifying surgeons and gestures. Such models could be deployed on chips in robotic systems to offer adaptive assistance during surgery and training, with additional latency and privacy benefits.
http://arxiv.org/pdf/2005.05868v2
[ "Neil Getty", "Zixuan Zhao", "Stephan Gruessner", "Liaohai Chen", "Fangfang Xia" ]
2020-06-11T16:01:48Z
2020-05-12T15:41:45Z
2006.06553
Stanza: A Nonlinear State Space Model for Probabilistic Inference in Non-Stationary Time Series
Time series with long-term structure arise in a variety of contexts, and capturing this temporal structure is a critical challenge in time series analysis for both inference and forecasting. Traditionally, state space models have been successful in providing uncertainty estimates of trajectories in the latent space. More recently, deep learning, attention-based approaches have achieved state-of-the-art performance for sequence modeling, though they often require large amounts of data and parameters to do so. We propose Stanza, a nonlinear, non-stationary state space model, as an intermediate approach that fills the gap between traditional models and modern deep learning approaches for complex time series. Stanza strikes a balance between competitive forecasting accuracy and probabilistic, interpretable inference for highly structured time series. In particular, Stanza achieves forecasting accuracy competitive with deep LSTMs on real-world datasets, especially for multi-step-ahead forecasting.
http://arxiv.org/pdf/2006.06553v1
[ "Anna K. Yanchenko", "Sayan Mukherjee" ]
2020-06-11T16:06:35Z
2020-06-11T16:06:35Z
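For readers unfamiliar with the model class above, the sketch below simulates a generic nonlinear state space model and produces a probabilistic one-step forecast with a bootstrap particle filter. The transition and observation functions, noise levels, and the filtering method are illustrative assumptions; Stanza's parameterization and inference procedure are specific to the paper.

```python
import numpy as np

# Generic nonlinear SSM: x_{t+1} = f(x_t) + q*eps, y_t = g(x_t) + r*eps.
rng = np.random.default_rng(1)
f = lambda x: 0.9 * x + 2.0 * np.tanh(x)     # nonlinear transition
g = lambda x: x                              # observation map
q, r = 0.3, 0.5                              # process / observation noise sd

# Simulate a series.
T, x, ys = 100, 0.0, []
for _ in range(T):
    x = f(x) + q * rng.normal()
    ys.append(g(x) + r * rng.normal())

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 1000
particles = rng.normal(size=N)
for y in ys:
    particles = f(particles) + q * rng.normal(size=N)
    w = np.exp(-0.5 * ((y - g(particles)) / r) ** 2) + 1e-12
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)

# One-step-ahead forecast distribution with uncertainty.
forecast = f(particles) + q * rng.normal(size=N)
print(f"forecast mean {forecast.mean():.2f}, 90% interval "
      f"({np.quantile(forecast, 0.05):.2f}, {np.quantile(forecast, 0.95):.2f})")
```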