text (string, lengths 29-3.31k) | label (sequence, lengths 1-11)
---|---|
Training images with data transformations have been suggested as contrastive
examples to complement the testing set for generalization performance
evaluation of deep neural networks (DNNs). In this work, we propose a practical
framework ContRE (The word "contre" means "against" or "versus" in French.)
that uses Contrastive examples for DNN geneRalization performance Estimation.
Specifically, ContRE follows the assumption in contrastive learning that robust
DNN models with good generalization performance are capable of extracting a
consistent set of features and making consistent predictions from the same
image under varying data transformations. Incorporating a set of randomized
strategies for well-designed data transformations over the training set, ContRE
adopts classification errors and Fisher ratios on the generated contrastive
examples to assess and analyze the generalization performance of deep models,
complementing the testing set. To show the effectiveness and the
efficiency of ContRE, extensive experiments have been done using various DNN
models on three open source benchmark datasets with thorough ablation studies
and applicability analyses. Our experimental results confirm that (1) the
behaviors of deep models on contrastive examples are strongly correlated with
those on the testing set, and (2) ContRE is a robust measure of generalization
performance that complements the testing set in various settings. | [
"cs.LG",
"cs.CV"
] |
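To make the idea concrete, here is a minimal sketch (ours, not the authors' code) of measuring classification error on contrastive examples: training images are passed through a randomized transformation and the model's errors on the transformed copies are recorded. The transform set, the function names, and the assumption of normalized tensor inputs of shape (C, H, W) are illustrative.

```python
import torch
import torchvision.transforms as T

# Illustrative randomized transformation; the paper's exact strategies may differ.
contrastive_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4),
])

@torch.no_grad()
def contrastive_error(model, images, labels):
    """Classification error on contrastive examples built from training images.

    `images` is assumed to be a batch of tensor images (B, C, H, W) already
    preprocessed for `model`; `labels` are the corresponding class indices.
    """
    model.eval()
    transformed = torch.stack([contrastive_transform(img) for img in images])
    preds = model(transformed).argmax(dim=1)
    return (preds != labels).float().mean().item()
```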
This paper addresses the task of estimating the 6 degrees of freedom pose of
a known 3D object from depth information represented by a point cloud. Deep
features learned by convolutional neural networks from color information have
been the dominant features to be used for inferring object poses, while depth
information receives much less attention. However, depth information contains
rich geometric information of the object shape, which is important for
inferring the object pose. We use depth information represented by point clouds
as the input to both deep networks and geometry-based pose refinement and use
separate networks for rotation and translation regression. We argue that the
axis-angle representation is a suitable rotation representation for deep
learning, and use a geodesic loss function for rotation regression. Ablation
studies show that these design choices outperform alternatives such as the
quaternion representation and L2 loss, or regressing translation and rotation
with the same network. Our simple yet effective approach clearly outperforms
state-of-the-art methods on the YCB-video dataset. The implementation and
trained model are available at: https://github.com/GeeeG/CloudPose. | [
"cs.CV"
] |
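As a rough illustration of the rotation representation and loss discussed above, the sketch below (our own, using the standard axis-angle/Rodrigues conversion and the rotation-matrix geodesic distance, not the authors' implementation) computes the geodesic error between a predicted and a ground-truth axis-angle rotation.

```python
import numpy as np

def axis_angle_to_matrix(rotvec):
    """Rodrigues' formula: axis-angle vector (angle = norm, axis = direction) -> rotation matrix."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-8:
        return np.eye(3)
    k = rotvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def geodesic_loss(rotvec_pred, rotvec_gt):
    """Geodesic angle (in radians) between predicted and ground-truth rotations."""
    R_pred = axis_angle_to_matrix(rotvec_pred)
    R_gt = axis_angle_to_matrix(rotvec_gt)
    cos = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))
```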
A code generation system generates programming language code based on an
input natural language description. State-of-the-art approaches rely on neural
networks for code generation. However, these code generators suffer from two
problems. One is the long dependency problem, where a code element often
depends on another far-away code element. A variable reference, for example,
depends on its definition, which may appear quite a few lines before. The other
problem is structure modeling, as programs contain rich structural information.
In this paper, we propose a novel tree-based neural architecture, TreeGen, for
code generation. TreeGen uses the attention mechanism of Transformers to
alleviate the long-dependency problem, and introduces a novel AST reader
(encoder) to incorporate grammar rules and AST structures into the network. We
evaluated TreeGen on a Python benchmark, HearthStone, and two semantic parsing
benchmarks, ATIS and GEO. TreeGen outperformed the previous state-of-the-art
approach by 4.5 percentage points on HearthStone, and achieved the best
accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%).
We also conducted an ablation test to better understand each component of our
model. | [
"cs.LG",
"cs.SE"
] |
In this paper, we consider the framework of multi-task representation (MTR)
learning where the goal is to use source tasks to learn a representation that
reduces the sample complexity of solving a target task. We start by reviewing
recent advances in MTR theory and show that they can provide novel insights for
popular meta-learning algorithms when analyzed within this framework. In
particular, we highlight a fundamental difference between gradient-based and
metric-based algorithms and put forward a theoretical analysis to explain it.
Finally, we use the derived insights to improve the generalization capacity of
meta-learning methods via a new spectral-based regularization term and confirm
its efficiency through experimental studies on classic few-shot classification
and continual learning benchmarks. To the best of our knowledge, this is the
first contribution that puts the most recent learning bounds of MTR theory into
the practice of training popular meta-learning methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
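The exact form of the spectral regularizer is not given in the abstract; as one plausible, purely illustrative instance, a term of this kind could penalize how ill-conditioned the batch embedding matrix is.

```python
import torch

def spectral_regularizer(features, eps=1e-8):
    """Hypothetical spectral penalty on a batch of embeddings (batch, dim):
    discourage a large spread between the largest and smallest singular values
    so the learned representation stays well-conditioned. This is our guess at
    the flavor of such a term, not the paper's exact regularizer."""
    s = torch.linalg.svdvals(features)
    return s.max() / (s.min() + eps) - 1.0

# usage (hypothetical): loss = task_loss + 0.01 * spectral_regularizer(embeddings)
```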
Knowledge distillation is an effective method to transfer the knowledge from
the cumbersome teacher model to the lightweight student model. Online knowledge
distillation uses the ensembled prediction results of multiple student models
as soft targets to train each student model. However, the homogenization
problem makes it difficult to further improve model performance. In this
work, we propose a new distillation method to enhance the diversity among
multiple student models. We introduce Feature Fusion Module (FFM), which
improves the performance of the attention mechanism in the network by
integrating rich semantic information contained in the last block of multiple
student models. Furthermore, we use the Classifier Diversification (CD) loss
function to strengthen the differences between the student models and deliver a
better ensemble result. Extensive experiments show that our method
significantly enhances the diversity among student models and brings better
distillation performance. We evaluate our method on three image classification
datasets: CIFAR-10/100 and CINIC-10. The results show that our method achieves
state-of-the-art performance on these datasets. | [
"cs.CV",
"cs.LG"
] |
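For context, the basic online distillation setup the abstract builds on can be sketched as follows. This is a simplified illustration: the paper's Feature Fusion Module and CD loss are not reproduced, and the temperature and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def online_distillation_loss(student_logits, labels, temperature=3.0):
    """Online distillation sketch: each student is trained on the hard labels
    plus the (detached) averaged soft prediction of all students."""
    ensemble = torch.stack(student_logits).mean(dim=0).detach()
    soft_target = F.softmax(ensemble / temperature, dim=1)
    loss = 0.0
    for logits in student_logits:
        ce = F.cross_entropy(logits, labels)
        kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                      soft_target, reduction="batchmean") * temperature ** 2
        loss = loss + ce + kd
    return loss
```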
Recurrent neural networks (RNNs) provide state-of-the-art performance in
processing sequential data but are memory-intensive to train, limiting the
flexibility of the RNN models that can be trained. Reversible RNNs---RNNs for
which the hidden-to-hidden transition can be reversed---offer a path to reduce
the memory requirements of training, as hidden states need not be stored and
instead can be recomputed during backpropagation. We first show that perfectly
reversible RNNs, which require no storage of the hidden activations, are
fundamentally limited because they cannot forget information from their hidden
state. We then provide a scheme for storing a small number of bits in order to
allow perfect reversal with forgetting. Our method achieves comparable
performance to traditional models while reducing the activation memory cost by
a factor of 10--15. We extend our technique to attention-based
sequence-to-sequence models, where it maintains performance while reducing
activation memory cost by a factor of 5--10 in the encoder, and a factor of
10--15 in the decoder. | [
"cs.LG",
"stat.ML"
] |
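To illustrate what a reversible hidden-to-hidden transition means, here is a generic additive-coupling sketch (not the paper's exact reversible GRU/LSTM, which additionally stores a few bits to allow forgetting): the previous hidden state can be recomputed exactly from the current one, so activations need not be stored during backpropagation.

```python
def rev_step(h1, h2, x, f, g):
    """One reversible transition via additive coupling over a split hidden state.

    `f` and `g` are arbitrary (learned) update functions; because the coupling
    is additive, the step can be inverted exactly."""
    h1_new = h1 + f(h2, x)
    h2_new = h2 + g(h1_new, x)
    return h1_new, h2_new

def rev_step_inverse(h1_new, h2_new, x, f, g):
    """Recover the previous hidden state exactly from the current one."""
    h2 = h2_new - g(h1_new, x)
    h1 = h1_new - f(h2, x)
    return h1, h2
```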
With the introduction of the variational autoencoder (VAE), probabilistic
latent variable models have received renewed attention as powerful generative
models. However, their performance in terms of test likelihood and quality of
generated samples has been surpassed by autoregressive models without
stochastic units. Furthermore, flow-based models have recently been shown to be
an attractive alternative that scales well to high-dimensional data. In this
paper we close the performance gap by constructing VAE models that can
effectively utilize a deep hierarchy of stochastic variables and model complex
covariance structures. We introduce the Bidirectional-Inference Variational
Autoencoder (BIVA), characterized by a skip-connected generative model and an
inference network formed by a bidirectional stochastic inference path. We show
that BIVA reaches state-of-the-art test likelihoods, generates sharp and
coherent natural images, and uses the hierarchy of latent variables to capture
different aspects of the data distribution. We observe that BIVA, in contrast
to recent results, can be used for anomaly detection. We attribute this to the
hierarchy of latent variables which is able to extract high-level semantic
features. Finally, we extend BIVA to semi-supervised classification tasks and
show that it performs comparably to state-of-the-art results by generative
adversarial networks. | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
Deep neural networks have achieved remarkable success in image recognition,
object detection and many other applications, relying on the growing
computational capability of GPUs, large-scale datasets, and increasing network
depth and width. However, due to their expensive computation and intensive
memory requirements, researchers have concentrated on designing compression
methods in recent years. In this paper, we first briefly summarize existing
advanced techniques that are useful for model compression. We then give a
detailed description of group lasso regularization and its variants. More
importantly, we propose an improved framework of partial regularization based
on the relationship between neurons and connections of adjacent layers, which
is reasonable and feasible thanks to the permutation property of neural
networks. Experimental results show that partial regularization methods bring
improvements such as higher classification accuracy in both training and
testing stages on multiple datasets. Since our regularizers involve the
computation of fewer parameters, they show competitive performance in terms of
total experimental running time. Finally, we analyze the results and conclude
that an optimal network structure must exist and depends on
the input data. | [
"cs.LG",
"stat.ML"
] |
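For reference, the basic group lasso regularizer that the abstract builds on is the sum of L2 norms over groups of weights (here, rows of a layer's weight matrix); as we read the abstract, the proposed partial regularization restricts such a penalty to part of the network, which the generic sketch below does not show.

```python
import torch

def group_lasso(weight, group_dim=1):
    """Group lasso penalty: sum over groups of the L2 norm of each group.
    For an nn.Linear weight of shape (out_features, in_features), group_dim=1
    makes each row (the incoming weights of one output neuron) a group, so the
    penalty drives entire neurons toward zero."""
    return weight.norm(p=2, dim=group_dim).sum()

# usage (illustrative): loss = task_loss + lam * group_lasso(layer.weight)
```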
In this paper, a novel model of 3D elastic mesh is presented for image
segmentation. The model is inspired by stress and strain in physical elastic
objects, while the repulsive force and elastic force in the model are defined
slightly different from the physical force to suit the segmentation problem
well. The self-balancing mechanism in the model guarantees the stability of the
method in segmentation. The shape of the elastic mesh at the balance state is
used for region segmentation, in which the sign distribution of the points'
z-coordinate values is taken as the basis for segmentation. The effectiveness
of the proposed method is demonstrated by analysis and experimental results on
both test images and real-world images. | [
"cs.CV"
] |
In this work, we provide an efficient and realistic data-driven approach to
simulate astronomical images using deep generative models from machine
learning. Our solution is based on a variant of the generative adversarial
network (GAN) with progressive training methodology and Wasserstein cost
function. The proposed solution generates naturalistic images of galaxies that
show complex structures and high diversity, which suggests that data-driven
simulations using machine learning can replace many of the expensive
model-driven methods used in astronomical data processing. | [
"cs.LG",
"astro-ph.GA",
"eess.IV",
"stat.ML"
] |
Exploiting the capacity of a sewer system using decentralized control is a
cost-effective means of minimizing overflow. Given the size of a real sewer
system, exploiting all the installed control structures in the sewer pipes can
be challenging. This paper presents a divide-and-conquer solution for
implementing decentralized control measures based on unsupervised learning
algorithms. A sewer system is first divided into a number of subcatchments. A
series of natural and built factors that impact sewer system performance is
then collected. Clustering algorithms are applied to group subcatchments with
similar hydraulic and hydrologic characteristics. Principal component analysis
is then performed to interpret the main features of the subcatchment groups and
identify priority control locations. Overflows under different control
scenarios are compared based on a hydraulic model. Simulation results indicate
that priority control applied to the most suitable cluster yields the most
profitable result. | [
"cs.LG",
"stat.ML"
] |
A common belief in model-free reinforcement learning is that methods based on
random search in the parameter space of policies exhibit significantly worse
sample complexity than those that explore the space of actions. We dispel such
beliefs by introducing a random search method for training static, linear
policies for continuous control problems, matching state-of-the-art sample
efficiency on the benchmark MuJoCo locomotion tasks. Our method also finds a
nearly optimal controller for a challenging instance of the Linear Quadratic
Regulator, a classical problem in control theory, when the dynamics are not
known. Computationally, our random search algorithm is at least 15 times more
efficient than the fastest competing model-free methods on these benchmarks. We
take advantage of this computational efficiency to evaluate the performance of
our method over hundreds of random seeds and many different hyperparameter
configurations for each benchmark task. Our simulations highlight a high
variability in performance in these benchmark tasks, suggesting that commonly
used estimations of sample efficiency do not adequately evaluate the
performance of RL algorithms. | [
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] |
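A minimal sketch of random search in policy-parameter space is shown below. The `rollout` function is a hypothetical handle returning the total episode reward of a linear policy with parameters `theta`; the authors' algorithm includes refinements (e.g., state normalization and direction selection) not shown here.

```python
import numpy as np

def basic_random_search(theta, rollout, step_size=0.02, noise_std=0.03,
                        num_directions=8, num_iters=100):
    """Perturb the policy parameters in random directions, evaluate rollouts,
    and update with a finite-difference estimate of the reward gradient."""
    for _ in range(num_iters):
        deltas = [np.random.randn(*theta.shape) for _ in range(num_directions)]
        grad = np.zeros_like(theta)
        for d in deltas:
            r_plus = rollout(theta + noise_std * d)
            r_minus = rollout(theta - noise_std * d)
            grad += (r_plus - r_minus) * d
        theta = theta + step_size / (num_directions * noise_std) * grad
    return theta
```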
We developed a rich dataset of Chest X-Ray (CXR) images to assist
investigators in artificial intelligence. The data were collected using an eye
tracking system while a radiologist reviewed and reported on 1,083 CXR images.
The dataset contains the following aligned data: CXR image, transcribed
radiology report text, radiologist's dictation audio and eye gaze coordinates
data. We hope this dataset can contribute to various areas of research
particularly towards explainable and multimodal deep learning / machine
learning methods. Furthermore, investigators in disease classification and
localization, automated radiology report generation, and human-machine
interaction can benefit from these data. We report deep learning experiments
that utilize the attention maps produced from the eye gaze data to show the
potential utility of this dataset. | [
"cs.CV"
] |
This work introduces pyramidal convolution (PyConv), which is capable of
processing the input at multiple filter scales. PyConv contains a pyramid of
kernels, where each level involves different types of filters with varying size
and depth, which are able to capture different levels of details in the scene.
On top of these improved recognition capabilities, PyConv is also efficient
and, with our formulation, it does not increase the computational cost and
parameters compared to standard convolution. Moreover, it is very flexible and
extensible, providing a large space of potential network architectures for
different applications. PyConv has the potential to impact nearly every
computer vision task and, in this work, we present different architectures
based on PyConv for four main visual recognition tasks: image classification,
video action classification/recognition, object detection and semantic image
segmentation/parsing. Our approach shows significant improvements over all
these core tasks in comparison with the baselines. For instance, on image
recognition, our 50-layer network outperforms its 152-layer ResNet counterpart
on the ImageNet dataset in terms of recognition performance, while having 2.39
times fewer parameters, 2.52 times lower computational complexity and more than
3 times fewer layers. On image
segmentation, our novel framework sets a new state-of-the-art on the
challenging ADE20K benchmark for scene parsing. Code is available at:
https://github.com/iduta/pyconv | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
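A rough sketch of the pyramidal convolution idea described above (our simplified reading, with illustrative kernel sizes and group counts rather than the paper's exact configuration): parallel branches with increasingly large kernels and more groups, whose outputs are concatenated along the channel dimension.

```python
import torch
import torch.nn as nn

class PyConv2d(nn.Module):
    """Simplified pyramidal convolution: parallel branches with growing kernel
    sizes and group counts, concatenated along channels. Assumes in_ch and
    out_ch // 4 are divisible by every group count."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7, 9), groups=(1, 4, 8, 16)):
        super().__init__()
        branch_out = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_out, k, padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernel_sizes, groups)
        ])

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# usage (illustrative): PyConv2d(64, 256)(torch.randn(1, 64, 56, 56)) has shape (1, 256, 56, 56)
```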
A cloud server spends a lot of time, energy and money to train a Viola-Jones
type object detector with high accuracy. Clients can upload their photos to the
cloud server to find objects. However, the client does not want the content of
his/her photos to be leaked. Meanwhile, the cloud server is also reluctant to
leak any parameters of the trained object detectors. Ten years ago, Avidan &
Butman introduced Blind Vision, a method for securely evaluating a Viola-Jones
type object detector. Blind Vision uses standard cryptographic tools and is
painfully slow to compute, taking a couple of hours to scan a single image. The
purpose of this work is to explore an efficient method that can speed up the
process. We propose the Random Base Image (RBI) representation. The original
image is divided into random base images, and only the base images are randomly
submitted to the cloud server. Thus, the content of the image cannot be leaked.
Meanwhile, a random vector and the secure Millionaire protocol are leveraged to
protect the parameters of the trained object detector. The RBI makes the
integral image usable again, enabling great acceleration. The experimental
results reveal that our method retains the detection accuracy of the plain
vision algorithm and is significantly faster than the traditional blind vision
method, with only a very low theoretical probability of information leakage. | [
"cs.CV"
] |
In this paper, we propose a novel loss function for training Generative
Adversarial Networks (GANs) aiming towards deeper theoretical understanding as
well as improved stability and performance for the underlying optimization
problem. The new loss function is based on cumulant generating functions giving
rise to \emph{Cumulant GAN}. Relying on a recently-derived variational formula,
we show that the corresponding optimization problem is equivalent to R{\'e}nyi
divergence minimization, thus offering a (partially) unified perspective of GAN
losses: the R{\'e}nyi family encompasses Kullback-Leibler divergence (KLD),
reverse KLD, Hellinger distance and $\chi^2$-divergence. Wasserstein GAN is
also a member of cumulant GAN. In terms of stability, we rigorously prove the
linear convergence of cumulant GAN to the Nash equilibrium for a linear
discriminator, Gaussian distributions and the standard gradient descent ascent
algorithm. Finally, we experimentally demonstrate that image generation is more
robust relative to Wasserstein GAN and is substantially improved in terms of
both Inception Score and Fr\'echet Inception Distance when both weaker and
stronger discriminators are considered. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
Event cameras are activity-driven, bio-inspired vision sensors, resulting in
advantages such as sparsity, high temporal resolution, low latency, and low
power consumption. Given the different sensing modality of event cameras and
the high quality of the conventional vision paradigm, event processing is
predominantly handled by transforming the sparse and asynchronous events into a
2D grid and subsequently applying standard vision pipelines. Despite the
promising results of supervised learning approaches for 2D grid generation,
these approaches treat the task in a supervised manner, and labeled
task-specific ground-truth event data is challenging to acquire. To overcome
this limitation, we propose Event-LSTM, an unsupervised auto-encoder
architecture made up of LSTM layers, as a promising alternative for learning 2D
grid representations from event sequences. Compared to competing supervised
approaches, ours is a task-agnostic approach ideally suited to the event
domain, where task-specific labeled data is scarce. We also tailor the proposed
solution to exploit the asynchronous nature of the event stream, which gives it
desirable characteristics such as speed-invariant and energy-efficient 2D grid
generation. In addition, we push state-of-the-art event de-noising forward by
introducing memory into the de-noising process. Evaluations on activity
recognition and gesture recognition demonstrate that our approach yields
improvements over state-of-the-art approaches, while providing the flexibility
to learn from unlabelled data. | [
"cs.CV"
] |
Most recent successes in forecasting people's motion are based on LSTM models,
and most recent progress has been achieved by modelling the social interaction
among people and the interaction of people with the scene. We question
the use of the LSTM models and propose the novel use of Transformer Networks
for trajectory forecasting. This is a fundamental switch from the sequential
step-by-step processing of LSTMs to the only-attention-based memory mechanisms
of Transformers. In particular, we consider both the original Transformer
Network (TF) and the larger Bidirectional Transformer (BERT), state-of-the-art
on all natural language processing tasks. Our proposed Transformers predict the
trajectories of the individual people in the scene. These are "simple" models
because each person is modelled separately without any complex human-human or
human-scene interaction terms. In particular, the TF model without bells and whistles
yields the best score on the largest and most challenging trajectory
forecasting benchmark of TrajNet. Additionally, its extension which predicts
multiple plausible future trajectories performs on par with more engineered
techniques on the 5 datasets of ETH + UCY. Finally, we show that Transformers
may deal with missing observations, as it may be the case with real sensor
data. Code is available at https://github.com/FGiuliari/Trajectory-Transformer. | [
"cs.CV"
] |
Solid texture synthesis (STS), as an effective way to extend a 2D exemplar to a
3D solid volume, exhibits advantages in numerous application domains. However,
existing methods generally synthesize solid texture with specific features,
which may result in the failure of capturing diversified textural information.
In this paper, we propose a novel generative adversarial nets-based approach
(STS-GAN) to hierarchically learn solid texture with a feature-free nature. Our
multi-scale discriminators evaluate the similarity between patches from the
exemplar and slices from the generated volume, promoting the generator to synthesize
realistic solid textures. Experimental results demonstrate that the proposed
method can generate high-quality solid textures with similar visual
characteristics to the exemplar. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Anticipating future events is an important prerequisite towards intelligent
behavior. Video forecasting has been studied as a proxy task towards this goal.
Recent work has shown that to predict semantic segmentation of future frames,
forecasting at the semantic level is more effective than forecasting RGB frames
and then segmenting these. In this paper we consider the more challenging
problem of future instance segmentation, which additionally segments out
individual objects. To deal with a varying number of output labels per image,
we develop a predictive model in the space of fixed-sized convolutional
features of the Mask R-CNN instance segmentation model. We apply the "detection
head" of Mask R-CNN on the predicted features to produce the instance
segmentation of future frames. Experiments show that this approach
significantly improves over strong baselines based on optical flow and
repurposed instance segmentation architectures. | [
"cs.CV"
] |
Generative adversarial networks (GANs) have received a tremendous amount of
attention in the past few years, and have inspired applications addressing a
wide range of problems. Despite their great potential, GANs are difficult to
train. Recently, a series of papers (Arjovsky & Bottou, 2017a; Arjovsky et al.
2017b; and Gulrajani et al. 2017) proposed using Wasserstein distance as the
training objective and promised easy, stable GAN training across architectures
with minimal hyperparameter tuning. In this paper, we compare the performance
of Wasserstein distance with other training objectives on a variety of GAN
architectures in the context of single image super-resolution. Our results
agree that Wasserstein GAN with gradient penalty (WGAN-GP) provides stable and
converging GAN training and that Wasserstein distance is an effective metric to
gauge training progress. | [
"cs.LG",
"stat.ML"
] |
Time series models with recurrent neural networks (RNNs) can have high
accuracy but are unfortunately difficult to interpret as a result of
feature-interactions, temporal-interactions, and non-linear transformations.
Interpretability is important in domains like healthcare, where models that
provide insight into the relationships they have learned are required in order
to validate and trust model predictions. We want accurate time series
models where users can understand the contribution of individual input
features. We present the Interpretable-RNN (I-RNN) that balances model
complexity and accuracy by forcing the relationship between variables in the
model to be additive. Interactions are restricted between hidden states of the
RNN and additively combined at the final step. I-RNN specifically captures the
unique characteristics of clinical time series, which are unevenly sampled in
time, asynchronously acquired, and have missing data. Importantly, the hidden
state activations represent feature coefficients that correlate with the
prediction target and can be visualized as risk curves that capture the global
relationship between individual input features and the outcome. We evaluate the
I-RNN model on the Physionet 2012 Challenge dataset to predict in-hospital
mortality, and on a real-world clinical decision support task: predicting
hemodynamic interventions in the intensive care unit. I-RNN provides
explanations in the form of global and local feature importances comparable to
highly intelligible models like decision trees trained on hand-engineered
features while significantly outperforming them. I-RNN remains intelligible
while providing accuracy comparable to state-of-the-art decay-based and
interpolation-based recurrent time series models. The experimental results on
real-world clinical datasets refute the myth that there is a tradeoff between
accuracy and interpretability. | [
"cs.LG",
"cs.AI"
] |
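To illustrate the additive restriction described above, here is a small sketch of a per-feature recurrent model whose contributions are combined only additively at the final step. This is our illustrative reading of the idea, not the paper's exact I-RNN architecture.

```python
import torch
import torch.nn as nn

class AdditiveRNN(nn.Module):
    """Each input feature is processed by its own small RNN, and the per-feature
    contributions are combined only by summation at the output, so each
    contribution can be read off as that feature's effect on the prediction."""
    def __init__(self, num_features, hidden_size=8):
        super().__init__()
        self.rnns = nn.ModuleList(
            [nn.GRU(1, hidden_size, batch_first=True) for _ in range(num_features)])
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, 1) for _ in range(num_features)])

    def forward(self, x):  # x: (batch, time, num_features)
        contributions = []
        for i, (rnn, head) in enumerate(zip(self.rnns, self.heads)):
            _, h = rnn(x[:, :, i:i + 1])        # h: (1, batch, hidden)
            contributions.append(head(h[-1]))   # per-feature contribution: (batch, 1)
        return torch.stack(contributions, dim=-1).sum(dim=-1)  # additive combination
```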
We consider the problem of learning decision rules for prediction with
feature budget constraint. In particular, we are interested in pruning an
ensemble of decision trees to reduce expected feature cost while maintaining
high prediction accuracy for any test example. We propose a novel 0-1 integer
program formulation for ensemble pruning. Our pruning formulation is general -
it takes any ensemble of decision trees as input. By explicitly accounting for
feature-sharing across trees together with accuracy/cost trade-off, our method
is able to significantly reduce feature cost by pruning subtrees that introduce
more loss in terms of feature cost than benefit in terms of prediction accuracy
gain. Theoretically, we prove that a linear programming relaxation produces the
exact solution of the original integer program. This allows us to use efficient
convex optimization tools to obtain an optimally pruned ensemble for any given
budget. Empirically, we see that our pruning algorithm significantly improves
the performance of the state of the art ensemble method BudgetRF. | [
"stat.ML",
"cs.LG"
] |
This short article revisits some of the ideas introduced in arXiv:1701.07875
and arXiv:1705.07642 in a simple setup. This sheds some light on the
connections between Variational Autoencoders (VAE), Generative Adversarial
Networks (GAN) and Minimum Kantorovitch Estimators (MKE). | [
"stat.ML"
] |
Estimating 3D human pose from a single image suffers from severe ambiguity
since multiple 3D joint configurations may have the same 2D projection. The
state-of-the-art methods often rely on context modeling methods such as
pictorial structure model (PSM) or graph neural network (GNN) to reduce
ambiguity. However, there is no study that rigorously compares them side by
side. We therefore first present a general formula for context modeling in which both
PSM and GNN are its special cases. By comparing the two methods, we found that
the end-to-end training scheme in GNN and the limb length constraints in PSM
are two complementary factors to improve results. To combine their advantages,
we propose ContextPose based on attention mechanism that allows enforcing soft
limb length constraints in a deep network. The approach effectively reduces the
chance of getting absurd 3D pose estimates with incorrect limb lengths and
achieves state-of-the-art results on two benchmark datasets. More importantly,
the introduction of limb length constraints into deep networks enables the
approach to achieve much better generalization performance. | [
"cs.CV"
] |
Episodic memory is a psychology term which refers to the ability to recall
specific events from the past. We suggest one advantage of this particular type
of memory is the ability to easily assign credit to a specific state when
remembered information is found to be useful. Inspired by this idea, and the
increasing popularity of external memory mechanisms to handle long-term
dependencies in deep learning systems, we propose a novel algorithm which uses
a reservoir sampling procedure to maintain an external memory consisting of a
fixed number of past states. The algorithm allows a deep reinforcement learning
agent to learn online to preferentially remember those states which are found
to be useful to recall later on. Critically, this method allows for efficient
online computation of gradient estimates with respect to the write process of
the external memory. Thus unlike most prior mechanisms for external memory it
is feasible to use in an online reinforcement learning setting. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
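For reference, the classic (unweighted) reservoir sampling update that underlies such an external memory looks like this; the procedure in the abstract additionally learns to preferentially keep useful states, which the uniform sketch below does not capture.

```python
import random

def reservoir_update(memory, item, t, capacity):
    """Algorithm R: after seeing t items (1-indexed), each item remains in the
    size-`capacity` memory with equal probability capacity / t."""
    if len(memory) < capacity:
        memory.append(item)
    else:
        j = random.randint(0, t - 1)
        if j < capacity:
            memory[j] = item
    return memory
```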
Efficient processing of large-scale time series data is an intricate problem
in machine learning. Conventional sensor signal processing pipelines with
hand-engineered feature extraction often involve huge computational costs on
high-dimensional data. Deep recurrent neural networks have shown promise in
automated feature learning for improved time-series processing. However,
generic deep recurrent models grow in scale and depth with increased complexity
of the data. This is particularly challenging in the presence of high-dimensional
data with temporal and spatial characteristics. Consequently, this work
proposes a novel deep cellular recurrent neural network (DCRNN) architecture to
efficiently process complex multi-dimensional time series data with spatial
information. The cellular recurrent architecture in the proposed model allows
for location-aware synchronous processing of time series data from spatially
distributed sensor signal sources. Extensive trainable parameter sharing due to
cellularity in the proposed architecture ensures efficiency in the use of
recurrent processing units with high-dimensional inputs. This study also
investigates the versatility of the proposed DCRNN model for classification of
multi-class time series data from different application domains. Consequently,
the proposed DCRNN architecture is evaluated using two time-series datasets: a
multichannel scalp EEG dataset for seizure detection, and a machine fault
detection dataset obtained in-house. The results suggest that the proposed
architecture achieves state-of-the-art performance while utilizing
substantially less trainable parameters when compared to comparable methods in
the literature. | [
"cs.LG",
"cs.CV",
"eess.SP"
] |
Multi-class 3D object detection aims to localize and classify objects of
multiple categories from point clouds. Due to the nature of point clouds, i.e.
unstructured, sparse and noisy, some features benefiting multi-class
discrimination are underexploited, such as shape information. In this paper, we
propose a novel 3D shape signature to explore the shape information from point
clouds. By incorporating operations of symmetry, convex hull and Chebyshev
fitting, the proposed shape signature is not only compact and effective but
also robust to the noise, which serves as a soft constraint to improve the
feature capability of multi-class discrimination. Based on the proposed shape
signature, we develop the shape signature networks (SSN) for 3D object
detection, which consist of pyramid feature encoding part, shape-aware grouping
heads and explicit shape encoding objective. Experiments show that the proposed
method performs remarkably better than existing methods on two large-scale
datasets. Furthermore, our shape signature can act as a plug-and-play component
and an ablation study shows its effectiveness and good scalability. | [
"cs.CV"
] |
Neural architecture search (NAS) enables researchers to automatically explore
broad design spaces in order to improve efficiency of neural networks. This
efficiency is especially important in the case of on-device deployment, where
improvements in accuracy should be balanced out with computational demands of a
model. In practice, performance metrics of a model are computationally expensive
to obtain. Previous work uses a proxy (e.g., number of operations) or a
layer-wise measurement of neural network layers to estimate end-to-end hardware
performance but the imprecise prediction diminishes the quality of NAS. To
address this problem, we propose BRP-NAS, an efficient hardware-aware NAS
enabled by an accurate performance predictor based on a graph convolutional
network (GCN). Moreover, we investigate prediction quality on different
metrics and show that sample efficiency of the predictor-based NAS can be
improved by considering binary relations of models and an iterative data
selection strategy. We show that our proposed method outperforms all prior
methods on NAS-Bench-101 and NAS-Bench-201, and that our predictor can
consistently learn to extract useful features from the DARTS search space,
improving upon the second-order baseline. Finally, to raise awareness of the
fact that accurate latency estimation is not a trivial task, we release
LatBench -- a latency dataset of NAS-Bench-201 models running on a broad range
of devices. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
Disentangled Graph Convolutional Network (DisenGCN) is an encouraging
framework to disentangle the latent factors arising in a real-world graph.
However, it relies on disentangling information heavily from a local range
(i.e., a node and its 1-hop neighbors), while the local information in many
cases can be uneven and incomplete, hindering the interpretability power and
model performance of DisenGCN. In this paper, we introduce a novel Local and
Global Disentangled Graph Convolutional Network (LGD-GCN) to capture both local
and global information for graph disentanglement. LGD-GCN performs a
statistical mixture modeling to derive a factor-aware latent continuous space,
and then constructs different structures w.r.t. different factors from the
revealed space. In this way, the global factor-specific information can be
efficiently and selectively encoded via a message passing along these built
structures, strengthening the intra-factor consistency. We also propose a novel
diversity promoting regularizer employed with the latent space modeling, to
encourage inter-factor diversity. Evaluations of the proposed LGD-GCN on the
synthetic and real-world datasets show better interpretability and improved
performance in node classification over the existing competitive models. | [
"cs.LG"
] |
In the continual effort to improve product quality and decrease operations
costs, computational modeling is increasingly being deployed to determine
feasibility of product designs or configurations. Surrogate modeling of these
computer experiments via local models, which induce sparsity by only
considering short range interactions, can tackle huge analyses of complicated
input-output relationships. However, narrowing focus to local scale means that
global trends must be re-learned over and over again. In this article, we
propose a framework for incorporating information from a global sensitivity
analysis into the surrogate model as an input rotation and rescaling
preprocessing step. We discuss the relationship between several sensitivity
analysis methods based on kernel regression before describing how they give
rise to a transformation of the input variables. Specifically, we perform an
input warping such that the "warped simulator" is equally sensitive to all
input directions, freeing local models to focus on local dynamics. Numerical
experiments on observational data and benchmark test functions, including a
high-dimensional computer simulator from the automotive industry, provide
empirical validation. | [
"stat.ML",
"cs.LG"
] |
Image interpolation, or image morphing, refers to a visual transition between
two (or more) input images. For such a transition to look visually appealing,
its desirable properties are (i) to be smooth; (ii) to apply the minimal
required change in the image; and (iii) to seem "real", avoiding unnatural
artifacts in each image in the transition. To obtain a smooth and
straightforward transition, one may adopt the well-known Wasserstein Barycenter
Problem (WBP). While this approach guarantees minimal changes under the
Wasserstein metric, the resulting images might seem unnatural. In this work, we
propose a novel approach for image morphing that possesses all three desired
properties. To this end, we define a constrained variant of the WBP that
enforces the intermediate images to satisfy an image prior. We describe an
algorithm that solves this problem and demonstrate it using the sparse prior
and generative adversarial networks. | [
"cs.CV"
] |
Graph pooling, which summarizes the information in a large graph into a compact
form, is essential in hierarchical graph representation learning. Existing graph
pooling methods either suffer from high computational complexity or cannot
capture the global dependencies between graphs before and after pooling. To
address the problems of existing graph pooling methods, we propose Coarsened
Graph Infomax Pooling (CGIPool) that maximizes the mutual information between
the input and the coarsened graph of each pooling layer to preserve graph-level
dependencies. To achieve mutual information neural maximization, we apply
contrastive learning and propose a self-attention-based algorithm for learning
positive and negative samples. Extensive experimental results on seven datasets
illustrate the superiority of CGIPool compared to the state-of-the-art
methods. | [
"cs.LG",
"cs.AI"
] |
In this paper the problem of forecasting high dimensional time series is
considered. Such time series can be modeled as matrices where each column
denotes a measurement. In addition, when missing values are present, low rank
matrix factorization approaches are suitable for predicting future values. This
paper formally defines and analyzes the forecasting problem in the online
setting, i.e. where the data arrives as a stream and only a single pass is
allowed. We present and analyze novel matrix factorization techniques which can
learn low-dimensional embeddings effectively in an online manner. Based on
these embeddings a recursive minimum mean square error estimator is derived,
which learns an autoregressive model on them. Experiments with two real
datasets with tens of millions of measurements show the benefits of the
proposed approach. | [
"cs.LG"
] |
Generative adversarial networks (GANs) are the state of the art in generative
modeling. Unfortunately, most GAN methods are susceptible to mode collapse,
meaning that they tend to capture only a subset of the modes of the true
distribution. A possible way of dealing with this problem is to use an ensemble
of GANs, where (ideally) each network models a single mode. In this paper, we
introduce a principled method for training an ensemble of GANs using
semi-discrete optimal transport theory. In our approach, each generative
network models the transportation map between a point mass (Dirac measure) and
the restriction of the data distribution on a tile of a Voronoi tessellation
that is defined by the location of the point masses. We iteratively train the
generative networks and the point masses until convergence. The resulting
k-GANs algorithm has strong theoretical connection with the k-medoids
algorithm. In our experiments, we show that our ensemble method consistently
outperforms baseline GANs. | [
"stat.ML",
"cs.LG"
] |
Deploying deep neural networks on mobile devices is a challenging task.
Current model compression methods such as matrix decomposition effectively
reduce the deployed model size, but still cannot satisfy real-time processing
requirement. This paper first discovers that the major obstacle is the
excessive execution time of non-tensor layers such as pooling and normalization
without tensor-like trainable parameters. This motivates us to design a novel
acceleration framework, DeepRebirth, through "slimming" existing consecutive
and parallel non-tensor and tensor layers. The layer slimming is executed at
different substructures: (a) streamline slimming by merging consecutive
non-tensor and tensor layers vertically; (b) branch slimming by merging
non-tensor and tensor branches horizontally. The proposed optimization
operations significantly accelerate model execution and also greatly reduce
the run-time memory cost, since the slimmed model architecture contains fewer
hidden layers. To maximally avoid accuracy loss, the parameters in newly
generated layers are learned with layer-wise fine-tuning based on both
theoretical analysis and empirical verification. As observed in the experiment,
DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on
GoogLeNet with only 0.4% drop of top-5 accuracy on ImageNet. Furthermore, by
combining with other model compression techniques, DeepRebirth offers an
average of 65ms inference time on the CPU of Samsung Galaxy S6 with 86.5% top-5
accuracy, 14% faster than SqueezeNet which only has a top-5 accuracy of 80.5%. | [
"cs.CV"
] |
Many few-shot learning models focus on recognising images. In contrast, we
tackle a challenging task of few-shot action recognition from videos. We build
on a C3D encoder for spatio-temporal video blocks to capture short-range action
patterns. Such encoded blocks are aggregated by permutation-invariant pooling
to make our approach robust to varying action lengths and long-range temporal
dependencies whose patterns are unlikely to repeat even in clips of the same
class. Subsequently, the pooled representations are combined into simple
relation descriptors which encode so-called query and support clips. Finally,
relation descriptors are fed to the comparator with the goal of similarity
learning between query and support clips. Importantly, to re-weight block
contributions during pooling, we exploit spatial and temporal attention modules
and self-supervision. In naturalistic clips (of the same class) there exists a
temporal distribution shift--the locations of discriminative temporal action
hotspots vary. Thus, we permute blocks of a clip and align the resulting
attention regions with the similarly permuted attention regions of the
non-permuted clip to train the attention mechanism to be invariant to block
(and thus long-term hotspot) permutations. Our method outperforms the state of
the art on the HMDB51, UCF101 and miniMIT datasets. | [
"cs.CV"
] |
In this paper, we introduce Cirrus, a new long-range bi-pattern LiDAR public
dataset for autonomous driving tasks such as 3D object detection, critical to
highway driving and timely decision making. Our platform is equipped with a
high-resolution video camera and a pair of LiDAR sensors with a 250-meter
effective range, which is significantly longer than existing public datasets.
We record paired point clouds simultaneously using both Gaussian and uniform
scanning patterns. Point density varies significantly across such a long range,
and different scanning patterns further diversify object representation in
LiDAR. In Cirrus, eight categories of objects are exhaustively annotated in the
LiDAR point clouds for the entire effective range. To illustrate the kind of
studies supported by this new dataset, we introduce LiDAR model adaptation
across different ranges, scanning patterns, and sensor devices. Promising
results show the great potential of this new dataset to the robotics and
computer vision communities. | [
"cs.CV"
] |
The dominant paradigm in spatiotemporal action detection is to classify
actions using spatiotemporal features learned by 2D or 3D Convolutional
Networks. We argue that several actions are characterized by their context,
such as relevant objects and actors present in the video. To this end, we
introduce an architecture based on self-attention and Graph Convolutional
Networks in order to model contextual cues, such as actor-actor and
actor-object interactions, to improve human action detection in video. We are
interested in achieving this in a weakly-supervised setting, i.e. using as few
annotations as possible in terms of action bounding boxes. Our model aids
explainability by visualizing the learned context as an attention map, even for
actions and objects unseen during training. We evaluate how well our model
highlights the relevant context by introducing a quantitative metric based on
recall of objects retrieved by attention maps. Our model relies on a 3D
convolutional RGB stream, and does not require expensive optical flow
computation. We evaluate our models on the DALY dataset, which consists of
human-object interaction actions. Experimental results show that our
contextualized approach outperforms a baseline action detection approach by
more than 2 points in Video-mAP. Code is available at
\url{https://github.com/micts/acgcn} | [
"cs.LG",
"cs.CV"
] |
Astounding results from Transformer models on natural language tasks have
intrigued the vision community to study their application to computer vision
problems. Among their salient benefits, Transformers enable modeling long
dependencies between input sequence elements and support parallel processing of
sequence as compared to recurrent networks e.g., Long short-term memory (LSTM).
Different from convolutional networks, Transformers require minimal inductive
biases for their design and are naturally suited as set-functions. Furthermore,
the straightforward design of Transformers allows processing multiple
modalities (e.g., images, videos, text and speech) using similar processing
blocks and demonstrates excellent scalability to very large capacity networks
and huge datasets. These strengths have led to exciting progress on a number of
vision tasks using Transformer networks. This survey aims to provide a
comprehensive overview of the Transformer models in the computer vision
discipline. We start with an introduction to fundamental concepts behind the
success of Transformers i.e., self-attention, large-scale pre-training, and
bidirectional encoding. We then cover extensive applications of transformers in
vision including popular recognition tasks (e.g., image classification, object
detection, action recognition, and segmentation), generative modeling,
multi-modal tasks (e.g., visual-question answering, visual reasoning, and
visual grounding), video processing (e.g., activity recognition, video
forecasting), low-level vision (e.g., image super-resolution, image
enhancement, and colorization) and 3D analysis (e.g., point cloud
classification and segmentation). We compare the respective advantages and
limitations of popular techniques both in terms of architectural design and
their experimental value. Finally, we provide an analysis on open research
directions and possible future works. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Existing methods for electroencephalograph (EEG) emotion recognition typically
train models on all EEG samples indiscriminately. However, some of the source
(training) samples may exert a negative influence because they are
significantly dissimilar to the target (test) samples. It is therefore
necessary to give more attention to the EEG samples with strong transferability
rather than forcefully training a classification model on all samples.
Furthermore, from a neuroscience perspective, not all brain regions of an EEG
sample contain emotional information that can be transferred effectively to the
test data; data from some brain regions can even have a strong negative effect
on learning the emotion classification model. Considering these two issues, in
this paper we propose a transferable attention neural network (TANN) for EEG
emotion recognition, which learns emotionally discriminative information by
adaptively highlighting the transferable EEG brain-region data and samples
through local and global attention mechanisms. This is implemented by measuring
the outputs of multiple brain-region-level discriminators and one single
sample-level discriminator. We conduct extensive experiments on three public
EEG emotion datasets. The results validate that the proposed model achieves
state-of-the-art performance. | [
"cs.CV",
"cs.HC"
] |
In this paper, we propose PointRCNN for 3D object detection from raw point
cloud. The whole framework is composed of two stages: stage-1 for the bottom-up
3D proposal generation and stage-2 for refining proposals in the canonical
coordinates to obtain the final detection results. Instead of generating
proposals from RGB image or projecting point cloud to bird's view or voxels as
previous methods do, our stage-1 sub-network directly generates a small number
of high-quality 3D proposals from point cloud in a bottom-up manner via
segmenting the point cloud of the whole scene into foreground points and
background. The stage-2 sub-network transforms the pooled points of each
proposal to canonical coordinates to learn better local spatial features, which
is combined with global semantic features of each point learned in stage-1 for
accurate box refinement and confidence prediction. Extensive experiments on the
3D detection benchmark of KITTI dataset show that our proposed architecture
outperforms state-of-the-art methods with remarkable margins by using only
point cloud as input. The code is available at
https://github.com/sshaoshuai/PointRCNN. | [
"cs.CV"
] |
Recent years have witnessed the unprecedented success of deep convolutional
neural networks (CNNs) in single image super-resolution (SISR). However,
existing CNN-based SISR methods mostly assume that a low-resolution (LR) image
is bicubically downsampled from a high-resolution (HR) image, thus inevitably
giving rise to poor performance when the true degradation does not follow this
assumption. Moreover, they lack scalability in learning a single model to
non-blindly deal with multiple degradations. To address these issues, we
propose a general framework with dimensionality stretching strategy that
enables a single convolutional super-resolution network to take two key factors
of the SISR degradation process, i.e., blur kernel and noise level, as input.
Consequently, the super-resolver can handle multiple and even spatially variant
degradations, which significantly improves the practicability. Extensive
experimental results on synthetic and real LR images show that the proposed
convolutional super-resolution network not only can produce favorable results
on multiple degradations but also is computationally efficient, providing a
highly effective and scalable solution to practical SISR applications. | [
"cs.CV"
] |
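The dimensionality-stretching strategy can be pictured as follows (a sketch under our reading of the abstract; the blur-kernel projection step and the exact map layout are assumptions): the degradation parameters are stretched into per-pixel maps and concatenated with the LR image so that a single network conditions on them.

```python
import torch

def stretch_degradation(lr_image, kernel_code, noise_level):
    """Concatenate stretched degradation maps with the LR input.

    lr_image:    (B, C, H, W) low-resolution image
    kernel_code: (B, t) projected blur-kernel representation (projection, e.g.
                 by PCA, assumed to be done elsewhere)
    noise_level: (B, 1) noise level
    """
    b, _, h, w = lr_image.shape
    deg = torch.cat([kernel_code, noise_level], dim=1)        # (B, t + 1)
    deg_maps = deg[:, :, None, None].expand(-1, -1, h, w)     # stretch to (B, t + 1, H, W)
    return torch.cat([lr_image, deg_maps], dim=1)             # conditioned network input
```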
Existing Multiple-Object Tracking (MOT) methods either follow the
tracking-by-detection paradigm to conduct object detection, feature extraction
and data association separately, or have two of the three subtasks integrated
to form a partially end-to-end solution. Going beyond these sub-optimal
frameworks, we propose a simple online model named Chained-Tracker (CTracker),
which naturally integrates all the three subtasks into an end-to-end solution
(the first as far as we know). It chains paired bounding box regression
results estimated from overlapping nodes, where each node covers two
adjacent frames. The paired regression is made attentive by object-attention
(brought by a detection module) and identity-attention (ensured by an ID
verification module). The two major novelties: chained structure and paired
attentive regression, make CTracker simple, fast and effective, setting new
MOTA records on MOT16 and MOT17 challenge datasets (67.6 and 66.6,
respectively), without relying on any extra training data. The source code of
CTracker can be found at: github.com/pjl1995/CTracker. | [
"cs.CV"
] |
We present DietNeRF, a 3D neural scene representation estimated from a few
images. Neural Radiance Fields (NeRF) learn a continuous volumetric
representation of a scene through multi-view consistency, and can be rendered
from novel viewpoints by ray casting. While NeRF has an impressive ability to
reconstruct geometry and fine details given many images, up to 100 for
challenging 360{\deg} scenes, it often finds a degenerate solution to its image
reconstruction objective when only a few input views are available. To improve
few-shot quality, we propose DietNeRF. We introduce an auxiliary semantic
consistency loss that encourages realistic renderings at novel poses. DietNeRF
is trained on individual scenes to (1) correctly render given input views from
the same pose, and (2) match high-level semantic attributes across different,
random poses. Our semantic loss allows us to supervise DietNeRF from arbitrary
poses. We extract these semantics using a pre-trained visual encoder such as
CLIP, a Vision Transformer trained on hundreds of millions of diverse
single-view, 2D photographs mined from the web with natural language
supervision. In experiments, DietNeRF improves the perceptual quality of
few-shot view synthesis when learned from scratch, can render novel views with
as few as one observed image when pre-trained on a multi-view dataset, and
produces plausible completions of completely unobserved regions. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG"
] |
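The auxiliary semantic consistency loss can be sketched as a cosine-distance term between encoder embeddings of a rendering from an arbitrary pose and of an observed view. This is our illustration, not DietNeRF's exact formulation, and `encoder.encode_image` is a hypothetical handle for a pre-trained visual encoder such as CLIP.

```python
import torch.nn.functional as F

def semantic_consistency_loss(encoder, rendered, observed):
    """Encourage a rendering from a novel pose to share high-level semantics with
    an observed view: 1 - cosine similarity of their encoder embeddings."""
    z_r = F.normalize(encoder.encode_image(rendered), dim=-1)
    z_o = F.normalize(encoder.encode_image(observed), dim=-1)
    return 1.0 - (z_r * z_o).sum(dim=-1).mean()
```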
The common pipeline in autonomous driving systems is highly modular and
includes a perception component which extracts lists of surrounding objects and
passes these lists to a high-level decision component. In this case, leveraging
the benefits of deep reinforcement learning for high-level decision making
requires special architectures to deal with multiple variable-length sequences
of different object types, such as vehicles, lanes or traffic signs. At the
same time, the architecture has to be able to cover interactions between
traffic participants in order to find the optimal action to be taken. In this
work, we propose the novel Deep Scenes architecture, that can learn complex
interaction-aware scene representations based on extensions of either 1) Deep
Sets or 2) Graph Convolutional Networks. We present the Graph-Q and DeepScene-Q
off-policy reinforcement learning algorithms, both outperforming
state-of-the-art methods in evaluations with the publicly available traffic
simulator SUMO. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Existing research on image-text retrieval mainly relies on sentence-level
supervision to distinguish matched and mismatched sentences for a query image.
However, semantic mismatch between an image and sentences usually happens at a
finer grain, i.e., the phrase level. In this paper, we explore introducing
additional phrase-level supervision for better identification of mismatched
units in the text. In practice, multi-grained semantic labels are automatically
constructed for a query image at both the sentence level and the phrase level.
We construct text scene graphs for the matched sentences and extract entities
and triples as the phrase-level labels. In order to integrate both
sentence-level and phrase-level supervision, we propose the Semantic Structure
Aware Multimodal Transformer (SSAMT) for multi-modal representation learning.
Inside the SSAMT, we utilize different kinds of attention mechanisms to enforce
interactions of multi-grained semantic units on both the vision and language
sides. For the
training, we propose multi-scale matching losses from both global and local
perspectives, and penalize mismatched phrases. Experimental results on MS-COCO
and Flickr30K show the effectiveness of our approach compared to some
state-of-the-art models. | [
"cs.CV",
"cs.CL"
] |
Generative Adversarial Networks (GANs) have recently received great attention
due to their excellent performance in image generation, transformation, and
super-resolution. However, GANs have rarely been studied and trained for
classification, so the generated images may not be appropriate for
classification. In this paper, we propose a novel Generative Adversarial
Classifier (GAC) particularly for low-resolution handwritten character
recognition. Specifically, by additionally involving a classifier in the
training process of normal GANs, GAC is calibrated to learn suitable structures
and restored character images that benefit classification. Experimental results
show that our proposed method achieves remarkable performance in 8x
super-resolution of handwritten characters, approximately 10% and 20% higher
than the current state-of-the-art methods on the benchmark datasets
CASIA-HWDB1.1 and MNIST, respectively. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We consider the task of learning control policies for a robotic mechanism
striking a puck in an air hockey game. The control signal is a direct command
to the robot's motors. We employ a model-free deep reinforcement learning
framework to learn the motoric skills of striking the puck accurately in order
to score. We propose certain improvements to the standard learning scheme which
make the deep Q-learning algorithm feasible when it might otherwise fail. Our
improvements include integrating prior knowledge into the learning scheme, and
accounting for the changing distribution of samples in the experience replay
buffer. Finally, we present our simulation results for aimed striking, which
demonstrate the successful learning of this task, and the improvement in
algorithm stability due to the proposed modifications. | [
"cs.LG",
"cs.RO"
] |
Recently, many methods have been proposed for object detection, but they cannot
adaptively detect objects by semantic features. In this work, we analyze how
channel and spatial attention mechanisms allow different methods to detect
objects adaptively. Some state-of-the-art detectors combine different feature
pyramids with many mechanisms to enhance multi-level semantic information, but
they incur a higher computational cost. This work addresses the problem with an
anchor-free detector that uses a shared encoder-decoder with an attention
mechanism to extract shared features. We consider features of different levels
from the backbone (e.g., ResNet-50) as the basis features. Then, we feed these
features into a simple module, followed by a detector head, to detect objects.
Meanwhile, we use the semantic features to revise geometric locations, so the
detector performs pixel-semantic revision of positions. More importantly, this
work analyzes the impact of different pooling strategies (e.g., mean, maximum,
or minimum) on multi-scale objects, and finds that minimum pooling improves
detection performance on small objects. Compared with the state-of-the-art MNC
based on ResNet-101 on the standard MSCOCO 2014 benchmark, our method
improves detection AP by 3.8%. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Image segmentation techniques are predominantly based on parameter-laden
optimization. The objective function typically involves weights for balancing
competing image fidelity and segmentation regularization cost terms. Setting
these weights suitably has been a painstaking, empirical process. Even if such
ideal weights are found for a novel image, most current approaches fix the
weight across the whole image domain, ignoring the spatially-varying properties
of object shape and image appearance. We propose a novel technique that
autonomously balances these terms in a spatially-adaptive manner through the
incorporation of image reliability in a graph-based segmentation framework. We
validate on synthetic data achieving a reduction in mean error of 47% (p-value
<< 0.05) when compared to the best fixed parameter segmentation. We also
present results on medical images (including segmentations of the corpus
callosum and brain tissue in MRI data) and on natural images. | [
"cs.CV",
"I.4.6"
] |
Background: Patients with neovascular age-related macular degeneration (AMD)
can avoid vision loss via certain therapy. However, methods to predict the
progression to neovascular age-related macular degeneration (nvAMD) are
lacking. Purpose: To develop and validate a deep learning (DL) algorithm to
predict 1-year progression of eyes with no, early, or intermediate AMD to
nvAMD, using color fundus photographs (CFP). Design: Development and validation
of a DL algorithm. Methods: We trained a DL algorithm to predict 1-year
progression to nvAMD, and used 10-fold cross-validation to evaluate this
approach on two groups of eyes in the Age-Related Eye Disease Study (AREDS):
none/early/intermediate AMD, and intermediate AMD (iAMD) only. We compared the
DL algorithm to the manually graded 4-category and 9-step scales in the AREDS
dataset. Main outcome measures: Performance of the DL algorithm was evaluated
using the sensitivity at 80% specificity for progression to nvAMD. Results: The
DL algorithm's sensitivity for predicting progression to nvAMD from
none/early/iAMD (78+/-6%) was higher than manual grades from the 9-step scale
(67+/-8%) or the 4-category scale (48+/-3%). For predicting progression
specifically from iAMD, the DL algorithm's sensitivity (57+/-6%) was also
higher compared to the 9-step grades (36+/-8%) and the 4-category grades
(20+/-0%). Conclusions: Our DL algorithm performed better in predicting
progression to nvAMD than manual grades. Future investigations are required to
test the application of this DL algorithm in a real-world clinical setting. | [
"cs.CV"
] |
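The reported evaluation metric, sensitivity at 80% specificity, can be computed directly from an ROC curve; the labels and scores below are placeholders, not AREDS data.

```python
# Sensitivity at a fixed specificity, derived from the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.80):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    specificity = 1.0 - fpr
    ok = specificity >= target_specificity        # thresholds meeting the target
    return tpr[ok].max() if ok.any() else 0.0     # best sensitivity among them

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70])
print(sensitivity_at_specificity(y_true, y_score))
```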
Colonoscopy is the tool of choice for preventing Colorectal Cancer, by
detecting and removing polyps before they become cancerous. However,
colonoscopy is hampered by the fact that endoscopists routinely miss 22-28% of
polyps. While some of these missed polyps appear in the endoscopist's field of
view, others are missed simply because of substandard coverage of the
procedure, i.e. not all of the colon is seen. This paper attempts to rectify
the problem of substandard coverage in colonoscopy through the introduction of
the C2D2 (Colonoscopy Coverage Deficiency via Depth) algorithm which detects
deficient coverage, and can thereby alert the endoscopist to revisit a given
area. More specifically, C2D2 consists of two separate algorithms: the first
performs depth estimation of the colon given an ordinary RGB video stream;
while the second computes coverage given these depth estimates. Rather than
compute coverage for the entire colon, our algorithm computes coverage locally,
on a segment-by-segment basis; C2D2 can then indicate in real-time whether a
particular area of the colon has suffered from deficient coverage, and if so
the endoscopist can return to that area. Our coverage algorithm is the first
such algorithm to be evaluated in a large-scale way; while our depth estimation
technique is the first calibration-free unsupervised method applied to
colonoscopies. The C2D2 algorithm achieves state-of-the-art results in the
detection of deficient coverage. On synthetic sequences with ground truth, it
is 2.4 times more accurate than human experts; while on real sequences, C2D2
achieves a 93.0% agreement with experts. | [
"cs.CV"
] |
Recognizing irregular text in natural scene images is challenging due to the
large variance in text appearance, such as curvature, orientation and
distortion. Most existing approaches rely heavily on sophisticated model
designs and/or extra fine-grained annotations, which, to some extent, increase
the difficulty in algorithm implementation and data collection. In this work,
we propose an easy-to-implement strong baseline for irregular scene text
recognition, using off-the-shelf neural network components and only word-level
annotations. It is composed of a $31$-layer ResNet, an LSTM-based
encoder-decoder framework and a 2-dimensional attention module. Despite its
simplicity, the proposed method is robust and achieves state-of-the-art
performance on both regular and irregular scene text recognition benchmarks.
Code is available at: https://tinyurl.com/ShowAttendRead | [
"cs.CV"
] |
Graph neural networks (GNNs) have been successfully applied to learning
representation on graphs in many relational tasks. Recently, researchers study
neural architecture search (NAS) to reduce the dependence of human expertise
and explore better GNN architectures, but they over-emphasize entity features
and ignore latent relation information concealed in the edges. To solve this
problem, we incorporate edge features into graph search space and propose
Edge-featured Graph Neural Architecture Search to find the optimal GNN
architecture. Specifically, we design rich entity and edge updating operations
to learn high-order representations, which convey more generic message passing
mechanisms. Moreover, the architecture topology in our search space allows to
explore complex feature dependence of both entities and edges, which can be
efficiently optimized by a differentiable search strategy. Experiments on three
graph tasks over six datasets show that EGNAS can search for better GNNs with
higher performance than current state-of-the-art human-designed and
search-based GNNs. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Facial expression synthesis has achieved remarkable advances with the advent
of Generative Adversarial Networks (GANs). However, GAN-based approaches mostly
generate photo-realistic results as long as the testing data distribution is
close to the training data distribution. The quality of GAN results
significantly degrades when testing images are from a slightly different
distribution. Moreover, recent work has shown that facial expressions can be
synthesized by changing localized face regions. In this work, we propose a
pixel-based facial expression synthesis method in which each output pixel
observes only one input pixel. The proposed method achieves good generalization
capability by leveraging only a few hundred training images. Experimental
results demonstrate that the proposed method performs comparably well against
state-of-the-art GANs on in-dataset images and significantly better on
out-of-dataset images. In addition, the proposed model is two orders of
magnitude smaller which makes it suitable for deployment on
resource-constrained devices. | [
"cs.CV"
] |
The role of robots in society keeps expanding, bringing with it the necessity
of interacting and communicating with humans. In order to keep such interaction
intuitive, we provide automatic wayfinding based on verbal navigational
instructions. Our first contribution is the creation of a large-scale dataset
with verbal navigation instructions. To this end, we have developed an
interactive visual navigation environment based on Google Street View; we
further design an annotation method to highlight mined anchor landmarks and
local directions between them in order to help annotators formulate typical,
human references to those. The annotation task was crowdsourced on the AMT
platform, to construct a new Talk2Nav dataset with $10,714$ routes. Our second
contribution is a new learning method. Inspired by spatial cognition research
on the mental conceptualization of navigational instructions, we introduce a
soft dual attention mechanism defined over the segmented language instructions
to jointly extract two partial instructions -- one for matching the next
upcoming visual landmark and the other for matching the local directions to the
next landmark. Along similar lines, we also introduce a spatial memory scheme to
encode the local directional transitions. Our work takes advantage of advances
in two lines of research: the mental formalization of verbal navigational
instructions and the training of neural network agents for automatic wayfinding.
Extensive experiments show that our method significantly outperforms previous
navigation methods. For demo video, dataset and code, please refer to our
project page: https://www.trace.ethz.ch/publications/2019/talk2nav/index.html | [
"cs.CV",
"cs.CL",
"cs.RO"
] |
Image guided depth completion is the task of generating a dense depth map
from a sparse depth map and a high quality image. In this task, how to fuse the
color and depth modalities plays an important role in achieving good
performance. This paper proposes a two-branch backbone that consists of a
color-dominant branch and a depth-dominant branch to exploit and fuse two
modalities thoroughly. More specifically, one branch inputs a color image and a
sparse depth map to predict a dense depth map. The other branch takes as inputs
the sparse depth map and the previously predicted depth map, and outputs a
dense depth map as well. The depth maps predicted from the two branches are
complementary to each other and are therefore adaptively fused. In
addition, we also propose a simple geometric convolutional layer to encode 3D
geometric cues. The geometric encoded backbone conducts the fusion of different
modalities at multiple stages, leading to good depth completion results. We
further implement a dilated and accelerated CSPN++ to refine the fused depth
map efficiently. The proposed full model ranks 1st in the KITTI depth
completion online leaderboard at the time of submission. It also infers much
faster than most of the top ranked methods. The code of this work is available
at https://github.com/JUGGHM/PENet_ICRA2021. | [
"cs.CV"
] |
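Our reading of the geometric convolutional layer is that it concatenates a back-projected 3D position map with the input features before an ordinary convolution; the sketch below is an assumed form (the repository linked above is authoritative), where K denotes the camera intrinsics.

```python
# Assumed sketch of a geometric convolution: encode 3D cues by back-projecting
# per-pixel depth into (x, y, z) coordinates and concatenating them with features.
import torch
import torch.nn as nn

class GeometricConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 3, out_ch, kernel_size, padding=kernel_size // 2)

    def forward(self, feats, depth, K):
        # feats: (B, C, H, W); depth: (B, 1, H, W); K: (B, 3, 3) intrinsics.
        B, _, H, W = depth.shape
        ys, xs = torch.meshgrid(torch.arange(H, device=depth.device),
                                torch.arange(W, device=depth.device), indexing="ij")
        xs = xs.view(1, 1, H, W).to(depth.dtype)
        ys = ys.view(1, 1, H, W).to(depth.dtype)
        fx = K[:, 0, 0].view(B, 1, 1, 1); fy = K[:, 1, 1].view(B, 1, 1, 1)
        cx = K[:, 0, 2].view(B, 1, 1, 1); cy = K[:, 1, 2].view(B, 1, 1, 1)
        x = (xs - cx) / fx * depth
        y = (ys - cy) / fy * depth
        xyz = torch.cat([x, y, depth], dim=1)                 # (B, 3, H, W) position map
        return self.conv(torch.cat([feats, xyz], dim=1))
```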
Although supervised deep representation learning has attracted enormous
attention across areas of pattern recognition and computer vision, little
progress has been made towards unsupervised deep representation learning for
image clustering. In this paper, we propose a deep spectral analysis network
for unsupervised representation learning and image clustering. While spectral
analysis is established with solid theoretical foundations and has been widely
applied to unsupervised data mining, its essential weakness lies in the fact
that it is difficult to construct a proper affinity matrix and determine the
corresponding Laplacian matrix for a given dataset. In this paper, we propose a
SA-Net to overcome these weaknesses and achieve improved image clustering by
extending the spectral analysis procedure into a deep learning framework with
multiple layers. The SA-Net has the capability to learn deep representations
and reveal deep correlations among data samples. Compared with the existing
spectral analysis, the SA-Net achieves two advantages: (i) Given the fact that
one spectral analysis procedure can only deal with one subset of the given
dataset, our proposed SA-Net elegantly integrates multiple parallel and
consecutive spectral analysis procedures together to enable interactive
learning across different units towards a coordinated clustering model; (ii)
Our SA-Net can identify the local similarities among different images at patch
level and hence achieves a higher level of robustness against occlusions.
Extensive experiments on a number of popular datasets support that our proposed
SA-Net outperforms 11 benchmarks across a number of image clustering
applications. | [
"cs.CV"
] |
Exposure bias refers to the train-test discrepancy that seemingly arises when
an autoregressive generative model uses only ground-truth contexts at training
time but generated ones at test time. We separate the contributions of the
model and the learning framework to clarify the debate on consequences and
review proposed counter-measures. In this light, we argue that generalization
is the underlying property to address and propose unconditional generation as
its fundamental benchmark. Finally, we combine latent variable modeling with a
recent formulation of exploration in reinforcement learning to obtain a
rigorous handling of true and generated contexts. Results on language modeling
and variational sentence auto-encoding confirm the model's generalization
capability. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Graph neural network (GNN) has recently been established as an effective
representation learning framework on graph data. However, the popular message
passing models rely on local permutation invariant aggregate functions, which
gives rise to the concerns about their representational power. Here, we
introduce the concept of automorphic equivalence to theoretically analyze GNN's
expressiveness in differentiating node's structural role. We show that the
existing message passing GNNs have limitations in learning expressive
representations. Moreover, we design a novel GNN class that leverages learnable
automorphic equivalence filters to explicitly differentiate the structural
roles of each node's neighbors, and uses a squeeze-and-excitation module to
fuse various structural information. We theoretically prove that the proposed
model is expressive in terms of generating distinct representations for nodes
with different structural features. Besides, we empirically validate our model
on eight real-world graph datasets, including social networks, e-commerce
co-purchase networks and citation networks, and show that it consistently
outperforms strong baselines. | [
"cs.LG",
"cs.AI"
] |
We present SMURF, a method for unsupervised learning of optical flow that
improves state of the art on all benchmarks by $36\%$ to $40\%$ (over the prior
best method UFlow) and even outperforms several supervised approaches such as
PWC-Net and FlowNet2. Our method integrates architecture improvements from
supervised optical flow, i.e. the RAFT model, with new ideas for unsupervised
learning that include a sequence-aware self-supervision loss, a technique for
handling out-of-frame motion, and an approach for learning effectively from
multi-frame video data while still only requiring two frames for inference. | [
"cs.CV"
] |
Mixture models are a fundamental tool in applied statistics and machine
learning for treating data taken from multiple subpopulations. The current
practice for estimating the parameters of such models relies on local search
heuristics (e.g., the EM algorithm) which are prone to failure, and existing
consistent methods are unfavorable due to their high computational and sample
complexity which typically scale exponentially with the number of mixture
components. This work develops an efficient method of moments approach to
parameter estimation for a broad class of high-dimensional mixture models with
many components, including multi-view mixtures of Gaussians (such as mixtures
of axis-aligned Gaussians) and hidden Markov models. The new method leads to
rigorous unsupervised learning results for mixture models that were not
achieved by previous works; and, because of its simplicity, it offers a viable
alternative to EM for practical deployment. | [
"cs.LG",
"stat.ML"
] |
The use of deep learning for human identification and object detection is
becoming ever more prevalent in the surveillance industry. These systems have
been trained to identify human bodies or faces with a high degree of accuracy.
However, there have been successful attempts to fool these systems with
different techniques called adversarial attacks. This paper presents a final
report for an adversarial attack using visible light on facial recognition
systems. The relevance of this research is to exploit the physical downfalls of
deep neural networks. This demonstration of weakness within these systems is
presented in the hope that this research will be used in the future to improve the training
models for object recognition. As results were gathered, the project objectives
were adjusted to fit the outcomes. Because of this, the following paper
initially explores an adversarial attack using infrared light before
readjusting to a visible light attack. A research outline on infrared light and
facial recognition is presented within. A detailed analysis of the current
findings and possible future recommendations for the project are presented. The
challenges encountered are evaluated and a final solution is delivered. The
project's final outcome exhibits the ability to effectively fool recognition
systems using light. | [
"cs.CV"
] |
Vision transformer has demonstrated promising performance on challenging
computer vision tasks. However, directly training the vision transformers may
yield unstable and sub-optimal results. Recent works propose to improve the
performance of the vision transformers by modifying the transformer structures,
e.g., incorporating convolution layers. In contrast, we investigate an
orthogonal approach to stabilize the vision transformer training without
modifying the networks. We observe the instability of the training can be
attributed to the significant similarity across the extracted patch
representations. More specifically, for deep vision transformers, the
self-attention blocks tend to map different patches into similar latent
representations, yielding information loss and performance degradation. To
alleviate this problem, in this work, we introduce novel loss functions in
vision transformer training to explicitly encourage diversity across patch
representations for more discriminative feature extraction. We empirically show
that our proposed techniques stabilize the training and allow us to train wider
and deeper vision transformers. We further show the diversified features
significantly benefit the downstream tasks in transfer learning. For semantic
segmentation, we enhance the state-of-the-art (SOTA) results on Cityscapes and
ADE20k. Our code is available at
https://github.com/ChengyueGongR/PatchVisionTransformer. | [
"cs.CV",
"cs.LG"
] |
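One assumed instantiation of such a diversity objective (the paper's exact losses may differ) penalizes the average pairwise cosine similarity between patch representations within each image, discouraging the self-attention blocks from collapsing patches onto near-identical vectors.

```python
# Assumed patch-diversity regularizer: mean off-diagonal cosine similarity.
import torch
import torch.nn.functional as F

def patch_diversity_loss(patch_tokens):
    # patch_tokens: (B, N, D) patch representations from a transformer block.
    z = F.normalize(patch_tokens, dim=-1)
    sim = torch.bmm(z, z.transpose(1, 2))                     # (B, N, N) cosine similarities
    n = z.shape[1]
    off_diag = sim.sum(dim=(1, 2)) - sim.diagonal(dim1=1, dim2=2).sum(dim=1)
    return (off_diag / (n * (n - 1))).mean()                  # add to the loss with a small weight
```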
Research in deep learning models to forecast traffic intensities has gained
great attention in recent years due to their capability to capture the complex
spatio-temporal relationships within the traffic data. However, most
state-of-the-art approaches have designed spatial-only (e.g., Graph Neural
Networks) and temporal-only (e.g., Recurrent Neural Networks) modules to
separately extract spatial and temporal features. We argue that it is
less effective to extract the complex spatio-temporal relationship with such
factorized modules. Besides, most existing works predict the traffic intensity
of a particular time interval based only on the traffic data of the previous
hour of that day, thereby ignoring the repetitive daily/weekly patterns that
cannot be captured by the last hour of data. Therefore, we propose a Unified
Spatio-Temporal Graph Convolution Network (USTGCN) for traffic forecasting that
performs both spatial and temporal aggregation through direct information
propagation across different timestamp nodes with the help of spectral graph
convolution on a spatio-temporal graph. Furthermore, it captures historical
daily patterns in previous days and current-day patterns in current-day traffic
data. Finally, we validate our work's effectiveness through experimental
analysis, which shows that our model USTGCN can outperform state-of-the-art
performances in three popular benchmark datasets from the Performance
Measurement System (PeMS). Moreover, the training time is reduced significantly
with our proposed USTGCN model. | [
"cs.LG"
] |
High quality standard cell layout automation in advanced technology nodes is
still challenging in the industry today because of complex design rules. In
this paper we introduce an automatic standard cell layout generator called
NVCell that can generate layouts with equal or smaller area for over 90% of
single row cells in an industry standard cell library on an advanced technology
node. NVCell leverages reinforcement learning (RL) to fix design rule
violations during routing and to generate efficient placements. | [
"cs.LG"
] |
In recent years, deep learning-based visual object trackers have achieved
state-of-the-art performance on several visual object tracking benchmarks.
However, most tracking benchmarks are focused on ground level videos, whereas
aerial tracking presents a new set of challenges. In this paper, we compare ten
trackers based on deep learning techniques on four aerial datasets. We choose
top performing trackers utilizing different approaches, specifically tracking
by detection, discriminative correlation filters, Siamese networks and
reinforcement learning. In our experiments, we use a subset of OTB2015 dataset
with aerial style videos; the UAV123 dataset without synthetic sequences; the
UAV20L dataset, which contains 20 long sequences; and DTB70 dataset as our
benchmark datasets. We compare the advantages and disadvantages of different
trackers in different tracking situations encountered in aerial data. Our
findings indicate that the trackers perform significantly worse in aerial
datasets compared to standard ground level videos. We attribute this effect to
smaller target size, camera motion, significant camera rotation with respect to
the target, out-of-view movement, and clutter in the form of occlusions or
similar-looking distractors near the tracked object. | [
"cs.CV",
"I.4; I.5"
] |
In the scenario of real-time monitoring of hospital patients, high-quality
inference of patients' health status using all information available from
clinical covariates and lab tests is essential to enable successful medical
interventions and improve patient outcomes. Developing a computational
framework that can learn from observational large-scale electronic health
records (EHRs) and make accurate real-time predictions is a critical step. In
this work, we develop and explore a Bayesian nonparametric model based on
Gaussian process (GP) regression for hospital patient monitoring. We propose
MedGP, a statistical framework that incorporates 24 clinical and lab covariates
and supports a rich reference data set from which relationships between
observed covariates may be inferred and exploited for high-quality inference of
patient state over time. To do this, we develop a highly structured sparse GP
kernel to enable tractable computation over tens of thousands of time points
while estimating correlations among clinical covariates, patients, and
periodicity in patient observations. MedGP has a number of benefits over
current methods, including (i) not requiring an alignment of the time series
data, (ii) quantifying confidence regions in the predictions, (iii) exploiting
a vast and rich database of patients, and (iv) inferring interpretable
relationships among clinical covariates. We evaluate and compare results from
MedGP on the task of online prediction for three patient subgroups from two
medical data sets across 8,043 patients. We found MedGP improves online
prediction over baseline methods for nearly all covariates across different
disease subgroups and studies. The publicly available code is at
https://github.com/bee-hive/MedGP. | [
"stat.ML"
] |
Biphasic facial age translation aims at predicting the appearance of the
input face at any age. Facial age translation has received considerable
research attention in the last decade due to its practical value in cross-age
face recognition and various entertainment applications. However, most existing
methods model age changes between holistic images, regardless of the human face
structure and the age-changing patterns of individual facial components.
Consequently, the lack of semantic supervision will cause infidelity of
generated faces in detail. To this end, we propose a unified framework for
biphasic facial age translation with noisy-semantic guided generative
adversarial networks. Structurally, we project the class-aware noisy semantic
layouts to soft latent maps for the following injection operation on the
individual facial parts. In particular, we introduce two sub-networks,
ProjectionNet and ConstraintNet. ProjectionNet introduces the low-level
structural semantic information with noise map and produces soft latent maps.
ConstraintNet disentangles the high-level spatial features to constrain the
soft latent maps, which endows more age-related context into the soft latent
maps. Specifically, an attention mechanism is employed in ConstraintNet for
feature disentanglement. Meanwhile, in order to mine the strongest mapping
ability of the network, we embed two types of learning strategies in the
training procedure, supervised self-driven generation and unsupervised
condition-driven cycle-consistent generation. As a result, extensive
experiments conducted on MORPH and CACD datasets demonstrate the prominent
ability of our proposed method which achieves state-of-the-art performance. | [
"cs.CV",
"cs.AI"
] |
The foot is a vital part of the human body, and a lot of valuable information is
embedded in it. Plantar pressure is one signal that carries this information and
describes human walking features. It has been shown that once one has trouble
with a lower limb, the distribution of plantar pressure changes to some degree.
Plantar pressure can be converted into images according to some simple standards. In
this paper, we take full advantage of these plantar pressure images for medical
usage. We present N2RPP, a generative adversarial network (GAN) based method to
rebuild plantar pressure images of anterior cruciate ligament deficiency (ACLD)
patients from low dimension features, which are extracted from an autoencoder.
Experimental results show that the extracted features are a useful
representation for describing and rebuilding plantar pressure images. According to
N2RPP's results, we find that there are several noteworthy differences
between normal people and patients. This can provide doctors with rough guidance
for adjusting plantar pressure toward a better distribution, reducing patients'
soreness and pain during rehabilitation treatment for ACLD. | [
"cs.CV"
] |
Deep generative models for graph-structured data offer a new angle on the
problem of chemical synthesis: by optimizing differentiable models that
directly generate molecular graphs, it is possible to side-step expensive
search procedures in the discrete and vast space of chemical structures. We
introduce MolGAN, an implicit, likelihood-free generative model for small
molecular graphs that circumvents the need for expensive graph matching
procedures or node ordering heuristics of previous likelihood-based methods.
Our method adapts generative adversarial networks (GANs) to operate directly on
graph-structured data. We combine our approach with a reinforcement learning
objective to encourage the generation of molecules with specific desired
chemical properties. In experiments on the QM9 chemical database, we
demonstrate that our model is capable of generating close to 100% valid
compounds. MolGAN compares favorably both to recent proposals that use
string-based (SMILES) representations of molecules and to a likelihood-based
method that directly generates graphs, albeit being susceptible to mode
collapse. | [
"stat.ML",
"cs.LG"
] |
Variational Auto-Encoders (VAEs) are capable of learning latent
representations for high dimensional data. However, due to the i.i.d.
assumption, VAEs only optimize the singleton variational distributions and fail
to account for the correlations between data points, which might be crucial for
learning latent representations from dataset where a priori we know
correlations exist. We propose Correlated Variational Auto-Encoders (CVAEs)
that can take the correlation structure into consideration when learning latent
representations with VAEs. CVAEs apply a prior based on the correlation
structure. To address the intractability introduced by the correlated prior, we
develop an approximation by averaging a set of tractable lower bounds over all
maximal acyclic subgraphs of the undirected correlation graph. Experimental
results on matching and link prediction on public benchmark rating datasets and
spectral clustering on a synthetic dataset show the effectiveness of the
proposed method over baseline algorithms. | [
"cs.LG",
"stat.ML"
] |
Dropout is a very effective method in preventing overfitting and has become
the go-to regularizer for multi-layer neural networks in recent years.
Hierarchical mixture of experts is a hierarchically gated model that defines a
soft decision tree where leaves correspond to experts and decision nodes
correspond to gating models that softly choose between its children, and as
such, the model defines a soft hierarchical partitioning of the input space. In
this work, we propose a variant of dropout for hierarchical mixture of experts
that is faithful to the tree hierarchy defined by the model, as opposed to
having a flat, unitwise independent application of dropout as one has with
multi-layer perceptrons. We show that on synthetic regression data and on
MNIST and CIFAR-10 datasets, our proposed dropout mechanism prevents
overfitting on trees with many levels improving generalization and providing
smoother fits. | [
"cs.LG",
"stat.ML"
] |
With the growing attention on learning-to-learn new tasks using only a few
examples, meta-learning has been widely used in numerous problems such as
few-shot classification, reinforcement learning, and domain generalization.
However, meta-learning models are prone to overfitting when there are no
sufficient training tasks for the meta-learners to generalize. Although
existing approaches such as Dropout are widely used to address the overfitting
problem, these methods are typically designed for regularizing models of a
single task in supervised training. In this paper, we introduce a simple yet
effective method to alleviate the risk of overfitting for gradient-based
meta-learning. Specifically, during the gradient-based adaptation stage, we
randomly drop the gradient in the inner-loop optimization of each parameter in
deep neural networks, such that the augmented gradients improve generalization
to new tasks. We present a general form of the proposed gradient dropout
regularization and show that this term can be sampled from either the Bernoulli
or Gaussian distribution. To validate the proposed method, we conduct extensive
experiments and analysis on numerous computer vision tasks, demonstrating that
the gradient dropout regularization mitigates the overfitting problem and
improves the performance upon various gradient-based meta-learning frameworks. | [
"cs.CV",
"cs.LG"
] |
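A minimal sketch of the gradient dropout idea for a MAML-style inner loop follows (Bernoulli variant; a Gaussian variant would perturb the mask values instead). The adaptation step and hyperparameters are assumptions for illustration.

```python
# Assumed sketch: randomly zero inner-loop gradients before the adaptation step.
import torch

def inner_loop_adapt(params, inner_loss, lr=0.01, drop_p=0.1):
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = []
    for p, g in zip(params, grads):
        keep = (torch.rand_like(g) > drop_p).float()     # Bernoulli keep-mask per entry
        adapted.append(p - lr * keep * g)                # dropped gradients are zeroed
    return adapted                                       # use for the outer (meta) loss
```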
Autoencoders and generative models produce some of the most spectacular deep
learning results to date. However, understanding and controlling the latent
space of these models presents a considerable challenge. Drawing inspiration
from principal component analysis and autoencoders, we propose the Principal
Component Analysis Autoencoder (PCAAE). This is a novel autoencoder whose
latent space verifies two properties. Firstly, the dimensions are organised in
decreasing importance with respect to the data at hand. Secondly, the
components of the latent space are statistically independent. We achieve this
by progressively increasing the latent space during training, and with a
covariance loss applied to the latent codes. The resulting autoencoder produces
a latent space which separates the intrinsic attributes of the data into
different components of the latent space, in a completely unsupervised manner.
We also describe an extension of our approach to the case of powerful,
pre-trained GANs. We show results on both synthetic examples of shapes and on a
state-of-the-art GAN. For example, we are able to separate the color shade
scale of hair and skin, the pose of faces, and the gender in CelebA, without
accessing any labels. We compare the PCAAE with other state-of-the-art
approaches, in particular with respect to the ability to disentangle attributes
in the latent space. We hope that this approach will contribute to better
understanding of the intrinsic latent spaces of powerful deep generative
models. | [
"cs.CV"
] |
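The covariance loss on the latent codes can be sketched as a penalty on the off-diagonal entries of the mini-batch covariance matrix (an assumed form based on the description above):

```python
# Assumed covariance loss encouraging statistically independent latent components.
import torch

def covariance_loss(z):
    # z: (B, d) latent codes of a mini-batch.
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()
```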
When it comes to complex machine learning models, commonly referred to as
black boxes, understanding the underlying decision making process is crucial
for domains such as healthcare and financial services, and also when it is used
in connection with safety-critical systems such as autonomous vehicles. As such,
interest in explainable artificial intelligence (xAI) tools and techniques has
increased in recent years. However, the effectiveness of existing xAI
frameworks, especially concerning algorithms that work with data as opposed to
images, is still an open research question. In order to address this gap, in
this paper we examine the effectiveness of the Local Interpretable
Model-Agnostic Explanations (LIME) xAI framework, one of the most popular model
agnostic frameworks found in the literature, with a specific focus on its
performance in terms of making tabular models more interpretable. In
particular, we apply several state of the art machine learning algorithms on a
tabular dataset, and demonstrate how LIME can be used to supplement
conventional performance assessment methods. In addition, we evaluate the
understandability of the output produced by LIME both via a usability study,
involving participants who are not familiar with LIME, and its overall
usability via an assessment framework, which is derived from the International
Organisation for Standardisation 9241-11:1998 standard. | [
"cs.LG"
] |
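For reference, explaining a single tabular prediction with LIME follows the standard lime API; the data and model below are placeholders, not the study's dataset.

```python
# Explaining one prediction of a tabular classifier with LIME.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X = np.random.rand(200, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["f0", "f1", "f2", "f3"],
    class_names=["neg", "pos"], mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())   # per-feature contributions for this single prediction
```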
Transport-based techniques for signal and data analysis have received
increased attention recently. Given their abilities to provide accurate
generative models for signal intensities and other data distributions, they
have been used in a variety of applications including content-based retrieval,
cancer detection, image super-resolution, and statistical machine learning, to
name a few, and have been shown to produce state-of-the-art results in several applications.
Moreover, the geometric characteristics of transport-related metrics have
inspired new kinds of algorithms for interpreting the meaning of data
distributions. Here we provide an overview of the mathematical underpinnings of
mass transport-related methods, including numerical implementation, as well as
a review, with demonstrations, of several applications. | [
"cs.CV"
] |
This paper proposes an out-of-sample extension framework for a global
manifold learning algorithm (Isomap) that uses temporal information in
out-of-sample points in order to make the embedding more robust to noise and
artifacts. Given a set of noise-free training data and its embedding, the
proposed framework extends the embedding for a noisy time series. This is
achieved by adding a spatio-temporal compactness term to the optimization
objective of the embedding. To the best of our knowledge, this is the first
method for out-of-sample extension of manifold embeddings that leverages timing
information available for the extension set. Experimental results demonstrate
that our out-of-sample extension algorithm renders a more robust and accurate
embedding of sequentially ordered image data in the presence of various noise
and artifacts when compared to other timing-aware embeddings. Additionally, we
show that an out-of-sample extension framework based on the proposed algorithm
outperforms the state of the art in eye-gaze estimation. | [
"stat.ML",
"cs.CG",
"cs.CV",
"cs.LG",
"cs.NE"
] |
In reinforcement learning, domain randomisation is an increasingly popular
technique for learning more general policies that are robust to domain-shifts
at deployment. However, naively aggregating information from randomised domains
may lead to high variance in gradient estimation and unstable learning process.
To address this issue, we present a peer-to-peer online distillation strategy
for RL termed P2PDRL, where multiple workers are each assigned to a different
environment, and exchange knowledge through mutual regularisation based on
Kullback-Leibler divergence. Our experiments on continuous control tasks show
that P2PDRL enables robust learning across a wider randomisation distribution
than baselines, and more robust generalisation to new environments at testing. | [
"cs.LG"
] |
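The mutual regularisation term can be sketched as a KL penalty between each worker's policy and its peers' policies evaluated on that worker's own states (an assumed form, not the authors' implementation):

```python
# Assumed peer-to-peer distillation term: average KL to the other workers' policies.
import torch.nn.functional as F

def mutual_kl_loss(log_probs_self, log_probs_peers, beta=0.1):
    # log_probs_self: (B, A) log action probabilities of this worker's policy;
    # log_probs_peers: list of (B, A) log-probs from peer workers on the same states.
    kl = 0.0
    for lp in log_probs_peers:
        kl = kl + F.kl_div(log_probs_self, lp.detach().exp(), reduction="batchmean")
    return beta * kl / len(log_probs_peers)   # added to each worker's RL objective
```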
There has been an intense recent activity in embedding of very high
dimensional and nonlinear data structures, much of it in the data science and
machine learning literature. We survey this activity in four parts. In the
first part we cover nonlinear methods such as principal curves,
multidimensional scaling, local linear methods, ISOMAP, graph based methods and
kernel based methods. The second part is concerned with topological embedding
methods, in particular mapping topological properties into persistence
diagrams. Another type of data sets with a tremendous growth is very
high-dimensional network data. The task considered in part three is how to
embed such data in a vector space of moderate dimension to make the data
amenable to traditional techniques such as cluster and classification
techniques. The final part of the survey deals with embedding in
$\mathbb{R}^2$, which is visualization. Three methods are presented: $t$-SNE,
UMAP and LargeVis based on methods in parts one, two and three, respectively.
The methods are illustrated and compared on two simulated data sets; one
consisting of a triple of noisy Ranunculoid curves, and one consisting of
networks of increasing complexity and with two types of nodes. | [
"stat.ML",
"cs.LG",
"stat.ME",
"62-02, 62-07, 62H25, 62H30, 94-02, 94C15"
] |
Deep learning natural language processing models often use vector word
embeddings, such as word2vec or GloVe, to represent words. A discrete sequence
of words can be much more easily integrated with downstream neural layers if it
is represented as a sequence of continuous vectors. Also, semantic
relationships between words, learned from a text corpus, can be encoded in the
relative configurations of the embedding vectors. However, storing and
accessing embedding vectors for all words in a dictionary requires large amount
of space, and may strain systems with limited GPU memory. Here, we use
approaches inspired by quantum computing to propose two related methods, {\em
word2ket} and {\em word2ketXS}, for storing word embedding matrix during
training and inference in a highly efficient way. Our approach achieves a
hundred-fold or more reduction in the space required to store the embeddings
with almost no relative drop in accuracy in practical natural language
processing tasks. | [
"cs.LG",
"stat.ML"
] |
Manipulation tasks in daily life, such as pouring water, unfold intentionally
under specialized manipulation contexts. Being able to process contextual
knowledge in these Activities of Daily Living (ADLs) over time can help us
understand manipulation intentions, which are essential for an intelligent
robot to transition smoothly between various manipulation actions. In this
paper, to model the intended concepts of manipulation, we present a vision
dataset under a strictly constrained knowledge domain for both robot and human
manipulations, where manipulation concepts and relations are stored by an
ontology system in a taxonomic manner. Furthermore, we propose a scheme to
generate a combination of visual attentions and an evolving knowledge graph
filled with commonsense knowledge. Our scheme works with real-world camera
streams and fuses an attention-based Vision-Language model with the ontology
system. The experimental results demonstrate that the proposed scheme can
successfully represent the evolution of an intended object manipulation
procedure for both robots and humans. The proposed scheme allows the robot to
mimic human-like intentional behaviors by watching real-time videos. We aim to
develop this scheme further for real-world robot intelligence in Human-Robot
Interaction. | [
"cs.CV",
"cs.RO"
] |
Multi-agent interacting systems are prevalent in the world, from pure
physical systems to complicated social dynamic systems. In many applications,
effective understanding of the situation and accurate trajectory prediction of
interactive agents play a significant role in downstream tasks, such as
decision making and planning. In this paper, we propose a generic trajectory
forecasting framework (named EvolveGraph) with explicit relational structure
recognition and prediction via latent interaction graphs among multiple
heterogeneous, interactive agents. Considering the uncertainty of future
behaviors, the model is designed to provide multi-modal prediction hypotheses.
Since the underlying interactions may evolve even with abrupt changes, and
different modalities of evolution may lead to different outcomes, we address
the necessity of dynamic relational reasoning and adaptively evolving the
interaction graphs. We also introduce a double-stage training pipeline which
not only improves training efficiency and accelerates convergence, but also
enhances model performance. The proposed framework is evaluated on both
synthetic physics simulations and multiple real-world benchmark datasets in
various areas. The experimental results illustrate that our approach achieves
state-of-the-art performance in terms of prediction accuracy. | [
"cs.CV",
"cs.LG",
"cs.MA",
"cs.RO"
] |
Automatic generation of natural language from images has attracted extensive
attention. In this paper, we take one step further to investigate generation of
poetic language (with multiple lines) to an image for automatic poetry
creation. This task involves multiple challenges, including discovering poetic
clues from the image (e.g., hope from green), and generating poems to satisfy
both relevance to the image and poeticness in language level. To solve the
above challenges, we formulate the task of poem generation into two correlated
sub-tasks by multi-adversarial training via policy gradient, through which the
cross-modal relevance and poetic language style can be ensured. To extract
poetic clues from images, we propose to learn a deep coupled visual-poetic
embedding, in which the poetic representation from objects, sentiments and
scenes in an image can be jointly learned. Two discriminative networks are
further introduced to guide the poem generation, including a multi-modal
discriminator and a poem-style discriminator. To facilitate the research, we
have released two poem datasets by human annotators with two distinct
properties: 1) the first human annotated image-to-poem pair dataset (with 8,292
pairs in total), and 2) to-date the largest public English poem corpus dataset
(with 92,265 different poems in total). Extensive experiments are conducted
with 8K images, among which 1.5K images are randomly picked for evaluation. Both
objective and subjective evaluations show superior performance against
state-of-the-art methods for poem generation from images. A Turing test carried
out with over 500 human subjects, among whom 30 evaluators are poetry experts,
demonstrates the effectiveness of our approach. | [
"cs.CV",
"cs.AI",
"cs.MM"
] |
Inspired by recent successes in neural machine translation and image caption
generation, we present an attention-based encoder-decoder model (AED) to
recognize Vietnamese Handwritten Text. The model is composed of two parts: a
DenseNet for extracting invariant features, and a Long Short-Term Memory
network (LSTM) with an attention model incorporated for generating output text
(LSTM decoder), which are connected from the CNN part to the attention model.
The input of the CNN part is a handwritten text image and the target of the
LSTM decoder is the corresponding text of the input image. Our model is trained
end-to-end to predict the text from a given input image since all the parts are
differential components. In the experiment section, we evaluate our proposed
AED model on the VNOnDB-Word and VNOnDB-Line datasets to verify its efficiency.
The experimental results show that our model achieves a word error rate of 12.30%
without using any language model. This result is competitive with the
handwriting recognition system provided by Google in the Vietnamese Online
Handwritten Text Recognition competition. | [
"cs.CV",
"cs.LG"
] |
With the continuous improvement of the performance of object detectors via
advanced model architectures, imbalance problems in the training process have
received more attention. It is a common paradigm in object detection frameworks
to perform multi-scale detection. However, each scale is treated equally during
training. In this paper, we carefully study the objective imbalance of
multi-scale detector training. We argue that the loss in each scale level is
neither equally important nor independent. Different from the existing
solutions of setting multi-task weights, we dynamically optimize the loss
weight of each scale level in the training process. Specifically, we propose an
Adaptive Variance Weighting (AVW) to balance multi-scale loss according to the
statistical variance. Then we develop a novel Reinforcement Learning
Optimization (RLO) to decide the weighting scheme probabilistically during
training. The proposed dynamic methods make better utilization of multi-scale
training loss without extra computational complexity and learnable parameters
for backpropagation. Experiments show that our approaches can consistently
boost the performance of various baseline detectors on the Pascal VOC and MS COCO
benchmarks. | [
"cs.CV"
] |
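One assumed instantiation of variance-based loss weighting (the paper's AVW and RLO schemes are more elaborate) weights each scale level by the recent statistical variance of its loss and normalizes the weights:

```python
# Assumed sketch: weight multi-scale losses by their recent variance.
import torch

def adaptive_variance_weights(loss_history):
    # loss_history: (T, S) tensor of the last T recorded losses per scale level.
    var = loss_history.var(dim=0)                       # statistical variance per scale
    return var / (var.sum() + 1e-8) * var.numel()       # normalized so weights average to 1

history = torch.rand(20, 5)                             # 20 steps, 5 pyramid levels
per_scale_loss = torch.rand(5)
total_loss = (adaptive_variance_weights(history) * per_scale_loss).sum()
```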
The emergence of specialized optimization hardware such as CMOS annealers and
adiabatic quantum computers carries the promise of solving hard combinatorial
optimization problems more efficiently in hardware. Recent work has focused on
formulating different combinatorial optimization problems as Ising models, the
core mathematical abstraction used by a large number of these hardware
platforms, and evaluating the performance of these models when solved on
specialized hardware. An interesting area of application is data mining, where
combinatorial optimization problems underlie many core tasks. In this work, we
focus on consensus clustering (clustering aggregation), an important
combinatorial problem that has received much attention over the last two
decades. We present two Ising models for consensus clustering and evaluate them
using the Fujitsu Digital Annealer, a quantum-inspired CMOS annealer. Our
empirical evaluation shows that our approach outperforms existing techniques
and is a promising direction for future research. | [
"cs.LG",
"math.OC",
"quant-ph",
"stat.ML"
] |
We demonstrate the use of conditional autoregressive generative models (van
den Oord et al., 2016a) over a discrete latent space (van den Oord et al.,
2017b) for forward planning with MCTS. In order to test this method, we
introduce a new environment featuring varying difficulty levels, along with
moving goals and obstacles. The combination of high-quality frame generation
and classical planning approaches nearly matches true environment performance
for our task, demonstrating the usefulness of this method for model-based
planning in dynamic environments. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Reinforcement learning provides a general framework for flexible decision
making and control, but requires extensive data collection for each new task
that an agent needs to learn. In other machine learning fields, such as natural
language processing or computer vision, pre-training on large, previously
collected datasets to bootstrap learning for new tasks has emerged as a
powerful paradigm to reduce data requirements when learning a new task. In this
paper, we ask the following question: how can we enable similarly useful
pre-training for RL agents? We propose a method for pre-training behavioral
priors that can capture complex input-output relationships observed in
successful trials from a wide range of previously seen tasks, and we show how
this learned prior can be used for rapidly learning new tasks without impeding
the RL agent's ability to try out novel behaviors. We demonstrate the
effectiveness of our approach in challenging robotic manipulation domains
involving image observations and sparse reward functions, where our method
outperforms prior works by a substantial margin. | [
"cs.LG",
"cs.RO"
] |
Textures in natural images can be characterized by color, shape, periodicity
of elements within them, and other attributes that can be described using
natural language. In this paper, we study the problem of describing visual
attributes of texture on a novel dataset containing rich descriptions of
textures, and conduct a systematic study of current generative and
discriminative models for grounding language to images on this dataset. We find
that while these models capture some properties of texture, they fail to
capture several compositional properties, such as the colors of dots. We
provide critical analysis of existing models by generating synthetic but
realistic textures with different descriptions. Our dataset also allows us to
train interpretable models and generate language-based explanations of what
discriminative features are learned by deep networks for fine-grained
categorization where texture plays a key role. We present visualizations of
several fine-grained domains and show that texture attributes learned on our
dataset offer improvements over expert-designed attributes on the Caltech-UCSD
Birds dataset. | [
"cs.CV"
] |
Building perceptual systems for robotics which perform well under tight
computational budgets requires novel architectures which rethink the
traditional computer vision pipeline. Modern vision architectures require the
agent to build a summary representation of the entire scene, even if most of
the input is irrelevant to the agent's current goal. In this work, we flip this
paradigm, by introducing EarlyFusion vision models that condition on a goal to
build custom representations for downstream tasks. We show that these goal
specific representations can be learned more quickly, are substantially more
parameter efficient, and more robust than existing attention mechanisms in our
domain. We demonstrate the effectiveness of these methods on a simulated
robotic item retrieval problem that is trained in a fully end-to-end manner via
imitation learning. | [
"cs.CV",
"cs.RO"
] |
This paper presents Pix2Seq, a simple and generic framework for object
detection. Unlike existing approaches that explicitly integrate prior knowledge
about the task, we simply cast object detection as a language modeling task
conditioned on the observed pixel inputs. Object descriptions (e.g., bounding
boxes and class labels) are expressed as sequences of discrete tokens, and we
train a neural net to perceive the image and generate the desired sequence. Our
approach is based mainly on the intuition that if a neural net knows about
where and what the objects are, we just need to teach it how to read them out.
Beyond the use of task-specific data augmentations, our approach makes minimal
assumptions about the task, yet it achieves competitive results on the
challenging COCO dataset, compared to highly specialized and well optimized
detection algorithms. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
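The core of the tokenization can be sketched in a few lines: quantize box coordinates into a fixed vocabulary of bins and append the class label as one more token (the bin count and vocabulary layout below are assumptions for illustration):

```python
# Assumed sketch of casting a bounding box plus class label into discrete tokens.
def box_to_tokens(box, label, img_w, img_h, n_bins=1000):
    # box: (x_min, y_min, x_max, y_max) in pixels; label: integer class id.
    x0, y0, x1, y1 = box
    def quantize(v, size):
        return min(int(v / size * n_bins), n_bins - 1)
    coord_tokens = [quantize(x0, img_w), quantize(y0, img_h),
                    quantize(x1, img_w), quantize(y1, img_h)]
    class_token = n_bins + label               # class ids placed after the coordinate bins
    return coord_tokens + [class_token]

print(box_to_tokens((48.0, 32.0, 200.0, 180.0), label=3, img_w=640, img_h=480))
```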
Ensemble learning combines several individual models to obtain better
generalization performance. Currently, deep learning models with a multilayer
processing architecture are showing better performance compared to
shallow or traditional classification models. Deep ensemble learning models
combine the advantages of both the deep learning models as well as the ensemble
learning such that the final model has better generalization performance. This
paper reviews state-of-the-art deep ensemble models and hence serves as an
extensive summary for researchers. The ensemble models are broadly
categorised into bagging, boosting and stacking, negative
correlation based deep ensemble models, explicit/implicit ensembles,
homogeneous/heterogeneous ensembles, decision fusion strategies, unsupervised,
semi-supervised, reinforcement learning and online/incremental, multilabel
based deep ensemble models. Application of deep ensemble models in different
domains is also briefly discussed. Finally, we conclude this paper with some
future recommendations and research directions. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Knowledge distillation (KD) is generally considered as a technique for
performing model compression and learned-label smoothing. However, in this
paper, we study and investigate the KD approach from a new perspective: we
study its efficacy in training a deeper network without any residual
connections. We find that in most cases, non-residual student networks
perform equally well as or better than their residual versions trained on raw data
without KD (baseline network). Surprisingly, in some cases, they surpass the
accuracy of baseline networks even with the inferior teachers. After a certain
depth of non-residual student network, the accuracy drop, coming from the
removal of residual connections, is substantial, and training with KD boosts
the accuracy of the student to a great extent; however, it does not fully
recover the accuracy drop. Furthermore, we observe that the conventional
teacher-student view of KD is incomplete and does not adequately explain our
findings. We propose a novel interpretation of KD with the Trainee-Mentor
hypothesis, which provides a holistic view of KD. We also present two
viewpoints, loss landscape, and feature reuse, to explain the interplay between
residual connections and KD. We substantiate our claims through extensive
experiments on residual networks. | [
"cs.CV",
"I.5.1; I.5.1"
] |
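For reference, a minimal sketch of the standard knowledge distillation objective that the abstract above builds on: the student matches the teacher's temperature-softened outputs in addition to the hard labels. The temperature and mixing weight used here are illustrative, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Blend of soft-target distillation and ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

loss = kd_loss(torch.randn(8, 10), torch.randn(8, 10), torch.randint(0, 10, (8,)))
print(loss.item())
```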
As an efficient type of unsupervised learning method, clustering algorithms
have been widely used in data mining and knowledge discovery with noticeable
advantages. However, density-peak-based clustering algorithms have limited
effectiveness on data with varying density distribution (VDD), equilibrium
distribution (ED), and multiple domain-density maximums (MDDM), leading to
problems of sparse cluster loss and cluster fragmentation. To address these
problems, we propose a Domain-Adaptive Density Clustering (DADC) algorithm,
which consists of three steps: domain-adaptive density measurement, cluster
center self-identification, and cluster self-ensemble. For data with VDD
features, clusters in sparse regions are often neglected by using uniform
density peak thresholds, which results in the loss of sparse clusters. We
define a domain-adaptive density measurement method based on K-Nearest
Neighbors (KNN) to adaptively detect the density peaks of different density
regions. We treat each data point and its KNN neighborhood as a subgroup to
better reflect its density distribution in a domain view. In addition, for data
with ED or MDDM features, a large number of density peaks with similar values
can be identified, which results in cluster fragmentation. We propose a cluster
center self-identification and cluster self-ensemble method to automatically
extract the initial cluster centers and merge the fragmented clusters.
Experimental results demonstrate that, compared with other algorithms, the
proposed DADC algorithm obtains more reasonable clustering results on data with
VDD, ED and MDDM features. Benefiting from few parameter requirements and its
non-iterative nature, DADC achieves low computational complexity and is
suitable for large-scale data clustering. | [
"cs.LG",
"stat.ML"
] |
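A minimal sketch (NumPy, illustrative only) of a KNN-based local density of the kind the domain-adaptive measurement step above relies on: each point's density is estimated from its K nearest neighbours rather than from a global cutoff, so sparse and dense regions are scored on comparable footing. The exact DADC formula is not reproduced here.

```python
import numpy as np

def knn_density(points, k=5):
    """Return a per-point density score based on the mean distance to the
    k nearest neighbours (larger score = denser neighbourhood)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)            # exclude self-distance
    knn_dists = np.sort(dists, axis=1)[:, :k]
    return 1.0 / (knn_dists.mean(axis=1) + 1e-12)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),    # dense cluster
               rng.normal(5, 1.5, (50, 2))])   # sparse cluster
print(knn_density(X, k=5)[:3])
```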
We introduce a conceptually simple yet effective model for self-supervised
representation learning with graph data. It follows the previous methods that
generate two views of an input graph through data augmentation. However, unlike
contrastive methods that focus on instance-level discrimination, we optimize an
innovative feature-level objective inspired by classical Canonical Correlation
Analysis. Compared with other works, our approach requires no parameterized
mutual information estimator, additional projector, asymmetric structures, or,
most importantly, negative samples, which can be costly. We show
that the new objective essentially 1) aims at discarding augmentation-variant
information by learning invariant representations, and 2) can prevent
degenerated solutions by decorrelating features in different dimensions. Our
theoretical analysis further provides an understanding of the new objective,
which can be equivalently seen as an instantiation of the Information
Bottleneck Principle under the self-supervised setting. Despite its simplicity,
our method performs competitively on seven public graph datasets. | [
"cs.LG"
] |
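A minimal sketch of a feature-level objective in the spirit described above: standardize the embeddings of the two augmented views, pull matched features together (invariance), and decorrelate different dimensions to prevent degenerate solutions. The trade-off weight and normalization details are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def feature_level_ssl_loss(z1, z2, lam=1e-3):
    """z1, z2: (N, D) embeddings of two augmented views of the same graph."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    invariance = ((z1 - z2) ** 2).sum() / n           # match the two views
    c1, c2 = (z1.T @ z1) / n, (z2.T @ z2) / n         # feature covariance
    decorrelation = ((c1 - torch.eye(d)) ** 2).sum() + ((c2 - torch.eye(d)) ** 2).sum()
    return invariance + lam * decorrelation           # no negative samples needed

print(feature_level_ssl_loss(torch.randn(128, 32), torch.randn(128, 32)))
```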
Scene graph generation aims to provide a semantic and structural description
of an image, denoting the objects (with nodes) and their relationships (with
edges). The best-performing works to date are based on exploiting the context
surrounding objects or relations, e.g., by passing information among objects.
In these approaches, transforming the representation of source objects is a
critical process for extracting information for use by target objects. In this
work, we argue that a source object should give what the target object needs,
giving different objects different information rather than contributing common
information to all targets. To achieve this goal, we propose a Target-Tailored
Source-Transformation (TTST) method to efficiently propagate information among
object proposals and relations. In particular, for a source object proposal
that will contribute information to other target objects, we transform the
source object feature to the target object feature domain by simultaneously
taking both the source and the target into account. We further explore more
powerful representations by integrating language priors with the visual context
in the transformation for scene graph generation. By doing so, the target
object is able to extract target-specific information from the source object
and source relation accordingly to refine its representation. Our framework is
validated on the Visual Genome benchmark and demonstrates state-of-the-art
performance for scene graph generation. The experimental results show that the
performance of object detection and visual relationship detection are mutually
promoted by our method. | [
"cs.CV"
] |
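A minimal sketch (PyTorch, hypothetical dimensions and a plain MLP standing in for the paper's transformation module) of the target-tailored idea above: the message sent from a source proposal is computed from both the source and the target features, so different targets receive different information from the same source.

```python
import torch
import torch.nn as nn

class TargetTailoredTransform(nn.Module):
    """Transform a source feature into the target's feature domain,
    conditioned on the target it is being sent to."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, src, tgt):
        # src, tgt: (N_pairs, dim); one row per (source, target) pair
        return self.mlp(torch.cat([src, tgt], dim=-1))

t = TargetTailoredTransform()
msg = t(torch.randn(10, 256), torch.randn(10, 256))
print(msg.shape)  # torch.Size([10, 256])
```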
Deep neural networks have demonstrated their superior performance in almost
every Natural Language Processing task; however, their increasing complexity
raises concerns. In particular, these networks require expensive computational
hardware, and the training budget is a concern for many. Even for a trained
network, the inference phase can be too demanding for resource-constrained
devices, thus limiting its applicability. The state-of-the-art transformer
models are a vivid example. Simplifying the computations performed by a network
is one way of relaxing the complexity requirements. In this paper, we propose
an end-to-end binarized neural network architecture for the intent
classification task. In order to fully utilize the potential of end-to-end
binarization, both the input representations (vector embeddings of token
statistics) and the classifier are binarized. We demonstrate the efficiency of
such an architecture on the intent classification of short texts over three
datasets and on text classification with a larger dataset. The proposed
architecture achieves results comparable to the state-of-the-art on standard
intent classification datasets while using ~20-40% less memory and training
time. Furthermore, the individual components of the architecture, such as
binarized vector embeddings of documents or binarized classifiers, can be used
separately in architectures that are not necessarily fully binary. | [
"cs.LG",
"cs.CL"
] |
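A minimal sketch of a binarized classifier head of the kind described above, using sign binarization with a straight-through estimator so the binary weights remain trainable. The layer sizes and the specific binarization scheme are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)             # weights become {-1, +1}
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                  # straight-through gradient

class BinaryLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
    def forward(self, x):
        return x @ BinarizeSTE.apply(self.weight).t()

clf = BinaryLinear(1024, 7)              # binarized intent classifier head
x = torch.sign(torch.randn(4, 1024))     # binarized document embeddings
print(clf(x).shape)                      # torch.Size([4, 7])
```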
We introduce Markov Random Geometric Graphs (MRGGs), a growth model for
temporal dynamic networks. It is based on a Markovian latent space dynamic:
consecutive latent points are sampled on the Euclidean Sphere using an unknown
Markov kernel; and two nodes are connected with a probability depending on an
unknown function of their latent geodesic distance. More precisely, at each
time stamp k we add a latent point X_k sampled by jumping from the previous one
X_{k-1} in a uniformly chosen direction Y_k and with a length r_k drawn from an
unknown distribution called the latitude function. The connection probabilities
between each pair of nodes are equal to the envelope function of the distance
between these two latent points. We provide theoretical guarantees for the
non-parametric estimation of the latitude and the envelope functions. We
propose an efficient algorithm that achieves those non-parametric estimation
tasks based on an ad hoc Hierarchical Agglomerative Clustering approach, and we
deploy this analysis on a real dataset given by the exchange of messages on a
social network. | [
"cs.LG",
"cs.SI",
"math.ST",
"stat.ML",
"stat.TH"
] |
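A minimal sketch of sampling a graph from the generative model described above: latent points jump on the unit sphere according to a Markov kernel, and edges are drawn with a probability that depends only on the latent geodesic distance. The latitude distribution and envelope function below are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_jump(x, r):
    """Move from x on the unit sphere by geodesic angle r in a uniform direction."""
    v = rng.normal(size=x.shape)
    v -= (v @ x) * x                     # project onto the tangent space at x
    v /= np.linalg.norm(v)
    return np.cos(r) * x + np.sin(r) * v

n, latent = 200, [np.array([0.0, 0.0, 1.0])]
for _ in range(n - 1):
    r = rng.beta(2, 5) * np.pi           # illustrative latitude distribution
    latent.append(random_jump(latent[-1], r))

envelope = lambda d: np.exp(-2.0 * d)    # illustrative envelope function
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        d = np.arccos(np.clip(latent[i] @ latent[j], -1.0, 1.0))
        adj[i, j] = adj[j, i] = rng.random() < envelope(d)
print(adj.sum() // 2, "edges")
```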