text | label
---|---|
We propose a framework for top-down salient object detection that
incorporates a tightly coupled image classification module. The classifier is
trained on novel category-aware sparse codes computed on object dictionaries
used for saliency modeling. A misclassification indicates that the
corresponding saliency model is inaccurate. Hence, the classifier selects
images for which the saliency models need to be updated. The category-aware
sparse coding produces better image classification accuracy than
conventional sparse coding, with reduced computational complexity. A
saliency-weighted max-pooling is proposed to improve image classification,
which is further used to refine the saliency maps. Experimental results on
Graz-02 and PASCAL VOC-07 datasets demonstrate the effectiveness of salient
object detection. Although the role of the classifier is to support salient
object detection, we evaluate its performance in image classification and also
illustrate the utility of thresholded saliency maps for image segmentation. | [
"cs.CV"
] |
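As a rough illustration of what a saliency-weighted max-pooling could look like, here is a minimal PyTorch sketch; the shapes, names, and weighting scheme are assumptions for illustration, not the paper's implementation:

```python
import torch

def saliency_weighted_max_pool(codes, saliency):
    """codes: (N, D) sparse codes for N image patches;
    saliency: (N,) saliency score per patch in [0, 1]."""
    weighted = codes * saliency.unsqueeze(1)  # down-weight non-salient patches
    pooled, _ = weighted.max(dim=0)           # max over patches, per dimension
    return pooled                             # (D,) image-level descriptor

codes = torch.rand(100, 512)      # toy sparse codes (hypothetical sizes)
saliency = torch.rand(100)        # toy saliency values
descriptor = saliency_weighted_max_pool(codes, saliency)
print(descriptor.shape)           # torch.Size([512])
```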
We study best-arm identification with fixed confidence in bandit models with
graph smoothness constraint. We provide and analyze an efficient gradient
ascent algorithm to compute the sample complexity of this problem as a solution
of a non-smooth max-min problem (providing in passing a simplified analysis for
the unconstrained case). Building on this algorithm, we propose an
asymptotically optimal strategy. We furthermore illustrate by numerical
experiments both the strategy's efficiency and the impact of the smoothness
constraint on the sample complexity. Best Arm Identification (BAI) is an
important challenge in many applications ranging from parameter tuning to
clinical trials. It is now very well understood in vanilla bandit models, but
real-world problems typically involve some dependency between arms that
requires more involved models. Assuming a graph structure on the arms is an
elegant practical way to encompass this phenomenon, but this had been done so
far only for regret minimization. Addressing BAI with graph constraints
involves delicate optimization problems for which the present paper offers a
solution. | [
"cs.LG",
"stat.ML"
] |
The contribution of this paper is two-fold. First, we present ProbCast - a
novel probabilistic model for multivariate time-series forecasting. We employ a
conditional GAN framework to train our model with adversarial training. Second,
we propose a framework that lets us transform a deterministic model into a
probabilistic one with improved performance. The motivation of the framework is
to either transform existing highly accurate point forecast models to their
probabilistic counterparts, or to train GANs stably by carefully and
efficiently selecting the architecture of the GAN's components. We conduct
experiments on two publicly available datasets, namely an electricity
consumption dataset and an exchange-rate dataset. The results of the experiments demonstrate
the remarkable performance of our model as well as the successful application
of our proposed framework. | [
"cs.LG",
"eess.SP"
] |
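A minimal sketch of a conditional-GAN training step for probabilistic forecasting in the spirit of ProbCast; the toy MLP architectures, window sizes, and losses are assumptions, not the paper's design. Sampling `z` repeatedly at test time would yield a forecast distribution:

```python
import torch, torch.nn as nn

past_len, horizon, noise_dim = 24, 6, 8  # hypothetical window sizes
G = nn.Sequential(nn.Linear(past_len + noise_dim, 64), nn.ReLU(), nn.Linear(64, horizon))
D = nn.Sequential(nn.Linear(past_len + horizon, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

past = torch.randn(32, past_len)    # conditioning window (toy data)
future = torch.randn(32, horizon)   # ground-truth continuation

# Discriminator step: real (past, future) pairs vs. generated ones.
z = torch.randn(32, noise_dim)
fake = G(torch.cat([past, z], dim=1))
d_loss = bce(D(torch.cat([past, future], 1)), torch.ones(32, 1)) + \
         bce(D(torch.cat([past, fake.detach()], 1)), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator on the conditional pair.
g_loss = bce(D(torch.cat([past, fake], 1)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```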
One of the challenges for multi-agent reinforcement learning (MARL) is
designing efficient learning algorithms for a large system in which each agent
has only limited or partial information of the entire system. In this system,
it is desirable to learn policies of a decentralized type. A recent and
promising paradigm to analyze such decentralized MARL is to take network
structures into consideration. While exciting progress has been made to analyze
decentralized MARL with the network of agents, often found in social networks
and team video games, little is known theoretically for decentralized MARL with
the network of states, frequently used for modeling self-driving vehicles,
ride-sharing, and data and traffic routing.
This paper proposes a framework called localized training and decentralized
execution to study MARL with network of states, with homogeneous (a.k.a.
mean-field type) agents. Localized training means that agents only need to
collect local information in their neighboring states during the training
phase; decentralized execution implies that, after the training stage, agents
can execute the learned decentralized policies, which only requires knowledge
of the agents' current states. The key idea is to utilize the homogeneity of
agents and regroup them according to their states, leading to the formulation
of a networked Markov decision process with teams of agents, enabling the update of
the Q-function in a localized fashion. In order to design an efficient and
scalable reinforcement learning algorithm under such a framework, we adopt the
actor-critic approach with over-parameterized neural networks, and establish
the convergence and sample complexity of our algorithm, which is shown to be
scalable with respect to the number of both agents and states. | [
"cs.LG",
"math.OC"
] |
In recent years, deep learning (DL) methods have become powerful tools for
biomedical image segmentation. However, high annotation efforts and costs are
commonly needed to acquire sufficient biomedical training data for DL models.
To alleviate the burden of manual annotation, in this paper, we propose a new
weakly supervised DL approach for biomedical image segmentation using box
annotations only. First, we develop a method to combine graph search (GS) and DL
to generate fine object masks from box annotation, in which DL uses box
annotation to compute a rough segmentation for GS and then GS is applied to
locate the optimal object boundaries. During the mask generation process, we
carefully utilize information from box annotation to filter out potential
errors, and then use the generated masks to train an accurate DL segmentation
network. Extensive experiments on gland segmentation in histology images, lymph
node segmentation in ultrasound images, and fungus segmentation in electron
microscopy images show that our approach attains superior performance over the
best known state-of-the-art weakly supervised DL method and is able to achieve
(1) nearly the same accuracy compared to fully supervised DL methods with far
less annotation effort, (2) significantly better results with similar
annotation time, and (3) robust performance in various applications. | [
"cs.CV"
] |
Predicting not only the target but also an accurate measure of uncertainty is
important for many machine learning applications and in particular
safety-critical ones. In this work we study the calibration of uncertainty
prediction for regression tasks which often arise in real-world systems. We
show that the existing definition for calibration of a regression uncertainty
[Kuleshov et al. 2018] has severe limitations in distinguishing informative
from non-informative uncertainty predictions. We propose a new definition that
escapes this caveat and an evaluation method using a simple histogram-based
approach. Our method clusters examples with similar uncertainty prediction and
compares the prediction with the empirical uncertainty on these examples. We
also propose a simple, scaling-based calibration method that performs as well
as much more complex ones. We show results on both a synthetic, controlled
problem and on the object detection bounding-box regression task using the COCO
and KITTI datasets. | [
"cs.LG",
"stat.ML"
] |
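A minimal sketch of the histogram-based evaluation idea: group examples by predicted uncertainty and compare the prediction with the empirical error within each group. The binning scheme and synthetic data are assumptions, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_pred = rng.uniform(0.1, 2.0, size=5000)          # predicted std per example
errors = rng.normal(0.0, sigma_pred)                   # residuals, here ideally ~ N(0, sigma^2)

bins = np.quantile(sigma_pred, np.linspace(0, 1, 11))  # 10 equal-mass bins
idx = np.clip(np.digitize(sigma_pred, bins) - 1, 0, 9)
for b in range(10):
    mask = idx == b
    pred = sigma_pred[mask].mean()                     # average predicted uncertainty
    emp = np.sqrt(np.mean(errors[mask] ** 2))          # empirical RMSE in the bin
    print(f"bin {b}: predicted={pred:.2f} empirical={emp:.2f}")
```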
Epipolar constraints are at the core of feature matching and depth estimation
in current multi-person multi-camera 3D human pose estimation methods. Despite
the satisfactory performance of this formulation in sparser crowd scenes, its
effectiveness is frequently challenged under denser crowd circumstances mainly
due to two sources of ambiguity. The first is the mismatch of human joints
resulting from the simple cues provided by the Euclidean distances between
joints and epipolar lines. The second is the lack of robustness from the naive
formulation of the problem as a least squares minimization. In this paper, we
depart from the multi-person 3D pose estimation formulation, and instead
reformulate it as crowd pose estimation. Our method consists of two key
components: a graph model for fast cross-view matching, and a maximum a
posteriori (MAP) estimator for the reconstruction of the 3D human poses. We
demonstrate the effectiveness and superiority of our proposed method on four
benchmark datasets. | [
"cs.CV"
] |
Generating realistic biometric images has been an interesting and, at the
same time, challenging problem. Classical statistical models fail to generate
realistic-looking fingerprint images, as they are not powerful enough to
capture the complicated texture representation in fingerprint images. In this
work, we present a machine learning framework based on generative adversarial
networks (GAN), which is able to generate fingerprint images sampled from a
prior distribution (learned from a set of training images). We also add a
suitable regularization term to the loss function, to impose the connectivity
of generated fingerprint images. This is highly desirable for fingerprints, as
the lines in each finger are usually connected. We apply this framework to two
popular fingerprint databases, and generate images which look very realistic,
and similar to the samples in those databases. Through experimental results, we
show that the generated fingerprint images have a good diversity, and are able
to capture different parts of the prior distribution. We also evaluate the
Frechet Inception distance (FID) of our proposed model, and show that our model
is able to achieve good quantitative performance in terms of this score. | [
"cs.CV",
"cs.LG"
] |
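For illustration, a sketch of how an image-space regularization term can be added to a generator loss. The paper's exact connectivity term is not reproduced here; a total-variation penalty is shown only as a generic stand-in, and the weight 0.1 is an assumption:

```python
import torch

def total_variation(img):
    """img: (B, 1, H, W) batch of generated fingerprint images."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

fake = torch.rand(4, 1, 64, 64, requires_grad=True)   # stand-in for G's output
adv_loss = torch.zeros(())                 # placeholder for the usual GAN loss
loss = adv_loss + 0.1 * total_variation(fake)  # lambda = 0.1 is an assumption
loss.backward()
```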
Deep generative models are challenging the classical methods in the field of
anomaly detection nowadays. Every new method provides evidence of outperforming
its predecessors, often with contradictory results. The objective of this
comparison is twofold: to compare anomaly detection methods of various
paradigms with focus on deep generative models, and identification of sources
of variability that can yield different results. The methods were compared on
popular tabular and image datasets. We identified the main sources of
variability to be the experimental conditions: i) the type of data set (tabular
or image) and the nature of anomalies (statistical or semantic), and ii) the strategy
of selection of hyperparameters, especially the number of available anomalies
in the validation set. Different methods perform best in different contexts,
i.e., for different combinations of experimental conditions and computational
time. This explains the variability of the previous results and
highlights the importance of careful specification of the context in the
publication of a new method. All our code and results are available for
download. | [
"cs.LG"
] |
Recently, Vision Transformers (ViTs) have shown competitive performance on
image recognition while requiring less vision-specific inductive biases. In
this paper, we investigate if such observation can be extended to image
generation. To this end, we integrate the ViT architecture into generative
adversarial networks (GANs). We observe that existing regularization methods
for GANs interact poorly with self-attention, causing serious instability
during training. To resolve this issue, we introduce novel regularization
techniques for training GANs with ViTs. Empirically, our approach, named
ViTGAN, achieves comparable performance to state-of-the-art CNN-based StyleGAN2
on CIFAR-10, CelebA, and LSUN bedroom datasets. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Deep convolutional neural networks have attracted much attention in large-scale
visual classification tasks, and achieve significant performance improvements
compared to traditional visual analysis methods. In this paper, we explore many
kinds of deep convolutional neural network architectures for a large-scale product
recognition task, which involves heavily class-imbalanced and noisily labeled
data, making it more challenging. Extensive experiments show that PNASNet achieves
the best performance among a variety of convolutional architectures. Together with
ensemble technology and negative learning loss for noisy labeled data, we
further improve the model performance on online test data. Finally, our
proposed method achieves 0.1515 mean top-1 error on online test data. | [
"cs.CV"
] |
Machine learning has been widely adopted for medical image analysis in recent
years given its promising performance in image segmentation and classification
tasks. As a data-driven science, the success of machine learning, in particular
supervised learning, largely depends on the availability of manually annotated
datasets. For medical imaging applications, such annotated datasets are not
easy to acquire. It takes a substantial amount of time and resources to curate
an annotated medical image set. In this paper, we propose an efficient
annotation framework for brain tumour images that is able to suggest
informative sample images for human experts to annotate. Our experiments show
that training a segmentation model with only 19% suggestively annotated patient
scans from the BraTS 2019 dataset can achieve performance comparable to training
a model on the full dataset for the whole tumour segmentation task. This demonstrates
a promising way to save manual annotation cost and improve data efficiency in
medical imaging applications. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We address the problem of text-guided video temporal grounding, which aims to
identify the time interval of a certain event based on a natural language
description. Different from most existing methods that only consider RGB images
as visual features, we propose a multi-modal framework to extract complementary
information from videos. Specifically, we adopt RGB images for appearance,
optical flow for motion, and depth maps for image structure. While RGB images
provide abundant visual cues of a certain event, the performance may be affected
by background clutters. Therefore, we use optical flow to focus on large motion
and depth maps to infer the scene configuration when the action is related to
objects recognizable with their shapes. To integrate the three modalities more
effectively and enable inter-modal learning, we design a dynamic fusion scheme
with transformers to model the interactions between modalities. Furthermore, we
apply intra-modal self-supervised learning to enhance feature representations
across videos for each modality, which also facilitates multi-modal learning.
We conduct extensive experiments on the Charades-STA and ActivityNet Captions
datasets, and show that the proposed method performs favorably against
state-of-the-art approaches. | [
"cs.CV"
] |
Animals excel at adapting their intentions, attention, and actions to the
environment, making them remarkably efficient at interacting with a rich,
unpredictable and ever-changing external world, a property that intelligent
machines currently lack. Such an adaptation property relies heavily on cellular
neuromodulation, the biological mechanism that dynamically controls intrinsic
properties of neurons and their response to external stimuli in a
context-dependent manner. In this paper, we take inspiration from cellular
neuromodulation to construct a new deep neural network architecture that is
specifically designed to learn adaptive behaviours. The network adaptation
capabilities are tested on navigation benchmarks in a meta-reinforcement
learning context and compared with state-of-the-art approaches. Results show
that neuromodulation is capable of adapting an agent to different tasks and
that neuromodulation-based approaches provide a promising way of improving
adaptation of artificial systems. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
The need for large annotated image datasets for training Convolutional Neural
Networks (CNNs) has been a significant impediment for their adoption in
computer vision applications. We show that with transfer learning an effective
object detector can be trained almost entirely on synthetically rendered
datasets. We apply this strategy for detecting packaged food products
clustered in refrigerator scenes. Our CNN trained only with 4000 synthetic
images achieves mean average precision (mAP) of 24 on a test set with 55
distinct products as objects of interest and 17 distractor objects. A further
increase of 12% in the mAP is obtained by adding only 400 real images to these
4000 synthetic images in the training set. A high degree of photorealism in the
synthetic images was not essential in achieving this performance. We analyze
factors like training data set size and 3D model dictionary size for their
influence on detection performance. Additionally, training strategies like
fine-tuning with selected layers and early stopping which affect transfer
learning from synthetic scenes to real scenes are explored. Training CNNs with
synthetic datasets is a novel application of high-performance computing and a
promising approach for object detection applications in domains where there is
a dearth of large annotated image data. | [
"cs.CV"
] |
We propose a novel coupled mappings method for low resolution face recognition
using deep convolutional neural networks (DCNNs). The proposed architecture
consists of two branches of DCNNs to map the high and low resolution face
images into a common space with nonlinear transformations. The branch
corresponding to transformation of high resolution images consists of 14 layers
and the other branch which maps the low resolution face images to the common
space includes a 5-layer super-resolution network connected to a 14-layer
network. The distance between the features of corresponding high and low
resolution images is backpropagated to train the networks. Our proposed method
is evaluated on the FERET data set and compared with state-of-the-art competing
methods. Our extensive experimental results show that the proposed method
significantly improves the recognition performance especially for very low
resolution probe face images (11.4% improvement in recognition accuracy).
Furthermore, it can reconstruct a high resolution image from its corresponding
low resolution probe image which is comparable with state-of-the-art
super-resolution methods in terms of visual quality. | [
"cs.CV"
] |
Producing manual, pixel-accurate, image segmentation labels is tedious and
time-consuming. This is often a rate-limiting factor when large amounts of
labeled images are required, such as for training deep convolutional networks
for instrument-background segmentation in surgical scenes. No large datasets
comparable to industry standards in the computer vision community are available
for this task. To circumvent this problem, we propose to automate the creation
of a realistic training dataset by exploiting techniques stemming from special
effects and harnessing them to target training performance rather than visual
appeal. Foreground data is captured by placing sample surgical instruments over
a chroma key (a.k.a. green screen) in a controlled environment, thereby making
extraction of the relevant image segment straightforward. Multiple lighting
conditions and viewpoints can be captured and introduced in the simulation by
moving the instruments and camera and modulating the light source. Background
data is captured by collecting videos that do not contain instruments. In the
absence of pre-existing instrument-free background videos, minimal labeling
effort is required, just to select frames that do not contain surgical
instruments from videos of surgical interventions freely available online. We
compare different methods to blend instruments over tissue and propose a novel
data augmentation approach that takes advantage of the plurality of options. We
show that by training a vanilla U-Net on semi-synthetic data only and applying
a simple post-processing, we are able to match the results of the same network
trained on a publicly available manually labeled real dataset. | [
"cs.CV"
] |
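A toy sketch of the chroma-key extraction and compositing step described above; the threshold values and synthetic frames are assumptions, and a real pipeline would use a proper keying algorithm plus the paper's blending strategies:

```python
import numpy as np

def chroma_key_mask(img, g_min=120, dominance=40):
    """img: (H, W, 3) uint8 RGB frame shot over a green screen.
    Returns a boolean foreground mask (True = instrument pixel)."""
    r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
    background = (g > g_min) & (g - r > dominance) & (g - b > dominance)
    return ~background

def composite(fg, bg, mask):
    out = bg.copy()
    out[mask] = fg[mask]          # paste instrument pixels over tissue background
    return out

fg = np.zeros((240, 320, 3), np.uint8); fg[:] = (0, 200, 0)   # mostly green screen
fg[100:140, 100:220] = (180, 180, 190)                        # fake instrument
bg = np.full((240, 320, 3), (120, 60, 60), np.uint8)          # fake tissue frame
mask = chroma_key_mask(fg)
semi_synthetic = composite(fg, bg, mask)
label = mask.astype(np.uint8)     # the segmentation label comes for free
```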
In this paper we introduce the DMR -- a prototype-based method and network
architecture for deep learning which uses decision tree (DT)-based
inference and synthetic data to balance the classes. It builds upon the
recently introduced xDNN method addressing more complex multi-class problems,
specifically when classes are highly imbalanced. DMR moves away from a direct
decision based on all classes towards a layered DT of pair-wise class
comparisons. In addition, it forces the prototypes to be balanced between
classes regardless of possible class imbalances of the training data. It has
two novel mechanisms, namely i) using a DT to determine the winning class
label, and ii) balancing the classes by synthesizing data around the prototypes
determined from the available training data. As a result, we improved
significantly the performance of the resulting fully explainable DNN as
evidenced by the best reported result on the well-known benchmark problem
Caltech-101 surpassing our own recently published "world record". Furthermore,
we also achieved another "world record" for another very hard benchmark
problem, namely Caltech-256, and surpassed the results of other
approaches on the Faces-1999 problem. In summary, we propose a new approach
specifically advantageous for imbalanced multi-class problems that achieved two
world records on well known hard benchmark problems and the best result on
another problem in terms of accuracy. Moreover, DMR offers full explainability,
does not require GPUs and can continue to learn from new data by adding new
prototypes while preserving the previous ones, without requiring full retraining. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We propose a unified game-theoretical framework to perform classification and
conditional image generation given limited supervision. It is formulated as a
three-player minimax game consisting of a generator, a classifier and a
discriminator, and therefore is referred to as Triple Generative Adversarial
Network (Triple-GAN). The generator and the classifier characterize the
conditional distributions between images and labels to perform conditional
generation and classification, respectively. The discriminator solely focuses
on identifying fake image-label pairs. Under a nonparametric assumption, we
prove the unique equilibrium of the game is that the distributions
characterized by the generator and the classifier converge to the data
distribution. As a byproduct of the three-player mechanism, Triple-GAN is
flexible to incorporate different semi-supervised classifiers and GAN
architectures. We evaluate Triple-GAN in two challenging settings, namely,
semi-supervised learning and the extreme low data regime. In both settings,
Triple-GAN can achieve excellent classification results and generate meaningful
samples in a specific class simultaneously. In particular, using a commonly
adopted 13-layer CNN classifier, Triple-GAN substantially outperforms extensive
semi-supervised learning methods on more than 10 benchmarks, whether or not
data augmentation is applied. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
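A structural sketch of the three-player objective: the discriminator judges image-label pairs, while the generator and classifier supply the two kinds of fake pairs. All networks are toy linear stand-ins and the 0.5 weights are assumptions, not the exact Triple-GAN losses:

```python
import torch, torch.nn as nn

x_dim, n_class, z_dim = 16, 4, 8          # hypothetical toy sizes
G = nn.Linear(z_dim + n_class, x_dim)     # stands in for p_g(x|y)
C = nn.Linear(x_dim, n_class)             # stands in for p_c(y|x)
D = nn.Linear(x_dim + n_class, 1)         # judges image-label pairs only
bce = nn.BCEWithLogitsLoss()

def pair(x, y_onehot):
    return torch.cat([x, y_onehot], dim=1)

x_real = torch.randn(32, x_dim)
y_real = torch.eye(n_class)[torch.randint(0, n_class, (32,))]

# Fake pair 1: generated image with a real label.
z = torch.randn(32, z_dim)
x_fake = G(torch.cat([z, y_real], 1))
# Fake pair 2: real (unlabeled) image with a predicted label.
y_fake = torch.softmax(C(x_real), dim=1)

d_loss = bce(D(pair(x_real, y_real)), torch.ones(32, 1)) \
       + 0.5 * bce(D(pair(x_fake, y_real)), torch.zeros(32, 1)) \
       + 0.5 * bce(D(pair(x_real, y_fake)), torch.zeros(32, 1))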
2D image-based virtual try-on has attracted increased attention from the
multimedia and computer vision communities. However, most of the existing
image-based virtual try-on methods directly put both person and the in-shop
clothing representations together, without considering the mutual correlation
between them. What is more, the long-range information, which is crucial for
generating globally consistent results, is also hard to establish via the
regular convolution operation. To alleviate these two problems, in this paper
we propose a novel two-stage Cloth Interactive Transformer (CIT) for virtual
try-on. In the first stage, we design a CIT matching block, aiming to perform a
learnable thin-plate spline transformation that can capture more reasonable
long-range relation. As a result, the warped in-shop clothing looks more
natural. In the second stage, we propose a novel CIT reasoning block for
establishing the global mutual interactive dependence. Based on this mutual
dependence, the significant region within the input data can be highlighted,
and consequently, the try-on results can become more realistic. Extensive
experiments on a public fashion dataset demonstrate that our CIT can achieve
the new state-of-the-art virtual try-on performance both qualitatively and
quantitatively. The source code and trained models are available at
https://github.com/Amazingren/CIT. | [
"cs.CV"
] |
We put forward a novel learning methodology for ensembles of decision trees
based on a genetic algorithm which is able to train a decision tree for
maximizing both its accuracy and its robustness to adversarial perturbations.
This learning algorithm internally leverages a complete formal verification
technique for robustness properties of decision trees based on abstract
interpretation, a well known static program analysis technique. We implemented
this genetic adversarial training algorithm in a tool called Meta-Silvae (MS)
and we experimentally evaluated it on some reference datasets used in
adversarial training. The experimental results show that MS is able to train
robust models that compete with, and often improve on, the current
state-of-the-art in adversarial training of decision trees, while yielding much
more compact, and therefore more interpretable and efficient, tree models. | [
"cs.LG"
] |
Automatic image segmentation is crucial for tumor detection in medical image
processing. In general, manual and semi-automatic segmentation techniques
require more time and knowledge. Although automatic segmentation overcomes these
drawbacks, more appropriate techniques still need to be developed for medical
image segmentation. Therefore, we propose a hybrid image segmentation approach
that combines the features of region growing and threshold-based segmentation
techniques. A pre-processing stage is included to enable accurate brain tumor
extraction from Magnetic Resonance Imaging (MRI) data. If the tumor has holes,
the region growing segmentation algorithm alone cannot reveal them, but the
proposed hybrid segmentation technique can, and the result is improved as well.
The results are then assessed with various performance measures such as DICE,
JACCARD similarity, accuracy, sensitivity and specificity. These similarity
measures have been extensively used for evaluation against the ground truth of
each processed image, and the results are compared and analyzed. | [
"cs.CV"
] |
Recent research has shown that mapping raw pixels from a single front-facing
camera directly to steering commands is surprisingly powerful. This paper
presents a convolutional neural network (CNN) for playing CarRacing-v0 using
imitation learning in OpenAI Gym. The dataset is generated by playing the game
manually in Gym, and a data augmentation method is used to expand the dataset to 4
times its original size. Also, we read the true speed, four ABS sensors,
steering wheel position, and gyroscope for each image and designed a mixed
model by combining the sensor input and image input. After training, this model
can automatically detect the boundaries of road features and drive the robot
like a human. Compared with AlexNet and VGG16 using the average reward in
CarRacing-v0, our model achieves the best overall system performance. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
We propose NeRF-VAE, a 3D scene generative model that incorporates geometric
structure via NeRF and differentiable volume rendering. In contrast to NeRF,
our model takes into account shared structure across scenes, and is able to
infer the structure of a novel scene -- without the need to re-train -- using
amortized inference. NeRF-VAE's explicit 3D rendering process further contrasts
previous generative models with convolution-based rendering which lacks
geometric structure. Our model is a VAE that learns a distribution over
radiance fields by conditioning them on a latent scene representation. We show
that, once trained, NeRF-VAE is able to infer and render
geometrically-consistent scenes from previously unseen 3D environments using
very few input images. We further demonstrate that NeRF-VAE generalizes well to
out-of-distribution cameras, while convolutional models do not. Finally, we
introduce and study an attention-based conditioning mechanism of NeRF-VAE's
decoder, which improves model performance. | [
"stat.ML",
"cs.LG"
] |
The problem of minimization of the number of measurements needed for digital
image acquisition and reconstruction with a given accuracy is addressed. Basics
of the sampling theory are outlined to show that the lower bound of signal
sampling rate sufficient for signal reconstruction with a given accuracy is
equal to the spectrum sparsity of the signal sparse approximation that has this
accuracy. It is revealed that the compressed sensing approach, which was
advanced as a solution to the sampling rate minimization problem, is far from
reaching the sampling rate theoretical minimum. Potentials and limitations of
compressed sensing are demystified using a simple and intuitive model. A method
of image Arbitrary Sampling and Bounded Spectrum Reconstruction (ASBSR-method)
is described that allows one to approach the theoretical minimum image sampling
rate. Also presented and discussed are results of experimental verification
of the ASBSR-method and its possible applicability extensions to solving
various underdetermined inverse problems such as color image demosaicing, image
in-painting, image reconstruction from their sparsely sampled or decimated
projections, image reconstruction from the modulus of its Fourier spectrum, and
image reconstruction from its sparse samples in the Fourier domain. | [
"cs.CV",
"eess.IV"
] |
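A minimal 1D sketch of the alternating-projection idea behind bounded-spectrum reconstruction from arbitrary samples: enforce the known samples in signal space, then project onto a bounded spectrum in Fourier space. This is a toy stand-in under assumed sizes, not the ASBSR implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, band = 256, 16
rspec = np.zeros(n // 2 + 1, complex)
rspec[:band] = rng.normal(size=band) + 1j * rng.normal(size=band)
rspec[0] = rspec[0].real                       # keep the signal real-valued
signal = np.fft.irfft(rspec, n)                # band-limited ground truth

known = rng.choice(n, size=80, replace=False)  # arbitrary (non-uniform) samples
x = np.zeros(n)
for _ in range(300):
    x[known] = signal[known]                   # consistency with known samples
    X = np.fft.rfft(x)
    X[band:] = 0                               # project onto the bounded spectrum
    x = np.fft.irfft(X, n)

print("max reconstruction error:", np.abs(x - signal).max())
```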
While convolutional neural networks (CNNs) trained by back-propagation have
seen unprecedented success at semantic segmentation tasks, they are known to
struggle on out-of-distribution data. Markov random fields (MRFs) on the other
hand, encode simpler distributions over labels that, although less flexible
than UNets, are less prone to over-fitting. In this paper, we propose to fuse
both strategies by computing the product of distributions of a UNet and an MRF.
As this product is intractable, we solve for an approximate distribution using
an iterative mean-field approach. The resulting MRF-UNet is trained jointly by
back-propagation. Compared to other works using conditional random fields
(CRFs), the MRF has no dependency on the imaging data, which should allow for
less over-fitting. We show on 3D neuroimaging data that this novel network
improves generalisation to out-of-distribution samples. Furthermore, it allows
the overall number of parameters to be reduced while preserving high accuracy.
These results suggest that a classic MRF smoothness prior can allow for less
over-fitting when principally integrated into a CNN model. Our implementation
is available at https://github.com/balbasty/nitorch. | [
"cs.CV",
"eess.IV"
] |
Mechanisms of human color vision are characterized by two phenomenological
aspects: the system is nonlinear and adaptive to changing environments.
Conventional attempts to derive these features from statistics use separate
arguments for each aspect. The few statistical approaches that do consider both
phenomena simultaneously follow parametric formulations based on empirical
models. Therefore, it may be argued that the behavior does not come directly
from the color statistics but from the convenient functional form adopted. In
addition, the whole statistical analysis is often based on simplified
databases that disregard relevant physical effects in the input signal, as for
instance by assuming flat Lambertian surfaces. Here we address the simultaneous
statistical explanation of (i) the nonlinear behavior of achromatic and
chromatic mechanisms in a fixed adaptation state, and (ii) the change of such
behavior. Both phenomena emerge directly from the samples through a single
data-driven method: the Sequential Principal Curves Analysis (SPCA) with local
metric. SPCA is a new manifold learning technique to derive a set of sensors
adapted to the manifold using different optimality criteria. A new database of
colorimetrically calibrated images of natural objects under different illuminants
was collected. The results obtained by applying SPCA show that the
psychophysical behavior on color discrimination thresholds, discount of the
illuminant and corresponding pairs in asymmetric color matching, emerge
directly from realistic data regularities assuming no a priori functional form.
These results provide stronger evidence for the hypothesis of a statistically
driven organization of color sensors. Moreover, the obtained results suggest
that color perception at this low abstraction level may be guided by an error
minimization strategy rather than by the information maximization principle. | [
"stat.ML",
"q-bio.NC"
] |
We propose a new perspective on video understanding by casting the video
recognition problem as an image recognition task. We show that an image
classifier alone can suffice for video understanding without temporal modeling.
Our approach is simple and universal. It composes input frames into a super
image to train an image classifier to fulfill the task of action recognition,
in exactly the same way as classifying an image. We prove the viability of such
an idea by demonstrating strong and promising performance on four public
datasets including Kinetics400, Something-to-something (V2), MiT and Jester,
using a recently developed vision transformer. We also experiment with the
prevalent ResNet image classifiers in computer vision to further validate our
idea. The results on Kinetics400 are comparable to some of the best-performing
CNN approaches based on spatio-temporal modeling. Our code and models will be
made available at https://github.com/IBM/sifar-pytorch. | [
"cs.CV"
] |
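A minimal sketch of the super-image composition: tile sampled frames into one big image so that any 2D image classifier can consume the whole clip. The grid layout and frame count are assumptions:

```python
import numpy as np

def make_super_image(frames):
    """frames: (T, H, W, C) uint8 video frames with T a perfect square."""
    t, h, w, c = frames.shape
    g = int(np.sqrt(t))
    assert g * g == t, "pad or sample frames to a square count first"
    grid = frames.reshape(g, g, h, w, c)            # (rows, cols, H, W, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(g * h, g * w, c)

clip = np.random.randint(0, 255, (9, 224, 224, 3), np.uint8)
super_img = make_super_image(clip)   # (672, 672, 3) -> feed any image classifier
```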
Egocentric action anticipation consists in understanding which objects the
camera wearer will interact with in the near future and which actions they will
perform. We tackle the problem proposing an architecture able to anticipate
actions at multiple temporal scales using two LSTMs to 1) summarize the past,
and 2) formulate predictions about the future. The input video is processed
considering three complementary modalities: appearance (RGB), motion (optical
flow) and objects (object-based features). Modality-specific predictions are
fused using a novel Modality ATTention (MATT) mechanism which learns to weigh
modalities in an adaptive fashion. Extensive evaluations on two large-scale
benchmark datasets show that our method outperforms prior art by up to +7% on
the challenging EPIC-Kitchens dataset including more than 2500 actions, and
generalizes to EGTEA Gaze+. Our approach is also shown to generalize to the
tasks of early action recognition and action recognition. Our method is ranked
first in the public leaderboard of the EPIC-Kitchens egocentric action
anticipation challenge 2019. Please see our web pages for code and examples:
http://iplab.dmi.unict.it/rulstm - https://github.com/fpv-iplab/rulstm. | [
"cs.CV",
"cs.AI"
] |
A variety of recent works, spanning pruning, lottery tickets, and training
within random subspaces, have shown that deep neural networks can be trained
using far fewer degrees of freedom than the total number of parameters. We
explain this phenomenon by first examining the success probability of hitting a
training loss sub-level set when training within a random subspace of a given
training dimensionality. We find a sharp phase transition in the success
probability from $0$ to $1$ as the training dimension surpasses a threshold.
This threshold training dimension increases as the desired final loss
decreases, but decreases as the initial loss decreases. We then theoretically
explain the origin of this phase transition, and its dependence on
initialization and final desired loss, in terms of precise properties of the
high dimensional geometry of the loss landscape. In particular, we show via
Gordon's escape theorem, that the training dimension plus the Gaussian width of
the desired loss sub-level set, projected onto a unit sphere surrounding the
initialization, must exceed the total number of parameters for the success
probability to be large. In several architectures and datasets, we measure the
threshold training dimension as a function of initialization and demonstrate
that it is a small fraction of the total number of parameters, thereby
implying, by our theory, that successful training with so few dimensions is
possible precisely because the Gaussian width of low loss sub-level sets is
very large. Moreover, this threshold training dimension provides a strong null
model for assessing the efficacy of more sophisticated ways to reduce training
degrees of freedom, including lottery tickets as well as a more optimal method we
introduce: lottery subspaces. | [
"cs.LG",
"stat.ML"
] |
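A minimal sketch of training within a random subspace: every weight is a fixed random affine function of a small trainable vector, so the training dimensionality is d rather than the full parameter count. The toy task, projection scaling, and dimensions are assumptions:

```python
import torch, torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
theta0 = torch.cat([p.detach().flatten() for p in model.parameters()])
n_params, d = theta0.numel(), 50
A = torch.randn(n_params, d) / d ** 0.5  # fixed random projection
z = torch.zeros(d, requires_grad=True)   # the only trainable degrees of freedom
opt = torch.optim.Adam([z], lr=1e-2)

x, y = torch.randn(128, 10), torch.randint(0, 2, (128,))
for _ in range(100):
    theta = theta0 + A @ z               # parameters live on a random affine subspace
    i, out = 0, x
    for layer in model:                  # functional forward with projected weights
        if isinstance(layer, nn.Linear):
            w_n, b_n = layer.weight.numel(), layer.bias.numel()
            w = theta[i:i + w_n].view_as(layer.weight); i += w_n
            b = theta[i:i + b_n]; i += b_n
            out = out @ w.T + b
        else:
            out = layer(out)
    loss = nn.functional.cross_entropy(out, y)
    opt.zero_grad(); loss.backward(); opt.step()
```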
Standard frame-based cameras that sample light intensity frames are heavily
impacted by motion blur for high-speed motion and fail to perceive scene
accurately when the dynamic range is high. Event-based cameras, on the other
hand, overcome these limitations by asynchronously detecting the variation in
individual pixel intensities. However, event cameras only provide information
about pixels in motion, leading to sparse data. Hence, estimating the overall
dense behavior of pixels is difficult. To address such issues associated with
the sensors, we present Fusion-FlowNet, a sensor fusion framework for
energy-efficient optical flow estimation using both frame- and event-based
sensors, leveraging their complementary characteristics. Our proposed network
architecture is also a fusion of Spiking Neural Networks (SNNs) and Analog
Neural Networks (ANNs) where each network is designed to simultaneously process
asynchronous event streams and regular frame-based images, respectively. Our
network is end-to-end trained using unsupervised learning to avoid expensive
video annotations. The method generalizes well across distinct environments
(rapid motion and challenging lighting conditions) and demonstrates
state-of-the-art optical flow prediction on the Multi-Vehicle Stereo Event
Camera (MVSEC) dataset. Furthermore, our network offers substantial savings in
terms of the number of network parameters and computational energy cost. | [
"cs.CV",
"cs.NE"
] |
In this work, we revisit atrous convolution, a powerful tool to explicitly
adjust a filter's field-of-view as well as control the resolution of feature
responses computed by Deep Convolutional Neural Networks, in the application of
semantic image segmentation. To handle the problem of segmenting objects at
multiple scales, we design modules which employ atrous convolution in cascade
or in parallel to capture multi-scale context by adopting multiple atrous
rates. Furthermore, we propose to augment our previously proposed Atrous
Spatial Pyramid Pooling module, which probes convolutional features at multiple
scales, with image-level features encoding global context and further boost
performance. We also elaborate on implementation details and share our
experience on training our system. The proposed `DeepLabv3' system
significantly improves over our previous DeepLab versions without DenseCRF
post-processing and attains comparable performance with other state-of-the-art
models on the PASCAL VOC 2012 semantic image segmentation benchmark. | [
"cs.CV"
] |
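A simplified Atrous Spatial Pyramid Pooling module in the spirit of DeepLabv3, with parallel dilated convolutions plus image-level pooling followed by a 1x1 fusion; the channel counts and the common (6, 12, 18) rates are illustrative assumptions, not the exact published configuration:

```python
import torch, torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, c_in, c_out, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus one dilated 3x3 branch per atrous rate.
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 1)] +
            [nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates])
        # Image-level features encode global context.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c_in, c_out, 1))
        self.project = nn.Conv2d(c_out * (len(rates) + 2), c_out, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        g = self.image_pool(x)
        g = nn.functional.interpolate(g, size=x.shape[-2:], mode="bilinear",
                                      align_corners=False)
        return self.project(torch.cat(feats + [g], dim=1))

y = ASPP(256, 64)(torch.randn(1, 256, 33, 33))   # -> (1, 64, 33, 33)
```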
The number of possible methods of generalizing binary classification to
multi-class classification increases exponentially with the number of class
labels. Often, the best method of doing so will be highly problem dependent.
Here we present classification software in which the partitioning of
multi-class classification problems into binary classification problems is
specified using a recursive control language. | [
"stat.ML",
"cs.LG"
] |
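A toy sketch of the idea: a nested-dichotomy classifier whose partition of the class labels into binary problems is specified recursively. Nested tuples stand in for the paper's recursive control language, and the base learner is an arbitrary choice:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class NestedDichotomy:
    def __init__(self, partition, base=LogisticRegression(max_iter=1000)):
        self.partition, self.base = partition, base

    def fit(self, X, y):
        if not isinstance(self.partition, tuple):   # leaf: a single class label
            return self
        left, right = self.partition
        self.labels_ = np.isin(y, self._classes(right)).astype(int)
        self.clf_ = clone(self.base).fit(X, self.labels_)   # binary subproblem
        self.children_ = [NestedDichotomy(p, self.base)
                          .fit(X[self.labels_ == i], y[self.labels_ == i])
                          for i, p in enumerate((left, right))]
        return self

    def predict(self, X):
        if not isinstance(self.partition, tuple):
            return np.full(len(X), self.partition)
        side = self.clf_.predict(X)
        out = np.empty(len(X), dtype=int)
        for i, child in enumerate(self.children_):
            if (side == i).any():
                out[side == i] = child.predict(X[side == i])
        return out

    def _classes(self, p):
        return [p] if not isinstance(p, tuple) else \
            self._classes(p[0]) + self._classes(p[1])

X = np.random.randn(300, 4); y = np.random.randint(0, 4, 300)
model = NestedDichotomy(((0, 1), (2, 3))).fit(X, y)  # (0|1) vs (2|3), then inner splits
pred = model.predict(X)
```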
By integrating dynamics models into model-free reinforcement learning (RL)
methods, model-based value expansion (MVE) algorithms have shown a significant
advantage in sample efficiency as well as value estimation. However, these
methods suffer from higher function approximation errors than model-free
methods in stochastic environments due to a lack of modeling the environmental
randomness. As a result, their performance lags behind the best model-free
algorithms in some challenging scenarios. In this paper, we propose a novel
Hybrid-RL method that builds on MVE, namely the Risk Averse Value Expansion
(RAVE). With imaginative rollouts generated by an ensemble of probabilistic
dynamics models, we further introduce the aversion of risks by seeking the
lower confidence bound of the estimation. Experiments on a range of challenging
environments show that by modeling the uncertainty completely, RAVE
substantially enhances the robustness of previous model-based methods, and
yields state-of-the-art performance. With this technique, our solution won
first place in the NeurIPS 2019: Learn to Move challenge. | [
"cs.LG",
"cs.AI"
] |
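A sketch of the risk-averse target construction: an ensemble of value estimates from imagined rollouts is collapsed to a lower confidence bound rather than a mean. The tensor shapes and the kappa coefficient are assumptions:

```python
import torch

def risk_averse_target(value_estimates, kappa=1.0):
    """value_estimates: (E, B) values from E ensemble members for B states."""
    mean = value_estimates.mean(dim=0)
    std = value_estimates.std(dim=0)
    return mean - kappa * std      # penalize spread across the ensemble

ensemble_values = torch.randn(5, 32) + 10.0   # toy estimates from 5 dynamics models
target = risk_averse_target(ensemble_values)  # (32,) lower-confidence-bound targets
```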
Recent analyses of certain gradient descent optimization methods have shown
that performance can degrade in some settings - such as with stochasticity or
implicit momentum. In deep reinforcement learning (Deep RL), such optimization
methods are often used for training neural networks via the temporal difference
error or policy gradient. As an agent improves over time, the optimization
target changes and thus the loss landscape (and local optima) change. Due to
the failure modes of those methods, the ideal choice of optimizer for Deep RL
remains unclear. As such, we provide an empirical analysis of the effects that
a wide range of gradient descent optimizers and their hyperparameters have on
policy gradient methods, a subset of Deep RL algorithms, for benchmark
continuous control tasks. We find that adaptive optimizers have a narrow window
of effective learning rates, diverging in other cases, and that the
effectiveness of momentum varies depending on the properties of the
environment. Our analysis suggests that there is significant interplay between
the dynamics of the environment and Deep RL algorithm properties which are not
necessarily accounted for by traditional adaptive gradient methods. We provide
suggestions for optimal settings of current methods and further lines of
research based on our findings. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Real-time scene understanding has become crucial in many applications such as
autonomous driving. In this paper, we propose a deep architecture, called
BlitzNet, that jointly performs object detection and semantic segmentation in
one forward pass, allowing real-time computations. Besides the computational
gain of having a single network to perform several tasks, we show that object
detection and semantic segmentation benefit from each other in terms of
accuracy. Experimental results for VOC and COCO datasets show state-of-the-art
performance for object detection and segmentation among real time systems. | [
"cs.CV"
] |
This paper presents novel techniques for recovering 3D dense scene flow,
based on differential analysis of 4D light fields. The key enabling result is a
per-ray linear equation, called the ray flow equation, that relates 3D scene
flow to 4D light field gradients. The ray flow equation is invariant to 3D
scene structure and applicable to a general class of scenes, but is
under-constrained (3 unknowns per equation). Thus, additional constraints must
be imposed to recover motion. We develop two families of scene flow algorithms
by leveraging the structural similarity between ray flow and optical flow
equations: local 'Lucas-Kanade' ray flow and global 'Horn-Schunck' ray flow,
inspired by corresponding optical flow methods. We also develop a combined
local-global method by utilizing the correspondence structure in the light
fields. We demonstrate high precision 3D scene flow recovery for a wide range
of scenarios, including rotation and non-rigid motion. We analyze the
theoretical and practical performance limits of the proposed techniques via the
light field structure tensor, a 3x3 matrix that encodes the local structure of
light fields. We envision that the proposed analysis and algorithms will lead
to design of future light-field cameras that are optimized for motion sensing,
in addition to depth sensing. | [
"cs.CV",
"eess.IV",
"I.4.8"
] |
Current image transformation and recoloring algorithms try to introduce
artistic effects in the photographed images, based on user input of target
image(s) or selection of pre-designed filters. These manipulations, although
intended to enhance the impact of an image on the viewer, do not include the
option of image transformation by specifying the affect information. In this
paper we present an automatic image-transformation method that transforms the
source image such that it can induce an emotional affect on the viewer, as
desired by the user. Our proposed novel image emotion transfer algorithm does
not require a user-specified target image. The proposed algorithm uses features
extracted from top layers of deep convolutional neural network and the
user-specified emotion distribution to select multiple target images from an
image database for color transformation, such that the resultant image has
desired emotional impact. Our method can handle a more diverse set of photographs
than the previous methods. We conducted a detailed user study showing the
effectiveness of our proposed method. A discussion and analysis of failure
cases has also been provided, indicating the inherent limitations of
color-transfer based methods for emotion assignment.
Project Page: http://im.itu.edu.pk/affective-image-transfer/ | [
"cs.CV"
] |
Learning a good representation is an essential component for deep
reinforcement learning (RL). Representation learning is especially important in
multitask and partially observable settings where building a representation of
the unknown environment is crucial to solve the tasks. Here we introduce
Prediction of Bootstrap Latents (PBL), a simple and flexible self-supervised
representation learning algorithm for multitask deep RL. PBL builds on
multistep predictive representations of future observations, and focuses on
capturing structured information about environment dynamics. Specifically, PBL
trains its representation by predicting latent embeddings of future
observations. These latent embeddings are themselves trained to be predictive
of the aforementioned representations. These predictions form a bootstrapping
effect, allowing the agent to learn more about the key aspects of the
environment dynamics. In addition, by defining prediction tasks completely in
latent space, PBL provides the flexibility of using multimodal observations
involving pixel images, language instructions, rewards and more. We show in our
experiments that PBL delivers across-the-board improved performance over state
of the art deep RL agents in the DMLab-30 and Atari-57 multitask setting. | [
"cs.LG",
"cs.AI"
] |
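A minimal sketch of the bootstrapped two-way prediction in PBL: the agent state predicts a latent embedding of a future observation, and that embedding is itself trained to predict the agent state. The sizes and linear networks are illustrative assumptions:

```python
import torch, torch.nn as nn

obs_dim, state_dim, latent_dim = 32, 64, 16     # hypothetical sizes
embed = nn.Linear(obs_dim, latent_dim)          # observation -> latent embedding
fwd = nn.Linear(state_dim, latent_dim)          # agent state -> predicted latent
rev = nn.Linear(latent_dim, state_dim)          # latent -> predicted agent state

state = torch.randn(8, state_dim)               # from the agent's recurrent core
future_obs = torch.randn(8, obs_dim)

z = embed(future_obs)
loss_fwd = (fwd(state) - z.detach()).pow(2).mean()   # state predicts future latent
loss_rev = (rev(z) - state.detach()).pow(2).mean()   # latent predicts state (bootstrap)
(loss_fwd + loss_rev).backward()
```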
Deep learning has seen a movement away from representing examples with a
monolithic hidden state towards a richly structured state. For example,
Transformers segment by position, and object-centric architectures decompose
images into entities. In all these architectures, interactions between
different elements are modeled via pairwise interactions: Transformers make use
of self-attention to incorporate information from other positions;
object-centric architectures make use of graph neural networks to model
interactions among entities. However, pairwise interactions may not achieve
global coordination or a coherent, integrated representation that can be used
for downstream tasks. In cognitive science, a global workspace architecture has
been proposed in which functionally specialized components share information
through a common, bandwidth-limited communication channel. We explore the use
of such a communication channel in the context of deep learning for modeling
the structure of complex environments. The proposed method includes a shared
workspace through which communication among different specialist modules takes
place but due to limits on the communication bandwidth, specialist modules must
compete for access. We show that capacity limitations have a rational basis in
that (1) they encourage specialization and compositionality and (2) they
facilitate the synchronization of otherwise independent specialists. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We present Language-binding Object Graph Network, the first neural reasoning
method with dynamic relational structures across both visual and textual
domains with applications in visual question answering. Relaxing the common
assumption made by current models that the object predicates pre-exist and stay
static, passive to the reasoning process, we propose that these dynamic
predicates expand across the domain borders to include pair-wise
visual-linguistic object binding. In our method, these contextualized object
links are actively found within each recurrent reasoning step without relying
on external predicative priors. These dynamic structures reflect the
conditional dual-domain object dependency given the evolving context of the
reasoning through co-attention. Such discovered dynamic graphs facilitate
multi-step knowledge combination and refinements that iteratively deduce the
compact representation of the final answer. The effectiveness of this model is
demonstrated on image question answering, achieving favorable performance on
major VQA datasets. Our method outperforms other methods in sophisticated
question-answering tasks wherein multiple object relations are involved. The
graph structure effectively assists the progress of training, and therefore the
network learns efficiently compared to other reasoning models. | [
"cs.CV",
"cs.LG"
] |
We propose graph-based predictable feature analysis (GPFA), a new method for
unsupervised learning of predictable features from high-dimensional time
series, where high predictability is understood very generically as low
variance in the distribution of the next data point given the previous ones. We
show how this measure of predictability can be understood in terms of graph
embedding as well as how it relates to the information-theoretic measure of
predictive information in special cases. We confirm the effectiveness of GPFA
on different datasets, comparing it to three existing algorithms with similar
objectives---namely slow feature analysis, forecastable component analysis, and
predictable feature analysis---to which GPFA shows very competitive results. | [
"cs.LG"
] |
This paper presents a methodology and workflow that overcome the limitations
of the conventional Generative Adversarial Networks (GANs) for geological
facies modeling. It attempts to improve the training stability and guarantee
the diversity of the generated geology through interpretable latent vectors.
The resulting samples are ensured to have equal probability (or an unbiased
distribution) as from the training dataset. This is critical when applying GANs
to generate unbiased and representative geological models that can be further
used to facilitate objective uncertainty evaluation and optimal decision-making
in oil field exploration and development.
We propose and implement a new variant of GANs called Info-WGAN for the
geological facies modeling that combines Information Maximizing Generative
Adversarial Network (InfoGAN) with Wasserstein distance and Gradient Penalty
(GP) for learning interpretable latent codes as well as generating stable and
unbiased distribution from the training data. Different from the original GAN
design, InfoGAN can use the training images with full, partial, or no labels to
perform disentanglement of the complex sedimentary types exhibited in the
training dataset to achieve the variety and diversity of the generated samples.
This is accomplished by adding additional categorical variables that provide
disentangled semantic representations besides the mere randomized latent vector
used in the original GANs. By such means, a regularization term is used to
maximize the mutual information between such latent categorical codes and the
generated geological facies in the loss function.
Furthermore, the resulting unbiased sampling by Info-WGAN makes the data
conditioning much easier than the conventional GANs in geological modeling
because of the variety and diversity as well as the equal probability of the
unconditional sampling by the generator. | [
"cs.LG",
"eess.IV",
"physics.comp-ph",
"stat.ML"
] |
Combined variations containing low-resolution and occlusion often present in
face images in the wild, e.g., under the scenario of video surveillance. While
most of the existing face image recovery approaches can handle only one type of
variation per model, in this work, we propose a deep generative adversarial
network (FCSR-GAN) for performing joint face completion and face
super-resolution via multi-task learning. The generator of FCSR-GAN aims to
recover a high-resolution face image without occlusion given an input
low-resolution face image with occlusion. The discriminator of FCSR-GAN uses a
set of carefully designed losses (an adversarial loss, a perceptual loss, a
pixel loss, a smooth loss, a style loss, and a face prior loss) to assure the
high quality of the recovered high-resolution face images without occlusion.
The whole network of FCSR-GAN can be trained end-to-end using our two-stage
training strategy. Experimental results on the public-domain CelebA and Helen
databases show that the proposed approach outperforms the state-of-the-art
methods in jointly performing face super-resolution (up to 8 $\times$) and face
completion, and shows good generalization ability in cross-database testing.
Our FCSR-GAN is also useful for improving face identification performance when
there are low-resolution and occlusion in face images. | [
"cs.CV"
] |
Convolutional Neural Networks have demonstrated superior performance on
single image depth estimation in recent years. These works usually use stacked
spatial pooling or strided convolutions to extract high-level information, which
is common practice in classification tasks. However, depth estimation is a dense
prediction problem, and low-resolution feature maps usually generate blurred
depth maps, which is undesirable in applications. In order to produce a
high-quality depth map, i.e., clean and accurate, we propose a network
consisting of a Dense
depth map, say clean and accurate, we propose a network consists of a Dense
Feature Extractor (DFE) and a Depth Map Generator (DMG). The DFE combines
ResNet and dilated convolutions. It extracts multi-scale information from input
image while keeping the feature maps dense. As for the DMG, we use an attention
mechanism to fuse the multi-scale features produced by the DFE. Our network is trained
end-to-end and does not need any post-processing. Hence, it runs fast and can
predict depth map in about 15 fps. Experiment results show that our method is
competitive with the state-of-the-art in quantitative evaluation, but can
preserve better structural details of the scene depth. | [
"cs.CV"
] |
Injecting human knowledge is an effective way to accelerate reinforcement
learning (RL). However, these methods are underexplored. This paper presents
our discovery that an abstract forward model (Thought-game (TG)) combined with
transfer learning is an effective approach. We take StarCraft II as the study
environment. With the help of a designed TG, the agent can learn a 99\%
win-rate on a 64$\times$64 map against the Level-7 built-in AI, using only 1.08
hours in a single commercial machine. We also show that the TG method is not as
restrictive as it was thought to be. It can work with roughly designed TGs, and
can also be useful when the environment changes. Compared with previous
model-based RL methods, we show that TG is more effective. We also present a TG
hypothesis that characterizes the influence of the fidelity level of the TG. For
real games that have unequal state and action spaces, we propose a novel XfrNet
whose usefulness is validated while achieving a 90\% win-rate against the
cheating Level-10 AI.
We argue the TG method might shed light on further studies of efficient RL with
human knowledge. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Deep Q-learning is investigated as an end-to-end solution to estimate the
optimal strategies for acting on time series input. Experiments are conducted
on two idealized trading games. 1) Univariate: the only input is a wave-like
price time series, and 2) Bivariate: the input includes a random stepwise price
time series and a noisy signal time series, which is positively correlated with
future price changes. The Univariate game tests whether the agent can capture
the underlying dynamics, and the Bivariate game tests whether the agent can
utilize the hidden relation among the inputs. Stacked Gated Recurrent Unit
(GRU), Long Short-Term Memory (LSTM) units, Convolutional Neural Network (CNN),
and multi-layer perceptron (MLP) are used to model Q values. For both games,
all agents successfully find a profitable strategy. The GRU-based agents show
the best overall performance in the Univariate game, while the MLP-based agents
outperform others in the Bivariate game. | [
"cs.LG",
"stat.ML"
] |
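To make the model family concrete, here is a minimal GRU-based Q-network for windowed time-series input; the window length, hidden size, and three-action setup (e.g. long/flat/short) are illustrative assumptions, not the paper's exact configuration:

import torch
import torch.nn as nn

class GRUQNetwork(nn.Module):
    # Maps a window of observations to one Q-value per action.
    def __init__(self, n_features, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):  # x: (batch, time, n_features)
        _, h = self.gru(x)  # final hidden state: (1, batch, hidden)
        return self.head(h.squeeze(0))  # (batch, n_actions)

q_net = GRUQNetwork(n_features=2, n_actions=3)
window = torch.randn(1, 50, 2)  # price and signal series, 50 steps
action = q_net(window).argmax(dim=1)  # greedy action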
The optimal predictor for a linear dynamical system (with hidden state and
Gaussian noise) takes the form of an autoregressive linear filter, namely the
Kalman filter. However, a fundamental problem in reinforcement learning and
control theory is to make optimal predictions in an unknown dynamical system.
To this end, we take the approach of directly learning an autoregressive filter
for time-series prediction under unknown dynamics. Our analysis differs from
previous statistical analyses in that we regress not only on the inputs to the
dynamical system, but also the outputs, which is essential to dealing with
process noise. The main challenge is to estimate the filter under worst case
input (in $\mathcal H_\infty$ norm), for which we use an $L^\infty$-based
objective rather than ordinary least-squares. For learning an autoregressive
model, our algorithm has optimal sample complexity in terms of the rollout
length, which does not seem to be attained by naive least-squares. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
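Schematically, with notation chosen here purely for illustration (filter length $k$, inputs $u_t$, outputs $y_t$), the contrast between ordinary least-squares and an $L^\infty$-based objective of the kind described above is:

\begin{align*}
\text{(OLS)} \quad & \min_{a,b} \; \sum_t \Big( y_t - \sum_{i=1}^{k} a_i\, y_{t-i} - \sum_{i=1}^{k} b_i\, u_{t-i} \Big)^2, \\
(L^\infty) \quad & \min_{a,b} \; \max_t \; \Big| y_t - \sum_{i=1}^{k} a_i\, y_{t-i} - \sum_{i=1}^{k} b_i\, u_{t-i} \Big|,
\end{align*}

where regressing on the past outputs $y_{t-i}$ as well as the inputs $u_{t-i}$ is what handles process noise.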
Recent years have shown that deep neural networks are a valuable tool in the
field of computer vision. This paper addresses the use of two different
kinds of network architectures, namely LeNet and Network in Network (NiN). They
will be compared in terms of both performance and computational efficiency by
addressing the classification and detection problems. In this paper, multiple
databases are used to test the networks. One of them contains images depicting
burn wounds from pediatric cases, another contains an extensive number of art
images, and further facial databases are used for facial keypoint
detection. | [
"cs.CV"
] |
Recently, the term explainable AI has come to describe approaches to producing
artificial intelligence models that allow interpretation. Symbolic regression
models, however, have long been in use and are perfectly explainable and
mathematically tractable: in this contribution we demonstrate how to use
symbolic regression methods to infer the optimal control of a dynamical system
given one or several optimization criteria, or cost functions. In previous
publications, network control was achieved by automated machine learning
control using genetic programming. Here, we focus on the subsequent
analysis of the analytical expressions which result from the machine learning.
In particular, we use AUTO to analyze the stability properties of the
controlled oscillator system which served as our model. As a result, we show
that there is a considerable advantage of explainable models over less
accessible neural networks. | [
"cs.LG",
"cs.AI",
"nlin.AO",
"physics.data-an"
] |
Artificial Neural Networks are connectionist systems that perform a given
task by learning on examples without having prior knowledge about the task.
This is done by finding an optimal point estimate for the weights in every
node. Generally, networks using point estimates as weights perform well with
large datasets, but they fail to express uncertainty in regions with little or
no data, leading to overconfident decisions.
In this paper, Bayesian Convolutional Neural Network (BayesCNN) using
Variational Inference is proposed, that introduces probability distribution
over the weights. Furthermore, the proposed BayesCNN architecture is applied to
tasks like Image Classification, Image Super-Resolution and Generative
Adversarial Networks. The results are compared to point-estimates based
architectures on the MNIST, CIFAR-10 and CIFAR-100 datasets for the Image
Classification task, on the BSD300 dataset for the Image Super-Resolution task,
and again on the CIFAR-10 dataset for the Generative Adversarial Network task.
BayesCNN is based on Bayes by Backprop which derives a variational
approximation to the true posterior. We, therefore, introduce the idea of
applying two convolutional operations, one for the mean and one for the
variance. Our proposed method not only achieves performance equivalent to
frequentist inference in identical architectures but also incorporates a
measure of uncertainty and regularisation. It further eliminates the use
of dropout in the model. Moreover, we predict how certain the model prediction
is based on the epistemic and aleatoric uncertainties and empirically show how
the uncertainty can decrease, allowing the decisions made by the network to
become more deterministic as the training accuracy increases. Finally, we
propose ways to prune the Bayesian architecture and to make it more
computationally and time efficient. | [
"cs.LG",
"stat.ML"
] |
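A minimal sketch of the two-convolution idea above (one operation for the mean, one for the variance), implemented here via the local reparameterisation trick over a factorised Gaussian weight posterior; the initialisation and layer sizes are illustrative assumptions, not the BayesCNN reference code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesConv2d(nn.Module):
    # Convolution with a factorised Gaussian posterior over the weights.
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        self.w_rho = nn.Parameter(torch.full((out_ch, in_ch, k, k), -5.0))

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)  # positive standard deviations
        act_mu = F.conv2d(x, self.w_mu)  # conv 1: mean of the activations
        act_var = F.conv2d(x.pow(2), w_sigma.pow(2))  # conv 2: their variance
        eps = torch.randn_like(act_mu)
        return act_mu + act_var.clamp_min(1e-8).sqrt() * eps  # sampled output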
In this paper, we consider a model called CHARME (Conditional Heteroscedastic
Autoregressive Mixture of Experts), a class of generalized mixture of nonlinear
nonparametric AR-ARCH time series. Under certain Lipschitz-type conditions on
the autoregressive and volatility functions, we prove that this model is
stationary, ergodic and $\tau$-weakly dependent. These conditions are much
weaker than those presented in the literature that treats this model. Moreover,
this result forms the theoretical basis for deriving an asymptotic theory of
the underlying (non)parametric estimation, which we present for this model. As
an application, from the universal approximation property of neural networks
(NN), we develop a learning theory for the NN-based autoregressive functions of
the model, where the strong consistency and asymptotic normality of the
considered estimator of the NN weights and biases are guaranteed under weak
conditions. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] |
Generative adversarial networks (GANs) have shown great success in
applications such as image generation and inpainting. However, they typically
require large datasets, which are often not available, especially in the
context of prediction tasks such as image segmentation that require labels.
Therefore, methods such as the CycleGAN use more easily available unlabelled
data, but do not offer a way to leverage additional labelled data for improved
performance. To address this shortcoming, we show how to factorise the joint
data distribution into a set of lower-dimensional distributions along with
their dependencies. This allows splitting the discriminator in a GAN into
multiple "sub-discriminators" that can be independently trained from incomplete
observations. Their outputs can be combined to estimate the density ratio
between the joint real and the generator distribution, which enables training
generators as in the original GAN framework. We apply our method to image
generation, image segmentation and audio source separation, and obtain improved
performance over a standard GAN when additional incomplete training examples
are available. For the Cityscapes segmentation task in particular, our method
also improves accuracy by an absolute 14.9% over CycleGAN while using only 25
additional paired examples. | [
"cs.LG",
"stat.ML"
] |
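A hedged sketch of the combination step: if each sub-discriminator is trained with the standard GAN loss on one factor of the assumed factorisation, its logit estimates the log density ratio for that factor, and the joint log ratio is (under that factorisation) their sum; the two-function decomposition below is purely illustrative:

import torch
import torch.nn.functional as F

def joint_density_ratio_logit(sub_logits):
    # sub_logits: list of tensors, logit(D_i(x)) per factor; summing the
    # per-factor log ratios recovers the joint log density ratio.
    return torch.stack(sub_logits, dim=0).sum(dim=0)

def generator_loss(sub_logits):
    # Non-saturating generator loss on the combined ratio estimate.
    return -F.logsigmoid(joint_density_ratio_logit(sub_logits)).mean()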
Advanced video analytic systems, including scene classification and object
detection, have seen widespread success in various domains such as smart cities
and autonomous transportation. With an ever-growing number of powerful client
devices, there is incentive to move these heavy video analytics workloads from
the cloud to mobile devices to achieve low latency and real-time processing and
to preserve user privacy. However, most video analytic systems are heavyweight
and are trained offline with some pre-defined latency or accuracy requirements.
This makes them unable to adapt at runtime in the face of three types of
dynamism -- the input video characteristics change, the amount of compute
resources available on the node changes due to co-located applications, and the
user's latency-accuracy requirements change. In this paper we introduce
ApproxDet, an adaptive video object detection framework for mobile devices to
meet accuracy-latency requirements in the face of changing content and resource
contention scenarios. To achieve this, we introduce a multi-branch object
detection kernel (layered on Faster R-CNN), which incorporates a data-driven
modeling approach on the performance metrics, and a latency SLA-driven
scheduler to pick the best execution branch at runtime. We couple this kernel
with approximable video object tracking algorithms to create an end-to-end
video object detection system. We evaluate ApproxDet on a large benchmark video
dataset and compare quantitatively to AdaScale and YOLOv3. We find that
ApproxDet is able to adapt to a wide variety of contention and content
characteristics and outshines all baselines, e.g., it achieves 52% lower
latency and 11.1% higher accuracy over YOLOv3. | [
"cs.CV"
] |
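The scheduler's decision rule can be illustrated with a small hedged sketch: given per-branch latency and accuracy predictions (placeholder numbers below; ApproxDet's actual predictors are data-driven models), choose the most accurate branch whose predicted latency meets the SLA:

from dataclasses import dataclass

@dataclass
class Branch:
    name: str
    latency_ms: float  # predicted by the latency model
    accuracy: float    # predicted by the accuracy model

def pick_branch(branches, sla_ms):
    feasible = [b for b in branches if b.latency_ms <= sla_ms]
    if not feasible:  # degrade gracefully to the fastest branch
        return min(branches, key=lambda b: b.latency_ms)
    return max(feasible, key=lambda b: b.accuracy)

branches = [Branch("full", 120.0, 0.82),
            Branch("low-res", 60.0, 0.74),
            Branch("track-only", 25.0, 0.65)]
print(pick_branch(branches, sla_ms=70.0).name)  # -> "low-res"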
We present a new pipeline for holistic 3D scene understanding from a single
image, which could predict object shapes, object poses, and scene layout. As it
is a highly ill-posed problem, existing methods usually suffer from inaccurate
estimation of both shapes and layout especially for the cluttered scene due to
the heavy occlusion between objects. We propose to utilize the latest deep
implicit representation to solve this challenge. We not only propose an
image-based local structured implicit network to improve the object shape
estimation, but also refine the 3D object pose and scene layout via a novel
implicit scene graph neural network that exploits the implicit local object
features. A novel physical violation loss is also proposed to avoid incorrect
context between objects. Extensive experiments demonstrate that our method
outperforms the state-of-the-art methods in terms of object shape, scene layout
estimation, and 3D object detection. | [
"cs.CV"
] |
In this paper we present a novel joint approach for optimising surface
curvature and pose alignment. We present two implementations of this joint
optimisation strategy, including a fast implementation that uses two frames and
an offline multi-frame approach. We demonstrate an order of magnitude
improvement in simulation over state-of-the-art dense relative point-to-plane
Iterative Closest Point (ICP) pose alignment using our dense joint
frame-to-frame approach and show comparable pose drift to dense point-to-plane
ICP bundle adjustment using low-cost depth sensors. Additionally our improved
joint quadric based approach can be used to more accurately estimate surface
curvature on noisy point clouds than previous approaches. | [
"cs.CV"
] |
In electronic trading markets, often only the price or volume time series that
result from the interaction of multiple market participants are directly
observable. In order to test trading strategies before deploying them to
real-time trading, one often uses multi-agent market environments calibrated so
that the time series resulting from the interaction of simulated agents
resemble historical ones. To ensure adequate testing, one must test trading
strategies in a
variety of market scenarios -- which includes both scenarios that represent
ordinary market days as well as stressed markets (most recently observed due to
the beginning of the COVID pandemic). In this paper, we address the problem of
multi-agent simulator parameter calibration to allow the simulator to capture
characteristics of different market regimes. We propose a novel two-step method
to train a discriminator that is able to distinguish between "real" and "fake"
price and volume time series as part of a GAN with self-attention, and then
utilize it within an optimization framework to tune the parameters of a
simulator model with known agent archetypes to represent a market scenario. We
conclude with experimental results that demonstrate the effectiveness of our
method. | [
"cs.LG",
"cs.MA",
"q-fin.TR"
] |
Video captioning, i.e. the task of generating captions from video sequences,
creates a bridge between the Natural Language Processing and Computer Vision
domains of computer science. The task of generating a semantically accurate
description of a video is quite complex. Considering the complexity of the
problem, the results obtained in recent research works are praiseworthy.
However, there is plenty of scope for further investigation. This paper
addresses this scope and proposes a novel solution. Most video captioning
models comprise two sequential/recurrent layers - one as a video-to-context
encoder and the other as a context-to-caption decoder. This paper proposes a
novel architecture, namely Semantically Sensible Video Captioning (SSVC) which
modifies the context generation mechanism by using two novel approaches -
"stacked attention" and "spatial hard pull". As there are no exclusive metrics
for evaluating video captioning models, we emphasize both quantitative and
qualitative analysis of our model. Hence, we have used the BLEU scoring metric
for quantitative analysis and have proposed a human evaluation metric for
qualitative analysis, namely the Semantic Sensibility (SS) scoring metric. SS
Score overcomes the shortcomings of common automated scoring metrics. This
paper reports that the use of the aforementioned novelties improves the
performance of state-of-the-art architectures. | [
"cs.CV"
] |
We propose to compose dynamic tree structures that place the objects in an
image into a visual context, helping visual reasoning tasks such as scene graph
generation and visual Q&A. Our visual context tree model, dubbed VCTree, has
two key advantages over existing structured object representations including
chains and fully-connected graphs: 1) The efficient and expressive binary tree
encodes the inherent parallel/hierarchical relationships among objects, e.g.,
"clothes" and "pants" usually co-occur and belong to "person"; 2) the
dynamic structure varies from image to image and task to task, allowing more
content-/task-specific message passing among objects. To construct a VCTree, we
design a score function that calculates the task-dependent validity between
each object pair, and the tree is the binary version of the maximum spanning
tree from the score matrix. Then, visual contexts are encoded by bidirectional
TreeLSTM and decoded by task-specific models. We develop a hybrid learning
procedure which integrates end-task supervised learning and the tree structure
reinforcement learning, where the former's evaluation result serves as a
self-critic for the latter's structure exploration. Experimental results on two
benchmarks, which require reasoning over contexts: Visual Genome for scene
graph generation and VQA2.0 for visual Q&A, show that VCTree outperforms
state-of-the-art results while discovering interpretable visual context
structures. | [
"cs.CV"
] |
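The tree-construction step can be sketched as a Prim-style maximum spanning tree over the pairwise score matrix (random placeholder scores below; in VCTree the scores are learned and task-dependent, and the subsequent binarization is omitted here):

import numpy as np

def maximum_spanning_tree(scores):
    # Prim's algorithm on a symmetric score matrix; returns child -> parent.
    n = scores.shape[0]
    parent = {0: None}
    while len(parent) < n:
        best = None
        for u in parent:
            for v in range(n):
                if v not in parent and (best is None or scores[u, v] > best[2]):
                    best = (u, v, scores[u, v])
        u, v, _ = best
        parent[v] = u
    return parent

s = np.random.rand(5, 5)
s = (s + s.T) / 2  # symmetric pairwise validity scores
print(maximum_spanning_tree(s))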
In unsupervised data generation tasks, besides the generation of a sample
based on previous observations, one would often like to give hints to the model
in order to bias the generation towards desirable metrics. We propose a method
that combines Generative Adversarial Networks (GANs) and reinforcement learning
(RL) in order to accomplish exactly that. While RL biases the data generation
process towards arbitrary metrics, the GAN component of the reward function
ensures that the model still remembers information learned from data. We build
upon previous results that incorporated GANs and RL in order to generate
sequence data and test this model in several settings for the generation of
molecules encoded as text sequences (SMILES) and in the context of music
generation, showing for each case that we can effectively bias the generation
process towards desired metrics. | [
"stat.ML",
"cs.LG"
] |
It is well known that direct training of deep neural networks will generally
lead to poor results. Major progress in recent years has been the invention of
various pretraining methods to initialize network parameters, and it was shown
that such methods lead to good prediction performance. However, the reason for
the success of pretraining has not been fully understood, although it was
argued that regularization and better optimization play certain roles. This
paper provides another explanation for the effectiveness of pretraining, where
we show pretraining leads to a sparseness of hidden unit activation in the
resulting neural networks. The main reason is that the pretraining models can
be interpreted as an adaptive sparse coding. Compared to deep neural networks
with sigmoid functions, our experimental results on MNIST and Birdsong further
support this sparseness observation. | [
"cs.LG",
"cs.NE"
] |
In this paper, we introduce a novel interpreting framework that learns an
interpretable model based on an ontology-based sampling technique to explain
agnostic prediction models. Different from existing approaches, our algorithm
considers contextual correlation among words, described in domain knowledge
ontologies, to generate semantic explanations. To narrow down the search space
for explanations, which is a major problem of long and complicated text data,
we design a learnable anchor algorithm, to better extract explanations locally.
A set of regulations is further introduced, regarding combining learned
interpretable representations with anchors to generate comprehensible semantic
explanations. An extensive experiment conducted on two real-world datasets
shows that our approach generates more precise and insightful explanations
compared with baseline approaches. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Consistency training, which exploits both supervised and unsupervised
learning with different augmentations on images, is an effective method of
utilizing unlabeled data in a semi-supervised learning (SSL) manner. Here, we
present another version of the method with a Grad-CAM consistency loss, so it
can be utilized to train models with better generalization and adjustability.
We show that our method improves the baseline ResNet model by up to 1.44% and
by 0.31 $\pm$ 0.59 %p accuracy on average on the CIFAR-10 dataset. We conducted
an ablation study comparing against using only pseudo-labels for consistency
training. Also, we argue that our method can be adjusted to different
environments by targeting different units in the model. The code is available:
https://github.com/gimme1dollar/gradcam-consistency-semi-sup. | [
"cs.CV"
] |
Disentangled representation learning has recently attracted a significant
amount of attention, particularly in the field of image representation
learning. However, learning the disentangled representations behind a graph
remains largely unexplored, especially for the attributed graph with both node
and edge features. Disentanglement learning for graph generation has
substantial new challenges including 1) the lack of graph deconvolution
operations to jointly decode node and edge attributes; and 2) the difficulty in
enforcing the disentanglement among latent factors that respectively influence:
i) only nodes, ii) only edges, and iii) joint patterns between them. To address
these challenges, we propose a new disentanglement enhancement framework for
deep generative models for attributed graphs. In particular, a novel
variational objective is proposed to disentangle the above three types of
latent factors, with novel architecture for node and edge deconvolutions.
Moreover, within each type, individual-factor-wise disentanglement is further
enhanced, which is shown to be a generalization of the existing framework for
images. Qualitative and quantitative experiments on both synthetic and
real-world datasets demonstrate the effectiveness of the proposed model and its
extensions. | [
"cs.LG",
"stat.ML"
] |
Detecting objects such as cars and pedestrians in 3D plays an indispensable
role in autonomous driving. Existing approaches largely rely on expensive LiDAR
sensors for accurate depth information. While recently pseudo-LiDAR has been
introduced as a promising alternative, at a much lower cost based solely on
stereo images, there is still a notable performance gap. In this paper we
provide substantial advances to the pseudo-LiDAR framework through improvements
in stereo depth estimation. Concretely, we adapt the stereo network
architecture and loss function to be more aligned with accurate depth
estimation of faraway objects --- currently the primary weakness of
pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely
sparse LiDAR sensors, which alone provide insufficient information for 3D
detection, to de-bias our depth estimation. We propose a depth-propagation
algorithm, guided by the initial depth estimates, to diffuse these few exact
measurements across the entire depth map. We show on the KITTI object detection
benchmark that our combined approach yields substantial improvements in depth
estimation and stereo-based 3D object detection --- outperforming the previous
state-of-the-art detection accuracy for faraway objects by 40%. Our code is
available at https://github.com/mileyan/Pseudo_Lidar_V2. | [
"cs.CV"
] |
Contrastive divergence is a popular method of training energy-based models,
but is known to have difficulties with training stability. We propose an
adaptation to improve contrastive divergence training by scrutinizing a
gradient term that is difficult to calculate and is often left out for
convenience. We show that this gradient term is numerically significant and in
practice is important to avoid training instabilities, while being tractable to
estimate. We further highlight how data augmentation and multi-scale processing
can be used to improve model robustness and generation quality. Finally, we
empirically evaluate stability of model architectures and show improved
performance on a host of benchmarks and use cases, such as image generation, OOD
detection, and compositional generation. | [
"cs.LG"
] |
A hallucination-free and computationally efficient algorithm for enhancing
the resolution of brain MRI images is demonstrated. | [
"cs.CV"
] |
An object detector performs suboptimally when applied to image data taken
from a viewpoint different from the one with which it was trained. In this
paper, we present a viewpoint adaptation algorithm that allows a trained
single-view object detector to be adapted to a new, distinct viewpoint. We
first illustrate how a feature space transformation can be inferred from a
known homography between the source and target viewpoints. Second, we show that
a variety of trained classifiers can be modified to behave as if that
transformation were applied to each testing instance. The proposed algorithm is
evaluated on a person detection task using images from the PETS 2007 and CAVIAR
datasets, as well as from a new synthetic multi-view person detection dataset.
It yields substantial performance improvements when adapting single-view person
detectors to new viewpoints, and simultaneously reduces computational
complexity. This work has the potential to improve detection performance for
cameras viewing objects from arbitrary viewpoints, while simplifying data
collection and feature extraction. | [
"cs.CV"
] |
We explore two techniques which use color to make sense of statistical text
models. One method uses in-text annotations to illustrate a model's view of
particular tokens in particular documents. Another uses a high-level,
"words-as-pixels" graphic to display an entire corpus. Together, these methods
offer both zoomed-in and zoomed-out perspectives into a model's understanding
of text. We show how these interconnected methods help diagnose a classifier's
poor performance on Twitter slang, and make sense of a topic model on
historical political texts. | [
"stat.ML",
"cs.CL",
"cs.LG"
] |
Real-life man-made objects often exhibit strong and easily-identifiable
structure, as a direct result of their design or their intended functionality.
Structure typically appears in the form of individual parts and their
arrangement. Knowing about object structure can be an important cue for object
recognition and scene understanding - a key goal for various AR and robotics
applications. However, commodity RGB-D sensors used in these scenarios only
produce raw, unorganized point clouds, without structural information about the
captured scene. Moreover, the generated data is commonly partial and
susceptible to artifacts and noise, which makes inferring the structure of
scanned objects challenging. In this paper, we organize large shape collections
into parameterized shape templates to capture the underlying structure of the
objects. The templates allow us to transfer the structural information onto new
objects and incomplete scans. We employ a deep neural network that matches the
partial scan with one of the shape templates, then match and fit it to complete
and detailed models from the collection. This allows us to faithfully label its
parts and to guide the reconstruction of the scanned object. We showcase the
effectiveness of our method by comparing it to other state-of-the-art
approaches. | [
"cs.CV"
] |
Single image dehazing is an ill-posed problem that has recently drawn
important attention. Despite the significant increase in interest shown for
dehazing over the past few years, the validation of the dehazing methods
remains largely unsatisfactory, due to the lack of pairs of real hazy and
corresponding haze-free reference images. To address this limitation, we
introduce Dense-Haze - a novel dehazing dataset. Characterized by dense and
homogeneous hazy scenes, Dense-Haze contains 33 pairs of real hazy and
corresponding haze-free images of various outdoor scenes. The hazy scenes have
been recorded by introducing real haze, generated by professional haze
machines. The hazy and haze-free corresponding scenes contain the same visual
content captured under the same illumination parameters. Dense-Haze dataset
aims to push significantly the state-of-the-art in single-image dehazing by
promoting robust methods for real and various hazy scenes. We also provide a
comprehensive qualitative and quantitative evaluation of state-of-the-art
single image dehazing techniques based on the Dense-Haze dataset. Not
surprisingly, our study reveals that the existing dehazing techniques perform
poorly for dense homogeneous hazy scenes and that there is still much room for
improvement. | [
"cs.CV"
] |
Recently, Visual Question Answering (VQA) has emerged as one of the most
significant tasks in multimodal learning as it requires understanding both
visual and textual modalities. Existing methods mainly rely on extracting image
and question features to learn their joint feature embedding via multimodal
fusion or attention mechanism. Some recent studies utilize external
VQA-independent models to detect candidate entities or attributes in images,
which serve as semantic knowledge complementary to the VQA task. However, these
candidate entities or attributes might be unrelated to the VQA task and have
limited semantic capacities. To better utilize semantic knowledge in images, we
propose a novel framework to learn visual relation facts for VQA. Specifically,
we build up a Relation-VQA (R-VQA) dataset based on the Visual Genome dataset
via a semantic similarity module, in which each sample consists of an image, a
corresponding question, a correct answer and a supporting relation fact. A
well-defined relation detector is then adopted to predict visual
question-related relation facts. We further propose a multi-step attention
model composed of visual attention and semantic attention sequentially to
extract related visual knowledge and semantic knowledge. We conduct
comprehensive experiments on the two benchmark datasets, demonstrating that our
model achieves state-of-the-art performance and verifying the benefit of
considering visual relation facts. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"cs.MM"
] |
Unsupervised pretraining has recently proven beneficial for computer vision
tasks, including object detection. However, previous self-supervised approaches
are not designed to handle a key aspect of detection: localizing objects. Here,
we present DETReg, an unsupervised pretraining approach for object DEtection
with TRansformers using Region priors. Motivated by the two tasks underlying
object detection: localization and categorization, we combine two complementary
signals for self-supervision. For an object localization signal, we use pseudo
ground truth object bounding boxes from an off-the-shelf unsupervised region
proposal method, Selective Search, which does not require training data and can
detect objects at a high recall rate and very low precision. The categorization
signal comes from an object embedding loss that encourages invariant object
representations, from which the object category can be inferred. We show how to
combine these two signals to train the Deformable DETR detection architecture
from large amounts of unlabeled data. DETReg improves the performance over
competitive baselines and previous self-supervised methods on standard
benchmarks like MS COCO and PASCAL VOC. DETReg also outperforms previous
supervised and unsupervised baseline approaches in the low-data regime when trained
with only 1%, 2%, 5%, and 10% of the labeled data on MS COCO. For code and
pretrained models, visit the project page at https://amirbar.net/detreg | [
"cs.CV"
] |
Small satellite constellations provide daily global coverage of the earth's
landmass, but image enrichment relies on automating key tasks like change
detection or feature searches. For example, to extract text annotations from
raw pixels requires two dependent machine learning models, one to analyze the
overhead image and the other to generate a descriptive caption. We evaluate
seven models on the previously largest benchmark for satellite image captions.
We extend the labeled image samples five-fold, then augment, correct and prune
the vocabulary to approach a rough min-max (minimum word, maximum description).
This outcome compares favorably to previous work with large pre-trained image
models but offers a hundred-fold reduction in model size without sacrificing
overall accuracy (when measured with log entropy loss). These smaller models
provide new deployment opportunities, particularly when pushed to edge
processors, on-board satellites, or distributed ground stations. To quantify a
caption's descriptiveness, we introduce a novel multi-class confusion or error
matrix to score both human-labeled test data and never-labeled images that
include bounding box detection but lack full sentence captions. This work
suggests future captioning strategies, particularly ones that can enrich the
class coverage beyond land use applications and that lessen color-centered and
adjacency adjectives ("green", "near", "between", etc.). Many modern language
transformers present novel and exploitable models with world knowledge gleaned
from training from their vast online corpus. One interesting, but easy example
might learn the word association between wind and waves, thus enriching a beach
scene with more than just color descriptions that otherwise might be accessed
from raw pixels without text annotation. | [
"cs.CV",
"cs.CL",
"cs.LG",
"stat.ML"
] |
In this paper, we propose an original object detection methodology applied to
the Global Wheat Head Detection (GWHD) Dataset. We explored two major object
detection architectures, Faster R-CNN and EfficientDet, in order to design a
novel and robust wheat head detection model. We emphasize optimizing the
performance of our proposed final architectures. Furthermore, we carried out an
extensive exploratory data analysis and adapted the best data augmentation
techniques to our context. We use semi-supervised learning to boost the
previous supervised object detection models. Moreover, we put much effort into
ensembling to achieve higher performance. Finally, we use specific
post-processing techniques to optimize our wheat head detection results. Our
results have been submitted to solve a research challenge launched on the GWHD
Dataset which is led by nine research institutes from seven countries. Our
proposed method was ranked within the top 6% in the above-mentioned challenge. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
In this work, we present a deep learning-based approach for image tampering
localization fusion. This approach is designed to combine the outcomes of
multiple image forensics algorithms and provides a fused tampering localization
map, which requires no expert knowledge and is easier to interpret by end
users. Our fusion framework includes a set of five individual tampering
localization methods for splicing localization on JPEG images. The proposed
deep learning fusion model is an adapted architecture, initially proposed for
the image restoration task, that performs multiple operations in parallel,
weighted by an attention mechanism to enable the selection of proper operations
depending on the input signals. This weighting process can be very beneficial
for cases where the input signal is very diverse, as in our case where the
output signals of multiple image forensics algorithms are combined. Evaluation
on three publicly available forensics datasets demonstrates that the
performance of the proposed approach is competitive, outperforming the
individual forensics techniques as well as another recently proposed fusion
framework in the majority of cases. | [
"cs.CV"
] |
We consider the problem of knowledge transfer when an agent is facing a
series of Reinforcement Learning (RL) tasks. We introduce a novel metric
between Markov Decision Processes (MDPs) and establish that close MDPs have
close optimal value functions. Formally, the optimal value functions are
Lipschitz continuous with respect to the tasks space. These theoretical results
lead us to a value-transfer method for Lifelong RL, which we use to build a
PAC-MDP algorithm with improved convergence rate. Further, we show the method
to experience no negative transfer with high probability. We illustrate the
benefits of the method in Lifelong RL experiments. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Offline (or batch) reinforcement learning (RL) algorithms seek to learn an
optimal policy from a fixed dataset without active data collection. Based on
the composition of the offline dataset, two main categories of methods are
used: imitation learning which is suitable for expert datasets and vanilla
offline RL which often requires uniform coverage datasets. From a practical
standpoint, datasets often deviate from these two extremes and the exact data
composition is usually unknown a priori. To bridge this gap, we present a new
offline RL framework that smoothly interpolates between the two extremes of
data composition, hence unifying imitation learning and vanilla offline RL. The
new framework is centered around a weak version of the concentrability
coefficient that measures the deviation from the behavior policy to the expert
policy alone.
Under this new framework, we further investigate the question on algorithm
design: can one develop an algorithm that achieves a minimax optimal rate and
also adapts to unknown data composition? To address this question, we consider
a lower confidence bound (LCB) algorithm developed based on pessimism in the
face of uncertainty in offline RL. We study finite-sample properties of LCB as
well as information-theoretic limits in multi-armed bandits, contextual
bandits, and Markov decision processes (MDPs). Our analysis reveals surprising
facts about optimality rates. In particular, in all three settings, LCB
achieves a faster rate of $1/N$ for nearly-expert datasets compared to the
usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in
the batch dataset. In the case of contextual bandits with at least two
contexts, we prove that LCB is adaptively optimal for the entire data
composition range, achieving a smooth transition from imitation learning to
offline RL. We further show that LCB is almost adaptively optimal in MDPs. | [
"cs.LG",
"cs.AI",
"math.OC",
"math.ST",
"stat.ML",
"stat.TH"
] |
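For the multi-armed bandit case, the LCB rule admits a very short sketch; the Hoeffding-style confidence width below is illustrative, and the paper's exact constants differ:

import numpy as np

def lcb_policy(rewards_by_arm, delta=0.1):
    # Pick the arm maximising a lower confidence bound on its mean reward,
    # estimated from a fixed offline batch (pessimism under uncertainty).
    best_arm, best_lcb = None, -np.inf
    for a, r in enumerate(rewards_by_arm):
        n = max(len(r), 1)
        width = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
        lcb = (np.mean(r) if len(r) else 0.0) - width
        if lcb > best_lcb:
            best_arm, best_lcb = a, lcb
    return best_arm

batch = [np.array([0.9, 0.8, 1.0]), np.array([1.2]), np.array([])]
print(lcb_policy(batch))  # rarely-seen arms are penalised by the wide bound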
During the last few years, significant attention has been paid to the
stochastic training of artificial neural networks, which is known as an
effective regularization approach that helps improve the generalization
capability of trained models. In this work, the method of modified equations is
applied to show that the residual network and its variants with noise injection
can be regarded as weak approximations of stochastic differential equations.
Such observations enable us to bridge the stochastic training processes with
the optimal control of backward Kolmogorov's equations. This not only offers a
novel perspective on the effects of regularization from the loss landscape
viewpoint but also sheds light on the design of more reliable and efficient
stochastic training strategies. As an example, we propose a new way to utilize
Bernoulli dropout within the plain residual network architecture and conduct
experiments on a real-world image classification task to substantiate our
theoretical findings. | [
"cs.LG",
"stat.ML",
"49J20, 65C30, 62M45"
] |
Heterogeneous graph representation learning aims to learn low-dimensional
vector representations of different types of entities and relations to empower
downstream tasks. Existing methods either capture semantic relationships but
indirectly leverage node/edge attributes in a complex way, or leverage
node/edge attributes directly without taking semantic relationships into
account. When involving multiple convolution operations, they also have poor
scalability. To overcome these limitations, this paper proposes a flexible and
efficient Graph information propagation Network (GripNet) framework.
Specifically, we introduce a new supergraph data structure consisting of
supervertices and superedges. A supervertex is a semantically-coherent
subgraph. A superedge defines an information propagation path between two
supervertices. GripNet learns new representations for the supervertex of
interest by propagating information along the defined path using multiple
layers. We construct multiple large-scale graphs and evaluate GripNet against
competing methods to show its superiority in link prediction, node
classification, and data integration. | [
"cs.LG"
] |
Surgical tool segmentation in endoscopic images is an important problem: it
is a crucial step towards full instrument pose estimation and it is used for
integration of pre- and intra-operative images into the endoscopic view. While
many recent approaches based on convolutional neural networks have shown great
results, a key barrier to progress lies in the acquisition of a large number of
manually-annotated images which is necessary for an algorithm to generalize and
work well in diverse surgical scenarios. Unlike the surgical image data itself,
annotations are difficult to acquire and may be of variable quality. On the
other hand, synthetic annotations can be automatically generated by using the
forward kinematic model of the robot and CAD models of the tools, projecting
them onto an image plane. Unfortunately, this model is very inaccurate and
cannot be
used for supervised learning of image segmentation models. Since generated
annotations will not directly correspond to endoscopic images due to errors, we
formulate the problem as an unpaired image-to-image translation where the goal
is to learn the mapping between an input endoscopic image and a corresponding
annotation using an adversarial model. Our approach allows to train image
segmentation models without the need to acquire expensive annotations and can
potentially exploit large unlabeled endoscopic image collections outside the
annotated distributions of image/annotation data. We test our proposed method
on Endovis 2017 challenge dataset and show that it is competitive with
supervised segmentation methods. | [
"cs.CV"
] |
Combining Generative Adversarial Networks (GANs) with encoders that learn to
encode data points has shown promising results in learning data representations
in an unsupervised way. We propose a framework that combines an encoder and a
generator to learn disentangled representations which encode meaningful
information about the data distribution without the need for any labels. While
current approaches focus mostly on the generative aspects of GANs, our
framework can be used to perform inference on both real and generated data
points. Experiments on several data sets show that the encoder learns
interpretable, disentangled representations which encode descriptive properties
and can be used to sample images that exhibit specific characteristics. | [
"cs.CV",
"cs.AI",
"cs.NE"
] |
Deep Convolutional Neural Networks (DCNNs) are currently the method of choice
both for generative, as well as for discriminative learning in computer vision
and machine learning. The success of DCNNs can be attributed to the careful
selection of their building blocks (e.g., residual blocks, rectifiers,
sophisticated normalization schemes, to mention but a few). In this paper, we
propose $\Pi$-Nets, a new class of function approximators based on polynomial
expansions. $\Pi$-Nets are polynomial neural networks, i.e., the output is a
high-order polynomial of the input. The unknown parameters, which are naturally
represented by high-order tensors, are estimated through a collective tensor
factorization with factors sharing. We introduce three tensor decompositions
that significantly reduce the number of parameters and show how they can be
efficiently implemented by hierarchical neural networks. We empirically
demonstrate that $\Pi$-Nets are very expressive and they even produce good
results without the use of non-linear activation functions in a large battery
of tasks and signals, i.e., images, graphs, and audio. When used in conjunction
with activation functions, $\Pi$-Nets produce state-of-the-art results in three
challenging tasks, i.e. image generation, face verification and 3D mesh
representation learning. The source code is available at
\url{https://github.com/grigorisg9gr/polynomial_nets}. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
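A minimal sketch of one way such a polynomial network can be realised (a CCP-style recursion in which each Hadamard product raises the polynomial degree of the output in the input); the layer sizes and the particular recursion are illustrative assumptions, and the paper studies several decompositions:

import torch
import torch.nn as nn

class PiNetSketch(nn.Module):
    # Degree-N polynomial of the input z, with no activation functions.
    def __init__(self, dim, hidden, degree=3):
        super().__init__()
        self.first = nn.Linear(dim, hidden)
        self.layers = nn.ModuleList(
            [nn.Linear(dim, hidden) for _ in range(degree - 1)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, z):
        x = self.first(z)
        for layer in self.layers:
            x = layer(z) * x + x  # Hadamard product: degree + 1, plus skip
        return self.out(x)

net = PiNetSketch(dim=8, hidden=32, degree=3)
print(net(torch.randn(4, 8)).shape)  # torch.Size([4, 1])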
Data-driven approaches for edge detection have proven effective and achieve
top results on modern benchmarks. However, all current data-driven edge
detectors require manual supervision for training in the form of hand-labeled
region segments or object boundaries. Specifically, human annotators mark
semantically meaningful edges which are subsequently used for training. Is this
form of strong, high-level supervision actually necessary to learn to
accurately detect edges? In this work we present a simple yet effective
approach for training edge detectors without human supervision. To this end we
utilize motion, and more specifically, the only input to our method is noisy
semi-dense matches between frames. We begin with only a rudimentary knowledge
of edges (in the form of image gradients), and alternate between improving
motion estimation and edge detection in turn. Using a large corpus of video
data, we show that edge detectors trained using our unsupervised scheme
approach the performance of the same methods trained with full supervision
(within 3-5%). Finally, we show that when using a deep network for the edge
detector, our approach provides a novel pre-training scheme for object
detection. | [
"cs.CV"
] |
Inverse reinforcement learning (IRL) infers a reward function from
demonstrations, allowing for policy improvement and generalization. However,
despite much recent interest in IRL, little work has been done to understand
the minimum set of demonstrations needed to teach a specific sequential
decision-making task. We formalize the problem of finding maximally informative
demonstrations for IRL as a machine teaching problem where the goal is to find
the minimum number of demonstrations needed to specify the reward equivalence
class of the demonstrator. We extend previous work on algorithmic teaching for
sequential decision-making tasks by showing a reduction to the set cover
problem which enables an efficient approximation algorithm for determining the
set of maximally-informative demonstrations. We apply our proposed machine
teaching algorithm to two novel applications: providing a lower bound on the
number of queries needed to learn a policy using active IRL and developing a
novel IRL algorithm that can learn more efficiently from informative
demonstrations than a standard IRL approach. | [
"cs.LG",
"stat.ML"
] |
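Since the reduction above is to set cover, the standard greedy approximation conveys the flavour of the resulting algorithm; the framing of demonstrations as sets of covered reward constraints, and all names below, are illustrative:

def greedy_set_cover(universe, candidates):
    # candidates: demonstration id -> set of constraints it covers.
    # Greedy gives the classic logarithmic approximation to set cover.
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda d: len(candidates[d] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("constraints not coverable by these demonstrations")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

constraints = {1, 2, 3, 4, 5}
demos = {"d1": {1, 2, 3}, "d2": {3, 4}, "d3": {4, 5}}
print(greedy_set_cover(constraints, demos))  # e.g. ['d1', 'd3']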
We present two new metrics for evaluating generative models in the
class-conditional image generation setting. These metrics are obtained by
generalizing the two most popular unconditional metrics: the Inception Score
(IS) and the Fréchet Inception Distance (FID). A theoretical analysis shows
the motivation behind each proposed metric and links the novel metrics to their
unconditional counterparts. The link takes the form of a product in the case of
IS or an upper bound in the FID case. We provide an extensive empirical
evaluation, comparing the metrics to their unconditional variants and to other
metrics, and utilize them to analyze existing generative models, thus providing
additional insights about their performance, from unlearned classes to mode
collapse. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
The Arcade Learning Environment (ALE) is a popular platform for evaluating
reinforcement learning agents. Much of the appeal comes from the fact that
Atari games demonstrate aspects of competency we expect from an intelligent
agent and are not biased toward any particular solution approach. The challenge
of the ALE includes (1) the representation learning problem of extracting
pertinent information from raw pixels, and (2) the behavioural learning problem
of leveraging complex, delayed associations between actions and rewards. Often,
the research questions we are interested in pertain more to the latter, but the
representation learning problem adds significant computational expense. We
introduce MinAtar, short for miniature Atari, a new set of environments that
capture the general mechanics of specific Atari games while simplifying the
representational complexity to focus more on the behavioural challenges.
MinAtar consists of analogues of five Atari games: Seaquest, Breakout, Asterix,
Freeway and Space Invaders. Each MinAtar environment provides the agent with a
10x10xn binary state representation. Each game plays out on a 10x10 grid with n
channels corresponding to game-specific objects, such as ball, paddle and brick
in the game Breakout. To investigate the behavioural challenges posed by
MinAtar, we evaluated a smaller version of the DQN architecture as well as
online actor-critic with eligibility traces. With the representation learning
problem simplified, we can perform experiments with significantly less
computational expense. In our experiments, we use the saved compute time to
perform step-size parameter sweeps and more runs than is typical for the ALE.
Experiments like this improve reproducibility, and allow us to draw more
confident conclusions. We hope that MinAtar can allow researchers to thoroughly
investigate behavioural challenges similar to those inherent in the ALE. | [
"cs.LG",
"cs.AI"
] |
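To give a feel for the scale of the representation, a minimal Q-network over a 10x10xn binary state might look like the following; the channel count, hidden sizing, and action count are our assumptions, loosely in the spirit of the smaller DQN mentioned above:

import torch
import torch.nn as nn

class MinAtarQNet(nn.Module):
    # One small conv layer and a linear head, sized for 10x10 inputs.
    def __init__(self, in_channels, num_actions):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 16, kernel_size=3)  # -> (16, 8, 8)
        self.fc = nn.Linear(16 * 8 * 8, num_actions)

    def forward(self, state):  # state: (batch, n, 10, 10), zeros and ones
        h = torch.relu(self.conv(state))
        return self.fc(h.flatten(start_dim=1))

q = MinAtarQNet(in_channels=4, num_actions=6)
print(q(torch.zeros(1, 4, 10, 10)).shape)  # torch.Size([1, 6])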
Engineering simulations for analysis of structural and fluid systems require
information of contacts between various 3-D surfaces of the geometry to
accurately model the physics between them. In machine learning applications,
3-D surfaces are most suitably represented with point clouds or meshes and
learning representations of interacting geometries form point-based
representations is challenging. The objective of this work is to introduce a
machine learning algorithm, ActivationNet, that can learn from point clouds or
meshes of interacting 3-D surfaces and predict the quality of contact between
these surfaces. The ActivationNet generates activation states from point-based
representation of surfaces using a multi-dimensional binning approach. The
activation states are further used to contact quality between surfaces using
deep neural networks. The performance of our model is demonstrated using
several experiments, including tests on interacting surfaces extracted from
engineering geometries. In all the experiments presented in this paper, the
contact quality predictions of ActivationNet agree well with the expectations. | [
"cs.LG"
] |
Smart contracts hold digital coins worth billions of dollars, and their
security issues have drawn extensive attention in the past years. Towards smart
contract
vulnerability detection, conventional methods heavily rely on fixed expert
rules, leading to low accuracy and poor scalability. Recent deep learning
approaches alleviate this issue but fail to encode useful expert knowledge. In
this paper, we explore combining deep learning with expert patterns in an
explainable fashion. Specifically, we develop automatic tools to extract expert
patterns from the source code. We then cast the code into a semantic graph to
extract deep graph features. Thereafter, the global graph feature and local
expert patterns are fused to cooperate and approach the final prediction, while
yielding their interpretable weights. Experiments are conducted on all
available smart contracts with source code in two platforms, Ethereum and VNT
Chain. Empirically, our system significantly outperforms state-of-the-art
methods. Our code is released. | [
"cs.LG",
"cs.PL"
] |
Siamese network based trackers formulate the visual tracking task as a
similarity matching problem. Almost all popular Siamese trackers realize the
similarity learning via convolutional feature cross-correlation between a
target branch and a search branch. However, since the size of target feature
region needs to be pre-fixed, these cross-correlation base methods suffer from
either reserving much adverse background information or missing a great deal of
foreground information. Moreover, the global matching between the target and
search region also largely neglects the target structure and part-level
information.
In this paper, to solve the above issues, we propose a simple target-aware
Siamese graph attention network for general object tracking. We propose to
establish part-to-part correspondence between the target and the search region
with a complete bipartite graph, and apply the graph attention mechanism to
propagate target information from the template feature to the search feature.
Further, instead of using the pre-fixed region cropping for
template-feature-area selection, we investigate a target-aware area selection
mechanism to fit the size and aspect ratio variations of different objects.
Experiments on challenging benchmarks including GOT-10k, UAV123, OTB-100 and
LaSOT demonstrate that the proposed SiamGAT outperforms many state-of-the-art
trackers and achieves leading performance. Code is available at:
https://git.io/SiamGAT | [
"cs.CV"
] |
Few-shot object detection (FSOD) aims at learning a detector that can fast
adapt to previously unseen objects with scarce annotated examples, which is
challenging and demanding. Existing methods solve this problem by performing
subtasks of classification and localization utilizing a shared component (e.g.,
RoI head) in the detector, yet few of them take the distinct preferences of two
subtasks towards feature embedding into consideration. In this paper, we
carefully analyze the characteristics of FSOD, and present that a general
few-shot detector should consider the explicit decomposition of two subtasks,
as well as leveraging information from both of them to enhance feature
representations. To this end, we propose a simple yet effective Adaptive
Fully-Dual Network (AFD-Net). Specifically, we extend Faster R-CNN by
introducing Dual Query Encoder and Dual Attention Generator for separate
feature extraction, and Dual Aggregator for separate model reweighting.
Spontaneously, separate state estimation is achieved by the R-CNN detector.
Besides, for the acquisition of enhanced feature representations, we further
introduce Adaptive Fusion Mechanism to adaptively perform feature fusion in
different subtasks. Extensive experiments on PASCAL VOC and MS COCO in various
settings show that, our method achieves new state-of-the-art performance by a
large margin, demonstrating its effectiveness and generalization ability. | [
"cs.CV"
] |
We describe the multi-GPU gradient boosting algorithm implemented in the
XGBoost library (https://github.com/dmlc/xgboost). Our algorithm allows fast,
scalable training on multi-GPU systems with all of the features of the XGBoost
library. We employ data compression techniques to minimise the usage of scarce
GPU memory while still allowing highly efficient implementation. Using our
algorithm we show that it is possible to process 115 million training instances
in under three minutes on a publicly available cloud computing instance. The
algorithm is implemented using end-to-end GPU parallelism, with prediction,
gradient calculation, feature quantisation, decision tree construction and
evaluation phases all computed on device. | [
"cs.LG",
"stat.ML"
] |
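As a single-GPU usage illustration (parameter names vary across XGBoost releases: older versions use tree_method="gpu_hist" as below, newer ones use tree_method="hist" with device="cuda"; multi-GPU training additionally needs a distributed backend such as Dask):

import numpy as np
import xgboost as xgb

X = np.random.rand(100000, 20).astype(np.float32)
y = (X[:, 0] > 0.5).astype(np.float32)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",  # GPU-accelerated histogram algorithm
    "max_depth": 6,
}
booster = xgb.train(params, dtrain, num_boost_round=100)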
Hierarchical Reinforcement Learning (HRL) is a promising approach to solving
long-horizon problems with sparse and delayed rewards. Many existing HRL
algorithms either use pre-trained low-level skills that are unadaptable, or
require domain-specific information to define low-level rewards. In this paper,
we aim to adapt low-level skills to downstream tasks while maintaining the
generality of reward design. We propose an HRL framework which sets auxiliary
rewards for low-level skill training based on the advantage function of the
high-level policy. This auxiliary reward enables efficient, simultaneous
learning of the high-level policy and low-level skills without using
task-specific knowledge. In addition, we also theoretically prove that
optimizing low-level skills with this auxiliary reward will increase the task
return for the joint policy. Experimental results show that our algorithm
dramatically outperforms other state-of-the-art HRL methods in Mujoco domains.
We also find both low-level and high-level policies trained by our algorithm
transferable. | [
"cs.LG",
"cs.AI"
] |
Superpixel-based Higher-order Conditional random fields (SP-HO-CRFs) are
known for their effectiveness in enforcing both short and long spatial
contiguity for pixelwise labelling in computer vision. However, their
higher-order potentials are usually too complex to learn and often incur a high
computational cost in performing inference. We propose a new approximation
approach to SP-HO-CRFs that resolves these problems. Our approach is a
multi-layer CRF framework that inherits the simplicity from pairwise CRFs by
formulating both the higher-order and pairwise cues into the same pairwise
potentials in the first layer. Essentially, this approach provides accuracy
enhancement on the basis of pairwise CRFs without training by reusing their
pre-trained parameters and/or weights. The proposed multi-layer approach
performs especially well in delineating the boundary details (borders) of
object categories such as "trees" and "bushes". Multiple sets of experiments
conducted on dataset MSRC-21 and PASCAL VOC 2012 validate the effectiveness and
efficiency of the proposed methods. | [
"cs.CV"
] |
Structural features are important features in a geometrical graph. Although
there are some covariance-based correlation analyses of features, there is no
relevant research on structural feature correlation analysis with graph neural
networks. In this paper, we introduce graph feature to feature (Fea2Fea)
prediction pipelines in a low dimensional space to explore some preliminary
results on structural feature correlation, based on graph neural networks. The
results show that there exists high correlation between some of the structural
features. An irredundant combination of structural features with the initial
node features, filtered by a graph neural network, improves classification
accuracy in some graph-based tasks. We compare different concatenation methods
for connecting feature embeddings and show that the simplest is the best. We
generalize to synthetic geometric graphs and confirm the results on the
prediction difficulty between structural features. | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
In this work we propose 3D-FFS, a novel approach to make sensor fusion based
3D object detection networks significantly faster using a class of
computationally inexpensive heuristics. Existing sensor fusion based networks
generate 3D region proposals by leveraging inferences from 2D object detectors.
However, as images have no depth information, these networks rely on extracting
semantic features of points from the entire scene to locate the object. By
leveraging aggregated intrinsic properties (e.g. point density) of the 3D point
cloud data, 3D-FFS can substantially constrain the 3D search space and thereby
significantly reduce training time, inference time and memory consumption
without sacrificing accuracy. To demonstrate the efficacy of 3D-FFS, we have
integrated it with Frustum ConvNet (F-ConvNet), a prominent sensor fusion based
3D object detection model. We assess the performance of 3D-FFS on the KITTI
dataset. Compared to F-ConvNet, we achieve improvements in training and
inference times of up to 62.84% and 56.46%, respectively, while reducing
memory usage by up to 58.53%. Additionally, we achieve 0.59%, 2.03% and 3.34%
improvements in accuracy for the Car, Pedestrian and Cyclist classes,
respectively. 3D-FFS shows great promise in domains with limited computing
power, such as autonomous vehicles, drones and robotics, where LiDAR-camera
based sensor fusion perception systems are widely used. | [
"cs.CV"
] |
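The sketch below illustrates the general flavor of such density heuristics,
not the paper's actual algorithm: points in a frustum are histogrammed along
depth, and the search space is constrained to the densest contiguous slab. Bin
size and threshold are made-up parameters.

```python
import numpy as np

def constrain_search_space(points, axis=2, bin_size=0.5, min_density=30):
    """Toy density heuristic in the spirit of 3D-FFS: keep only the depth
    range whose point-count histogram exceeds a density threshold, shrinking
    the 3D search space before region-proposal generation."""
    coords = points[:, axis]
    bins = np.arange(coords.min(), coords.max() + bin_size, bin_size)
    counts, edges = np.histogram(coords, bins=bins)
    dense = np.flatnonzero(counts >= min_density)
    if dense.size == 0:
        return points                              # fall back to the full frustum
    lo, hi = edges[dense[0]], edges[dense[-1] + 1]
    return points[(coords >= lo) & (coords <= hi)]

# Toy usage: 1000 random frustum points, depth in [0, 20) meters.
pts = np.random.rand(1000, 3) * [5.0, 5.0, 20.0]
print(constrain_search_space(pts).shape)
```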
Surface defect detection plays an increasingly important role in the
manufacturing industry to guarantee product quality. Many deep learning
methods have been widely used in surface defect detection tasks and have been
proven to perform well in defect classification and localization. However, deep
learning-based detection methods often require plenty of training data, which
makes them hard to apply in real industrial scenarios, since the distribution
of defect categories is often imbalanced. In other words, common defect classes
have many samples while rare defect classes have extremely few, and it is
difficult for these methods to detect rare defect classes well. To address this
imbalanced-distribution problem, in this paper we propose TL-SDD: a novel
Transfer Learning-based method for Surface Defect Detection. First, we adopt a
two-phase training scheme to transfer knowledge from common defect classes to
rare defect classes. Second, we propose a novel Metric-based Surface Defect
Detection (M-SDD) model with three modules: (1) a feature extraction module,
which uses feature fusion to combine high-level semantic information with
low-level structural information; (2) a feature reweighting module, which
transforms examples into reweighting vectors that indicate the importance of
features; and (3) a distance metric module, which learns a metric space in
which defects are classified by computing distances to the representation of
each category. Finally, we validate the performance of our proposed method on a
real dataset of surface defects on aluminum profiles. Compared to the baseline
methods, the performance of our proposed method improves by up to 11.98% for
rare defect classes. | [
"cs.CV",
"cs.AI"
] |
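As a sketch of the distance metric module's core idea (in the spirit of
prototypical networks; the function and variable names are ours, not the
paper's), defects can be classified by distance to per-class prototype
embeddings:

```python
import torch
import torch.nn.functional as F

def prototype_logits(query_emb, support_emb, support_labels, n_classes):
    """Classify a defect embedding by its negative squared Euclidean distance
    to per-class prototypes (mean support embeddings): closer prototype ->
    larger logit."""
    protos = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                          # one prototype per defect class
    dists = torch.cdist(query_emb, protos)      # pairwise Euclidean distances
    return -dists.pow(2)

# Toy usage with random embeddings; every class appears in the support set.
support = torch.randn(20, 64)
labels = torch.arange(20) % 4
query = torch.randn(5, 64)
probs = F.softmax(prototype_logits(query, support, labels, 4), dim=-1)
```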
We consider a general class of non-linear Bellman equations. These open up a
design space of algorithms with interesting properties, which offers two
potential advantages. First, we can perhaps better model natural phenomena. For
instance, hyperbolic discounting has been proposed as a mathematical model that
matches human and animal data well, and can therefore be used to explain
preference orderings. We present a different mathematical model that matches
the same data, but that makes very different predictions under other
circumstances. Second, the larger design space can perhaps lead to algorithms
that perform better, similar to how discount factors are often used in practice
even when the true objective is undiscounted. We show that many of the
resulting Bellman operators still converge to a fixed point, and therefore that
the resulting algorithms are reasonable and inherit many beneficial properties
of their linear counterparts. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
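One illustrative instance of such a non-linear Bellman operator, written in our
own notation rather than the paper's, wraps the bootstrap target in a function
g:

```latex
% A non-linear Bellman operator: g is applied to the bootstrap value
% before discounting.
\[
  (\mathcal{T}_g q)(s,a) \;=\; r(s,a) \;+\; \gamma\,
    \mathbb{E}_{s' \sim p(\cdot \mid s,a)}
    \Big[\, g\big( \max_{a'} q(s',a') \big) \Big].
\]
% If g is a non-expansion, i.e. |g(x) - g(y)| <= |x - y| for all x, y, then
% T_g is still a gamma-contraction in the sup norm, so it admits a unique
% fixed point by the Banach fixed-point theorem -- consistent with the
% convergence claim above.
```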
A simple convolutional neural network was able to win the ISISPA color
constancy competition. A partial reimplementation of the neural architecture
of (Bianco, 2017) would have shown even better results in this setup. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
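For concreteness, a network of roughly this simplicity could look like the
sketch below, which regresses a unit-norm RGB illuminant from an image; the
layer sizes and structure are our assumptions, not the competition entry.

```python
import torch
import torch.nn as nn

class TinyColorConstancyNet(nn.Module):
    """Minimal illustrative illuminant-estimation CNN: predict a global RGB
    illuminant from an image and normalize it to unit length."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)    # predicted RGB illuminant

    def forward(self, x):
        e = self.head(self.features(x).flatten(1))
        return e / e.norm(dim=-1, keepdim=True).clamp_min(1e-8)

pred = TinyColorConstancyNet()(torch.rand(1, 3, 64, 64))
```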
Image generation from scene description is a cornerstone technique for
controlled generation, which benefits applications such as content creation
and image editing. In this work, we aim to synthesize images from
scene description with retrieved patches as reference. We propose a
differentiable retrieval module, with which we can (1) make the entire
pipeline end-to-end trainable, enabling the learning of better feature
embeddings for retrieval; and (2) encourage the selection of mutually
compatible patches with additional objective functions. We conduct extensive
quantitative and qualitative experiments to demonstrate that the proposed
method can generate realistic and diverse images, where the retrieved patches
are reasonable and mutually compatible. | [
"cs.CV"
] |
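A common way to make retrieval differentiable is a Gumbel-softmax relaxation
over patch similarity scores; the sketch below is one such construction under
that assumption, not necessarily the paper's exact module.

```python
import torch
import torch.nn.functional as F

def differentiable_retrieve(query, patch_embs, tau=0.5):
    """Score patches by similarity to the query and select one via a
    straight-through Gumbel-softmax, so gradients flow back into both the
    query and the patch embeddings."""
    scores = query @ patch_embs.t()                          # similarity logits
    weights = F.gumbel_softmax(scores, tau=tau, hard=True)   # ~one-hot selection
    return weights @ patch_embs                              # selected patch embedding

# Toy usage: gradients reach the embedding bank through the retrieval step.
q = torch.randn(1, 128, requires_grad=True)
bank = torch.randn(50, 128, requires_grad=True)
picked = differentiable_retrieve(q, bank)
picked.sum().backward()
```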