text | label
---|---|
Face clustering is a promising method for annotating unlabeled face images.
Recent supervised approaches have greatly boosted face clustering accuracy;
however, their performance is still far from satisfactory. These methods can be
roughly divided into global-based and local-based ones. Global-based methods
suffer from the limited scale of training data, while local-based ones struggle
to grasp whole-graph structural information and usually take a long time for
inference. Previous approaches fail to tackle these two challenges
simultaneously. To address the dilemma of large-scale training and efficient
inference, we propose the STructure-AwaRe Face Clustering (STAR-FC) method.
Specifically, we design a structure-preserved subgraph sampling strategy to
exploit the power of large-scale training data, which can increase the training
data scale from ${10^{5}}$ to ${10^{7}}$. During inference, STAR-FC performs
efficient full-graph clustering in two steps: graph parsing and graph
refinement. The concept of node intimacy is introduced in the second step to
mine local structural information. STAR-FC achieves a pairwise F-score of 91.97
on partial MS1M within 310s, surpassing the state of the art. Furthermore, we
are the first to train on a very large-scale graph with 20M nodes, and we
achieve superior inference results on 12M testing data. Overall, as a simple
and effective method, the proposed STAR-FC provides a strong baseline for
large-scale face clustering. Code is available at
\url{https://sstzal.github.io/STAR-FC/}. | [
"cs.CV"
] |
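The node intimacy used in the graph refinement step above is only named in the abstract; the sketch below is a plausible common-neighbour instantiation in Python/NumPy. The function name, the normalisation by the smaller degree, and the 0.5 pruning threshold are illustrative assumptions, not the paper's definition.

```python
# Hypothetical node-intimacy scoring for graph refinement (assumed form).
import numpy as np

def node_intimacy(adj):
    """Score each edge (i, j) by the number of neighbours i and j share,
    normalised by the smaller of the two node degrees."""
    common = adj @ adj                        # common[i, j] = #shared neighbours
    deg = adj.sum(axis=1)
    denom = np.minimum.outer(deg, deg).clip(min=1)
    return (common / denom) * adj             # keep scores only on existing edges

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)

ni = node_intimacy(adj)
refined = (ni >= 0.5).astype(float) * adj     # refinement: drop weak edges
print(refined)
```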
This paper proposes a novel deep reinforcement learning architecture inspired
by previous tree-structured architectures, which were only usable in discrete
action spaces. Policy Prediction Network offers a way to improve sample
complexity and performance on continuous control problems in exchange for extra
computation at training time, but at no cost in computation at rollout time.
Our approach integrates a mix of model-free and model-based reinforcement
learning. Policy Prediction Network is the first to introduce implicit
model-based learning to policy gradient algorithms for continuous action
spaces, made possible by an empirically justified clipping scheme. Our
experiments focus on the MuJoCo environments so that they can be compared with
similar work done in this area. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Deep image translation methods have recently shown excellent results,
outputting high-quality images covering multiple modes of the data
distribution. There has also been increased interest in disentangling the
internal representations learned by deep methods to further improve their
performance and achieve a finer control. In this paper, we bridge these two
objectives and introduce the concept of cross-domain disentanglement. We aim to
separate the internal representation into three parts. The shared part contains
information for both domains. The exclusive parts, on the other hand, contain
only factors of variation that are particular to each domain. We achieve this
through bidirectional image translation based on Generative Adversarial
Networks and cross-domain autoencoders, a novel network component. Our model
offers multiple advantages. We can output diverse samples covering multiple
modes of the distributions of both domains, perform domain-specific image
transfer and interpolation, and perform cross-domain retrieval without the need
for labeled data, only paired images. We compare our model to the state-of-the-art
in multi-modal image translation and achieve better results for translation on
challenging datasets as well as for cross-domain retrieval on realistic
datasets. | [
"cs.CV"
] |
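To make the shared/exclusive split above concrete, here is a minimal PyTorch sketch of an encoder with two heads; the layer sizes, head names, and the translation-by-recombination at the end are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch of cross-domain disentanglement (assumed toy dimensions).
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    def __init__(self, in_dim=64, shared_dim=16, excl_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.shared_head = nn.Linear(32, shared_dim)  # factors common to both domains
        self.excl_head = nn.Linear(32, excl_dim)      # domain-specific factors

    def forward(self, x):
        h = self.body(x)
        return self.shared_head(h), self.excl_head(h)

enc_a, enc_b = SplitEncoder(), SplitEncoder()
xa, xb = torch.randn(4, 64), torch.randn(4, 64)
sa, ea = enc_a(xa)
sb, eb = enc_b(xb)
# Translation A -> B: pair A's shared code with B's exclusive code and feed
# the result to domain B's decoder (decoder omitted in this sketch).
code_ab = torch.cat([sa, eb], dim=1)
```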
Networks are powerful data structures, but are challenging to work with for
conventional machine learning methods. Network Embedding (NE) methods attempt
to resolve this by learning vector representations for the nodes, for
subsequent use in downstream machine learning tasks.
Link Prediction (LP) is one such downstream machine learning task that is an
important use case and popular benchmark for NE methods. Unfortunately, while
NE methods perform exceedingly well at this task, they lack the transparency of
simpler LP approaches.
We introduce ExplaiNE, an approach to offer counterfactual explanations for
NE-based LP methods, by identifying existing links in the network that explain
the predicted links. ExplaiNE is applicable to a broad class of NE algorithms.
An extensive empirical evaluation for the NE method `Conditional Network
Embedding' in particular demonstrates its accuracy and scalability. | [
"cs.LG",
"stat.ML"
] |
Addressing shifts in data distributions is an important prerequisite for the
deployment of deep learning models to real-world settings. A general approach
to this problem involves the adjustment of models to a new domain through
transfer learning. However, in many cases, this is not applicable in a post-hoc
manner to deployed models and further parameter adjustments jeopardize safety
certifications that were established beforehand. In such a context, we propose
to deal with changes in the data distribution via guided data homogenization
which shifts the burden of adaptation from the model to the data. This approach
makes use of information about the training data contained implicitly in the
deep learning model to learn a domain transfer function. This allows for a
targeted deployment of models to unknown scenarios without changing the model
itself. We demonstrate the potential of data homogenization through experiments
on the CIFAR-10 and MNIST data sets. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
Deep learning (DL) techniques have achieved great success in predictive
accuracy in a variety of tasks, but deep neural networks (DNNs) are shown to
produce highly overconfident scores even for abnormal samples. Well-defined
uncertainty indicates whether a model's output should (or should not) be
trusted and thus becomes critical in real-world scenarios, which typically
involve shifted input distributions due to many factors. Existing uncertainty
approaches assume that testing samples from a different data distribution
induce unreliable model predictions and should therefore receive higher
uncertainty scores. They quantify model uncertainty by calibrating the DL
model's confidence in a given input and evaluate the effectiveness on computer
vision (CV) and natural language processing (NLP) tasks. However, the
reliability of these methodologies may be compromised on programming tasks due
to differences in data representations and shift patterns. In this paper, we first define three
different types of distribution shift in program data and build a large-scale
shifted Java dataset. We implement two common programming language tasks on our
dataset to study the effect of each distribution shift on DL model performance.
We also propose a large-scale benchmark of existing state-of-the-art predictive
uncertainty on programming tasks and investigate their effectiveness under data
distribution shift. Experiments show that program distribution shift does
degrade the DL model performance to varying degrees and that existing
uncertainty methods all present certain limitations in quantifying uncertainty
on program data. | [
"cs.LG",
"cs.SE",
"68T37",
"I.2.5; G.4"
] |
We consider the problem of learning a nonlinear function over a network of
learners in a fully decentralized fashion. Online learning is additionally
assumed, where every learner receives continuous streaming data locally. This
learning model is called fully distributed online learning (or fully
decentralized online federated learning). For this model, we propose a novel
learning framework with multiple kernels, which is named DOMKL. The proposed
DOMKL is devised by harnessing the principles of an online alternating
direction method of multipliers and a distributed Hedge algorithm. We
theoretically prove that DOMKL over T time slots can achieve an optimal
sublinear regret, implying that every learner in the network can learn a common
function which has a diminishing gap from the best function in hindsight. Our
analysis also reveals that DOMKL yields the same asymptotic performance as the
state-of-the-art centralized approach while keeping local data at edge
learners. Via numerical tests with real datasets, we demonstrate the
effectiveness of the proposed DOMKL on various online regression and
time-series prediction tasks. | [
"cs.LG"
] |
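The Hedge component of DOMKL maintains a weight per kernel and exponentially down-weights kernels that incur high loss; the sketch below shows that standard update in isolation. The learning rate and random losses are placeholders, and the online-ADMM half of the algorithm is omitted.

```python
# Hedge-style multiplicative weight update over candidate kernels (sketch).
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """Exponentially down-weight kernels that incurred high loss."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

weights = np.ones(3) / 3                  # three candidate kernels, equal weight
for t in range(5):
    losses = np.random.rand(3)            # per-kernel instantaneous losses (stub)
    weights = hedge_update(weights, losses)
print(weights)
```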
Most recent successes in forecasting people's motion are based on LSTM models,
and most recent progress has been achieved by modelling the social interaction
among people and the interaction of people with the scene. We question the use
of LSTM models and propose the novel use of Transformer Networks for trajectory
forecasting. This is a fundamental switch from the sequential step-by-step
processing of LSTMs to the attention-only memory mechanisms of Transformers. In
particular, we consider both the original Transformer Network (TF) and the
larger Bidirectional Transformer (BERT), state-of-the-art on all natural
language processing tasks. Our proposed Transformers predict the trajectories
of the individual people in the scene. These are "simple" models because each
person is modelled separately, without any complex human-human or scene
interaction terms. In particular, the TF model without bells and whistles
yields the best score on TrajNet, the largest and most challenging trajectory
forecasting benchmark. Additionally, its extension which predicts multiple
plausible future trajectories performs on par with more engineered techniques
on the 5 datasets of ETH + UCY. Finally, we show that Transformers may deal
with missing observations, as may be the case with real sensor data. Code is
available at https://github.com/FGiuliari/Trajectory-Transformer. | [
"cs.CV"
] |
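A minimal PyTorch sketch of the attention-only idea: observed (x, y) positions are embedded, run through a Transformer encoder, and the last time step is decoded into the next position. The dimensions and single-step decoding are illustrative, not the paper's exact TF configuration.

```python
# Toy attention-only trajectory predictor (assumed sizes, no positional encoding).
import torch
import torch.nn as nn

class TrajTransformer(nn.Module):
    def __init__(self, d_model=32, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)           # (x, y) -> d_model
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 2)            # decode next (x, y)

    def forward(self, past):                         # past: (B, T, 2)
        h = self.encoder(self.embed(past))
        return self.head(h[:, -1])                   # predict the next position

model = TrajTransformer()
past = torch.randn(8, 12, 2)                         # 8 people, 12 observed steps
next_xy = model(past)                                # (8, 2)
```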
Vertical federated learning (VFL) attracts increasing attention due to the
emerging demands of multi-party collaborative modeling and concerns of privacy
leakage. In real VFL applications, usually only one or a few parties hold
labels, which makes it challenging for all parties to collaboratively learn the
model without privacy leakage. Meanwhile, most existing VFL algorithms are
restricted to synchronous computation, which makes them inefficient in
real-world applications. To address these challenging problems, we propose a
novel {\bf VF}L framework integrated with new {\bf b}ackward updating mechanism
and {\bf b}ilevel asynchronous parallel architecture (VF{${\textbf{B}}^2$}),
under which three new algorithms, including VF{${\textbf{B}}^2$}-SGD, -SVRG,
and -SAGA, are proposed. We derive the theoretical results of the convergence
rates of these three algorithms under both strongly convex and nonconvex
conditions. We also prove the security of VF{${\textbf{B}}^2$} under
semi-honest threat models. Extensive experiments on benchmark datasets
demonstrate that our algorithms are efficient, scalable and lossless. | [
"cs.LG"
] |
3D Convolutional Neural Network (3D CNN) captures spatial and temporal
information on 3D data such as video sequences. However, due to the convolution
and pooling mechanisms, information loss seems unavoidable. To improve the
visual explanations and classification in 3D CNNs, we propose two approaches:
i) aggregating layer-wise global-to-local (global-local) discrete gradients
using a trained 3DResNext network, and ii) implementing an attention gating
network to improve the accuracy of action recognition. The proposed approach
aims to show the usefulness of every layer, termed global-local attention, in
3D CNNs via visual attribution, weakly-supervised action localization, and
action recognition. First, the 3DResNext is trained and applied for action
classification using backpropagation with respect to the maximum predicted
class. The gradients and activations of every layer are then up-sampled.
Aggregation is then used to produce more nuanced attention, which highlights
the most critical parts of the input video for the predicted class. We use contour
thresholding of final attention for final localization. We evaluate spatial and
temporal action localization in trimmed videos using fine-grained visual
explanation via 3DCam. Experimental results show that the proposed approach
produces informative visual explanations and discriminative attention.
Furthermore, the action recognition via attention gating on each layer produces
better classification results than the baseline model. | [
"cs.CV",
"cs.AI",
"cs.NE"
] |
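The gradient-times-activation aggregation can be sketched as follows in PyTorch: gradients of a class score with respect to a 3D layer's activations are combined with those activations and up-sampled to input resolution. The single conv layer and mean-score stand-in replace the trained 3DResNext of the paper.

```python
# Grad-CAM-style attention for one 3D layer (toy stand-in for 3DResNext).
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv3d(3, 8, kernel_size=3, padding=1)
video = torch.randn(1, 3, 8, 32, 32, requires_grad=True)  # (B, C, T, H, W)

act = conv(video)
score = act.mean()                        # stand-in for the max predicted class score
grads, = torch.autograd.grad(score, act)  # d(score)/d(activations)

attention = F.relu(grads * act).sum(dim=1, keepdim=True)  # (1, 1, T, H, W)
attention = F.interpolate(attention, size=video.shape[2:],
                          mode="trilinear", align_corners=False)  # up-sample
```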
The large pose discrepancy between two face images is one of the fundamental
challenges in automatic face recognition. Conventional approaches to
pose-invariant face recognition either perform face frontalization on, or learn
a pose-invariant representation from, a non-frontal face image. We argue that
it is more desirable to perform both tasks jointly to allow them to leverage
each other. To this end, this paper proposes a Disentangled Representation
learning-Generative Adversarial Network (DR-GAN) with three distinct novelties.
First, the encoder-decoder structure of the generator enables DR-GAN to learn a
representation that is both generative and discriminative, which can be used
for face image synthesis and pose-invariant face recognition. Second, this
representation is explicitly disentangled from other face variations such as
pose, through the pose code provided to the decoder and pose estimation in the
discriminator. Third, DR-GAN can take one or multiple images as the input, and
generate one unified identity representation along with an arbitrary number of
synthetic face images. Extensive quantitative and qualitative evaluation on a
number of controlled and in-the-wild databases demonstrates the superiority of
DR-GAN over the state of the art in both learning representations and rotating
large-pose face images. | [
"cs.CV"
] |
Table structure recognition is an essential part of making machines
understand tables. Its main task is to recognize the internal structure of a
table. However, due to the complexity and diversity of their structure and
style, it is very difficult to parse tabular data into the structured format
that machines can understand easily, especially for complex tables. In this
paper, we introduce Split, Embed and Merge (SEM), an accurate table structure
recognizer. Our model takes table images as input and can correctly recognize
the structure of tables, whether they are simple or complex. SEM is mainly
composed of three parts: the splitter, the embedder and the merger. In the
first stage, we apply the splitter to predict the potential regions of the
table row (column) separators, and obtain the fine grid structure of the table.
In the second stage, taking full account of the textual information in the
table, we fuse the output features for each table grid from both the vision and
language modalities. Moreover, we achieve higher precision in our experiments
by adding additional semantic features. Finally, we merge these basic table
grids in a self-regression manner. The corresponding merging results are
learned through the attention mechanism. In
our experiments, SEM achieves an average F1-Measure of 97.11% on the SciTSR
dataset which outperforms other methods by a large margin. We also won the
first place in the complex table and third place in all tables in ICDAR 2021
Competition on Scientific Literature Parsing, Task-B. Extensive experiments on
other publicly available datasets demonstrate that our model achieves
state-of-the-art performance. | [
"cs.CV"
] |
We introduce 'semi-unsupervised learning', a problem regime related to
transfer learning and zero-shot learning where, in the training data, some
classes are sparsely labelled and others entirely unlabelled. Models able to
learn from training data of this type are potentially of great use as many
real-world datasets are like this. Here we demonstrate a new deep generative
model for classification in this regime. Our model, a Gaussian mixture deep
generative model, demonstrates superior semi-unsupervised classification
performance on MNIST to model M2 from Kingma and Welling (2014). We apply the
model to human accelerometer data, performing activity classification and
structure discovery on windows of time series data. | [
"stat.ML",
"cs.LG"
] |
Human keypoints that are invisible in images due to illumination, occlusion
and overlap often lead current human pose estimation methods to produce
unreasonable pose predictions. In this paper, we design a novel generative
adversarial network (GAN) to improve the localization accuracy of visible
joints when some joints are invisible. The network consists of two simple but
efficient modules, the Cascade Feature Network (CFN) and the Graph Structure
Network (GSN). First, the CFN utilizes the prediction maps from the previous
stages to guide the prediction maps in the next stage, producing accurate human
poses. Second, the GSN is designed to contribute to the localization of
invisible joints by passing messages among different joints. Following the GAN
framework, if the pose produced by the generator G cannot be distinguished by
the discriminator D, the generator G has successfully captured the underlying
dependence among human joints. We conduct experiments on
three widely used human pose estimation benchmark datasets, LSP, MPII and COCO,
whose results show the effectiveness of our proposed framework. | [
"cs.CV"
] |
Describing the color and textural information of a person image is one of the
most crucial aspects of person re-identification (re-id). In this paper, we
present novel meta-descriptors based on a hierarchical distribution of pixel
features. Although hierarchical covariance descriptors have been successfully
applied to image classification, the mean information of pixel features, which
is absent from the covariance, tends to be the major discriminative information
for person re-id. To solve this problem, we describe a local region in an image
via hierarchical Gaussian distribution in which both means and covariances are
included in their parameters. More specifically, the region is modeled as a set
of multiple Gaussian distributions in which each Gaussian represents the
appearance of a local patch. The characteristics of the set of Gaussians are
again described by another Gaussian distribution. In both steps, we embed the
parameters of the Gaussian into a point of Symmetric Positive Definite (SPD)
matrix manifold. By changing how mean information is handled in this
embedding, we develop two hierarchical Gaussian descriptors. Additionally, we
develop feature norm normalization methods with the ability to alleviate the
biased trends that exist on the descriptors. The experimental results conducted
on five public datasets indicate that the proposed descriptors achieve
remarkably high performance on person re-id. | [
"cs.CV"
] |
A crucial component of an autonomous vehicle (AV) is the artificial
intelligence (AI) that is able to drive towards a desired destination. Today,
there are different paradigms addressing the development of AI drivers. On the
one hand, we find modular pipelines, which divide the driving task into
sub-tasks such as perception, maneuver planning, and control. On the other
hand, we find end-to-end driving approaches that try to learn a direct mapping
from raw input sensor data to vehicle control signals. The latter are
relatively less studied, but are gaining popularity since they are less
demanding in terms of sensor data annotation. This paper focuses on end-to-end
autonomous driving. So
far, most proposals relying on this paradigm assume RGB images as input sensor
data. However, AVs will not be equipped only with cameras, but also with active
sensors providing accurate depth information (e.g., LiDARs). Accordingly, this
paper analyses whether combining RGB and depth modalities, i.e. using RGBD
data, produces better end-to-end AI drivers than relying on a single modality.
We consider multimodality based on early, mid and late fusion schemes, both in
multisensory and single-sensor (monocular depth estimation) settings. Using the
CARLA simulator and conditional imitation learning (CIL), we show how, indeed,
early fusion multimodality outperforms single-modality. | [
"cs.CV"
] |
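Early fusion, the best-performing scheme above, amounts to stacking depth as an extra input channel before the first convolution. The toy backbone below stands in for the CIL network; sizes are illustrative.

```python
# Early fusion of RGB and depth into a single 4-channel input (sketch).
import torch
import torch.nn as nn

rgb = torch.randn(1, 3, 88, 200)          # camera image
depth = torch.randn(1, 1, 88, 200)        # depth map (LiDAR or monocular estimate)

rgbd = torch.cat([rgb, depth], dim=1)     # early fusion: (1, 4, H, W)
backbone = nn.Conv2d(4, 16, kernel_size=5, stride=2)   # toy first layer
features = backbone(rgbd)
# Mid/late fusion would instead run separate branches on rgb and depth and
# merge their feature maps or their outputs, respectively.
```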
Generative Adversarial Networks (GAN) boast impressive capacity to generate
realistic images. However, like much of the field of deep learning, they
require an inordinate amount of data to produce results, thereby limiting their
usefulness in generating novelty. In the same vein, recent advances in
meta-learning have opened the door to many few-shot learning applications. In
the present work, we propose Few-shot Image Generation using Reptile (FIGR), a
GAN meta-trained with Reptile. Our model successfully generates novel images on
both MNIST and Omniglot with as few as 4 images from an unseen class. We
further contribute FIGR-8, a new dataset for few-shot image generation, which
contains 1,548,944 icons categorized into over 18,409 classes. Trained on
FIGR-8, initial results show that our model can generalize to more advanced
concepts (such as "bird" and "knife") from as few as 8 samples from a
previously unseen class of images and as few as 10 training steps through those
8 images. This
work demonstrates the potential of training a GAN for few-shot image generation
and aims to set a new benchmark for future work in the domain. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
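The Reptile outer loop that meta-trains the GAN can be sketched as below: after a few inner updates on one class's images, the meta-parameters move a fraction of the way toward the adapted weights. Here the GAN losses and inner loop are compressed to a single SGD step on a toy module.

```python
# Reptile meta-update (sketch; one inner step stands in for GAN training).
import copy
import torch
import torch.nn as nn

model = nn.Linear(8, 8)          # toy stand-in for the generator/discriminator
meta_lr = 0.1

for task in range(3):                              # each "task" = one image class
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=0.01)
    loss = fast(torch.randn(4, 8)).pow(2).mean()   # stand-in for the GAN loss
    opt.zero_grad()
    loss.backward()
    opt.step()

    with torch.no_grad():                          # Reptile: theta += eps * (phi - theta)
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```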
Studies of object detection and localization, particularly pedestrian
detection, have received considerable attention in recent times due to their
several prospective applications, such as surveillance, driving assistance,
autonomous cars, etc. Also, a significant trend in the latest research studies in
related problem areas is the use of sophisticated Deep Learning based
approaches to improve the benchmark performance on various standard datasets. A
trade-off between the speed (number of video frames processed per second) and
detection accuracy has often been reported in the existing literature. In this
article, we present a new but simple deep learning based strategy for
pedestrian detection that improves this trade-off. Since training similar
models using publicly available sample datasets failed to improve the detection
performance to any significant extent, particularly for instances of smaller
pedestrians, we have developed a new sample dataset consisting of more than 80K
annotated pedestrian figures in videos recorded under varying traffic
conditions. The performance of the proposed model has been evaluated on test
samples of the new dataset and two other existing datasets, namely the Caltech
Pedestrian Dataset (CPD) and the CityPerson Dataset (CD). Our proposed system
shows nearly 16\% improvement over the existing state-of-the-art result. | [
"cs.CV"
] |
In recent years, deep learning has shown performance breakthroughs in many
applications, such as image detection, image segmentation, pose estimation, and
speech recognition. However, this comes with a major concern: deep networks
have been found to be vulnerable to adversarial examples. Adversarial examples
are slightly modified inputs that are intentionally designed to cause a
misclassification by the model. In the domains of images and speech, the
modifications are so small that they are not seen or heard by humans, but
nevertheless greatly affect the classification of the model.
Deep learning models have been successfully applied to malware detection. In
this domain, generating adversarial examples is not straightforward, as small
modifications to the bytes of the file could lead to significant changes in its
functionality and validity. We introduce a novel loss function for generating
adversarial examples specifically tailored for discrete input sets, such as
executable bytes. We modify malicious binaries so that they would be detected
as benign, while preserving their original functionality, by injecting a small
sequence of bytes (payload) in the binary file. We applied this approach to an
end-to-end convolutional deep learning malware detection model and show a high
rate of detection evasion. Moreover, we show that our generated payload is
robust enough to be transferable within different locations of the same file
and across different files, and that its entropy is low and similar to that of
benign data sections. | [
"cs.LG",
"cs.CR"
] |
Understanding and explaining deep learning models is an imperative task.
Towards this, we propose a method that obtains gradient-based certainty
estimates that also provide visual attention maps. In particular, we address
the visual question answering task. We incorporate modern probabilistic deep
learning methods that we further improve by using the gradients for these
estimates. These have two-fold benefits: a) improvement in obtaining the
certainty estimates that correlate better with misclassified samples and b)
improved attention maps that provide state-of-the-art results in terms of
correlation with human attention regions. The improved attention maps result in
consistent improvement for various methods for visual question answering.
Therefore, the proposed technique can be thought of as a recipe for obtaining
improved certainty estimates and explanation for deep learning models. We
provide detailed empirical analysis for the visual question answering task on
all standard benchmarks and comparison with state of the art methods. | [
"cs.CV",
"cs.CL",
"cs.LG",
"eess.IV"
] |
Most deep learning based image inpainting approaches adopt autoencoder or its
variants to fill missing regions in images. Encoders are usually utilized to
learn powerful representational spaces, which are important for dealing with
sophisticated learning tasks. Specifically, in image inpainting tasks, masks
with any shapes can appear anywhere in images (i.e., free-form masks) which
form complex patterns. It is difficult for encoders to capture such powerful
representations under this complex situation. To tackle this problem, we
propose a self-supervised Siamese inference network to improve the robustness
and generalization. It can encode contextual semantics from full resolution
images and obtain more discriminative representations. We further propose a
multi-scale decoder with a novel dual attention fusion module (DAF), which can
combine both the restored and known regions in a smooth way. This multi-scale
architecture is beneficial for decoding discriminative representations learned
by encoders into images layer by layer. In this way, unknown regions will be
filled naturally from outside to inside. Qualitative and quantitative
experiments on multiple datasets, including facial and natural datasets (i.e.,
Celeb-HQ, Paris Street View, Places2 and ImageNet), demonstrate that our
proposed method outperforms state-of-the-art methods in generating high-quality
inpainting results. | [
"cs.CV"
] |
Vision-and-Language (VL) pre-training has shown great potential on many
related downstream tasks, such as Visual Question Answering (VQA), one of the
most popular problems in the VL field. All of these pre-trained models (such as
VisualBERT, ViLBERT, LXMERT and UNITER) are built with Transformer, which
extends the classical attention mechanism to multiple layers and heads. To
investigate why and how these models work on VQA so well, in this paper we
explore the roles of individual heads and layers in Transformer models when
handling $12$ different types of questions. Specifically, we manually remove
(chop) one head (or layer) at a time from a pre-trained VisualBERT model, and
test it on different levels of questions to record its performance. As shown by
the interesting echelon shape of the result matrices, experiments reveal that
different heads and layers are responsible for different question types, with
higher-level layers activated by higher-level visual reasoning questions. Based
on this observation, we design a dynamic chopping module that can automatically
remove heads and layers of the VisualBERT at an instance level when dealing
with different questions. Our dynamic chopping module can effectively reduce
the parameters of the original model by 50%, while only damaging the accuracy
by less than 1% on the VQA task. | [
"cs.CV",
"68T45",
"I.4.8"
] |
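Chopping a head can be pictured as zeroing its slice of the multi-head output before the heads are recombined. The toy self-attention below (which omits the output projection and any VisualBERT specifics) shows the operation; names and sizes are illustrative.

```python
# Toy multi-head self-attention where only heads in `keep` contribute (sketch).
import torch

def chopped_attention(x, wq, wk, wv, nhead, keep):
    B, T, D = x.shape
    hd = D // nhead
    q = (x @ wq).view(B, T, nhead, hd).transpose(1, 2)
    k = (x @ wk).view(B, T, nhead, hd).transpose(1, 2)
    v = (x @ wv).view(B, T, nhead, hd).transpose(1, 2)
    att = torch.softmax(q @ k.transpose(-1, -2) / hd ** 0.5, dim=-1)
    heads = att @ v                                 # (B, nhead, T, hd)
    mask = torch.zeros(nhead)
    mask[list(keep)] = 1.0
    heads = heads * mask.view(1, nhead, 1, 1)       # "chop" the unkept heads
    return heads.transpose(1, 2).reshape(B, T, D)

D = 32
x = torch.randn(2, 10, D)
w = [torch.randn(D, D) * D ** -0.5 for _ in range(3)]
y = chopped_attention(x, *w, nhead=4, keep={0, 1, 3})  # head 2 removed
```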
HD (High Definition) map based on 3D lidar plays a vital role in autonomous
vehicle localization, planning, decision-making, perception, etc. Many 3D lidar
mapping technologies related to SLAM (Simultaneous Localization and Mapping)
are used in HD map construction to ensure its high accuracy. To evaluate the
accuracy of 3D lidar mapping, the most common methods use ground truth of poses
to calculate the error between estimated poses and ground truth, however it's
usually so difficult to get the ground truth of poses in the actual lidar
mapping for autonomous vehicle. In this paper, we proposed a relative accuracy
evaluation algorithm that can automatically evaluate the accuracy of HD map
built by 3D lidar mapping without ground truth. A method for detecting the
degree of ghosting in point cloud map quantitatively is designed to reflect the
accuracy indirectly, which takes advantage of the principle of light traveling
in a straight line and the fact that light can not penetrate opaque objects.
Our experimental results confirm that the proposed evaluation algorithm can
automatically and efficiently detect bad poses whose accuracy is below a set
threshold such as 0.1 m, then calculate the percentage of bad poses P_bad among
all estimated poses to obtain the final accuracy metric P_acc = 1 - P_bad. | [
"cs.CV",
"cs.RO"
] |
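Once each pose has been scored (in the paper via the ghosting measure rather than ground-truth error), the final metric is a simple ratio, as the sketch below illustrates with synthetic scores.

```python
# P_acc = 1 - P_bad over synthetic per-pose ghosting scores (sketch).
import numpy as np

ghosting = np.array([0.02, 0.15, 0.04, 0.30, 0.05])  # per-pose ghosting degree
threshold = 0.1                                       # the set accuracy threshold

p_bad = np.mean(ghosting > threshold)                 # fraction of bad poses
p_acc = 1.0 - p_bad
print(f"P_bad={p_bad:.2f}, P_acc={p_acc:.2f}")        # P_bad=0.40, P_acc=0.60
```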
Graph representation learning is of paramount importance for a variety of
graph analytical tasks, ranging from node classification to community
detection. Recently, graph convolutional networks (GCNs) have been successfully
applied for graph representation learning. These GCNs generate node
representation by aggregating features from the neighborhoods, which follows
the "neighborhood aggregation" scheme. In spite of having achieved promising
performance on various tasks, existing GCN-based models have difficulty in well
capturing complicated non-linearity of graph data. In this paper, we first
theoretically prove that the coefficients of the neighborhood interaction terms
are relatively small in current models, which explains why GCNs barely
outperform linear models. Then, in order to better capture the complicated non-linearity
of graph data, we present a novel GraphAIR framework which models the
neighborhood interaction in addition to neighborhood aggregation. Comprehensive
experiments conducted on benchmark tasks including node classification and link
prediction using public datasets demonstrate the effectiveness of the proposed
method. | [
"cs.LG",
"stat.ML"
] |
In this paper, we introduce a new benchmark dataset named IPN Hand, with
sufficient size, variety, and real-world elements to train and evaluate deep
neural networks. This dataset contains more than 4,000 gesture samples and
800,000 RGB frames from 50 distinct subjects. We design 13 different static and
dynamic gestures focused on interaction with touchless screens. We especially
consider the scenario when continuous gestures are performed without transition
states, and when subjects perform natural movements with their hands as
non-gesture actions. Gestures were collected from about 30 diverse scenes, with
real-world variation in background and illumination. With our dataset, the
performance of three 3D-CNN models is evaluated on the tasks of isolated and
continuous real-time HGR. Furthermore, we analyze the possibility of increasing
the recognition accuracy by adding multiple modalities derived from RGB frames,
i.e., optical flow and semantic segmentation, while keeping the real-time
performance of the 3D-CNN model. Our empirical study also provides a comparison
with the publicly available nvGesture (NVIDIA) dataset. The experimental
results show that the state-of-the-art ResNext-101 model loses about 30%
accuracy on our real-world dataset, demonstrating that the IPN Hand dataset can
be used as a benchmark and may help the community to step forward in continuous
HGR. Our dataset and the pre-trained models used in the
evaluation are publicly available at https://github.com/GibranBenitez/IPN-hand. | [
"cs.CV"
] |
Graphs are a ubiquitous data structure, offering a flexible and compact
representation. For instance, the 3D structure of RNA can be efficiently
represented as $\textit{2.5D graphs}$, graphs whose nodes are nucleotides and
whose edges represent chemical interactions. In this setting, we have
biological evidence of the similarity between edge types, as some chemical
interactions are more similar than others.
Machine learning on graphs has recently experienced a breakthrough with the
introduction of Graph Neural Networks, which can be framed as message passing
algorithms between graph nodes over graph edges. These messages can depend on
the edge type they are transmitted through, but no method currently constrains
how a message is altered when the edge type changes.
Motivated by the RNA use case, in this project we introduce a graph neural
network layer which can leverage prior information about similarities between
edges. We show that despite the theoretical appeal of including this similarity
prior, the empirical performance is not enhanced on the tasks and datasets we
include here. | [
"cs.LG",
"stat.ML"
] |
Designing optimal reward functions is desirable but extremely difficult in
reinforcement learning (RL). For modern complex tasks, sophisticated reward
functions are widely used to simplify policy learning, yet even a tiny
adjustment to them is expensive to evaluate due to the drastically increasing
cost of training. To this end, we propose a hindsight reward
tweaking approach by designing a novel paradigm for deep reinforcement learning
to model the influences of reward functions within a near-optimal space. We
simply extend the input observation with a condition vector linearly correlated
with the effective environment reward parameters and train the model in a
conventional manner except for randomizing reward configurations, obtaining a
hyper-policy whose characteristics are sensitively regulated over the condition
space. We demonstrate the feasibility of this approach and study one of its
potential applications, policy performance boosting, on multiple MuJoCo
tasks. | [
"cs.LG",
"cs.AI"
] |
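The core trick above is to append the reward-configuration vector to the observation and randomise it during training; a schematic sketch (with a stub environment and hypothetical names) follows.

```python
# Conditioning the policy input on randomised reward parameters (sketch).
import numpy as np

def make_conditioned_obs(obs, reward_params):
    """Extend the observation with the effective reward parameters."""
    return np.concatenate([obs, reward_params])

for episode in range(3):
    reward_params = np.random.uniform(-1, 1, size=2)  # randomised reward config
    obs = np.zeros(4)                                 # stub environment observation
    cond_obs = make_conditioned_obs(obs, reward_params)
    # policy(cond_obs) -> action; training rewards are computed under
    # reward_params, so the resulting hyper-policy covers the condition space.
```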
Human identification plays a prominent role in terms of security. In modern
times security is becoming the key term for an individual or a country,
especially for countries facing internal or external threats. Gait analysis is
the systematic study of human locomotion. It can be used to extract the exact
walking features of individuals. Walking features depend on biological as well
as physical features of the subject; hence, they are unique to every
individual. In this work, gait features are used to identify an individual. The
steps involve object detection, background subtraction, silhouette extraction,
skeletonization, and training a 3D Convolutional Neural Network on these gait
features. The model is trained and evaluated on the CASIA B Gait dataset, which
consists of 15,000 videos of the walking patterns of 124 subjects captured from
11 different angles while carrying objects such as a bag or a coat. The
proposed method focuses more on the lower body to extract features such as the
angle between knee and thighs, hip angle, angle of contact, and many other
features. The experimental results compare the accuracies obtained when
training on silhouettes versus skeletonized images. The results show that
extracting the information from skeletonized data yields improved accuracy. | [
"cs.CV",
"cs.AI",
"This paper tells us how human can be identified by their Gait cycle\n using any simple camera"
] |
Graph Neural Networks (GNNs) have received increasing attention in many
fields. However, due to the lack of prior graphs, their use for semantic
labeling has been limited. Here, we propose a novel architecture called the
Self-Constructing Graph (SCG), which makes use of learnable latent variables to
generate embeddings and to self-construct the underlying graphs directly from
the input features without relying on manually built prior knowledge graphs.
SCG can automatically obtain optimized non-local context graphs from
complex-shaped objects in aerial imagery. We optimize SCG via an adaptive
diagonal enhancement method and a variational lower bound that consists of a
customized graph reconstruction term and a Kullback-Leibler divergence
regularization term. We demonstrate the effectiveness and flexibility of the
proposed SCG on the publicly available ISPRS Vaihingen dataset and our model
SCG-Net achieves competitive results in terms of F1-score with far fewer
parameters and at a lower computational cost compared to related pure-CNN based
work. Our code will be made public soon. | [
"cs.CV"
] |
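The self-construction step can be sketched as sampling latent node codes and forming an adjacency from their inner products; the variational terms and diagonal enhancement of SCG are omitted, and all sizes are illustrative.

```python
# Self-constructing an adjacency matrix from learned embeddings (sketch).
import torch

B, N, C, D = 1, 6, 16, 4                    # batch, nodes, input dim, latent dim
x = torch.randn(B, N, C)                    # node features (e.g., CNN outputs)
mu_layer = torch.nn.Linear(C, D)
logvar_layer = torch.nn.Linear(C, D)

mu, logvar = mu_layer(x), logvar_layer(x)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
adj = torch.relu(z @ z.transpose(1, 2))                # (B, N, N) learned graph
```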
Bayesian optimization (BO) is a successful methodology to optimize black-box
functions that are expensive to evaluate. While traditional methods optimize
each black-box function in isolation, there has been recent interest in
speeding up BO by transferring knowledge across multiple related black-box
functions. In this work, we introduce a method to automatically design the BO
search space by relying on evaluations of previous black-box functions. We
depart from the common practice of defining a set of arbitrary search ranges a
priori by considering search space geometries that are learned from historical
data. This simple, yet effective strategy can be used to endow many existing BO
methods with transfer learning properties. Despite its simplicity, we show that
our approach considerably boosts BO by reducing the size of the search space,
thus accelerating the optimization of a variety of black-box optimization
problems. In particular, the proposed approach combined with random search
results in a parameter-free, easy-to-implement, robust hyperparameter
optimization strategy. We hope it will constitute a natural baseline for
further research attempting to warm-start BO. | [
"stat.ML",
"cs.LG"
] |
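One simple way to realise the idea, sketched below, is to take the best configuration from each previous task and use the bounding box of those optima as the new search range; the paper's learned search-space geometries may be richer than a box.

```python
# Learning a search box from previous tasks' best configurations (sketch).
import numpy as np

# one row per previous black-box function: its best configuration found
past_optima = np.array([[0.01, 64],
                        [0.03, 128],
                        [0.02, 96]])

low, high = past_optima.min(axis=0), past_optima.max(axis=0)
print("search box:", low, high)

# random search restricted to the learned, much smaller box
samples = np.random.uniform(low, high, size=(10, past_optima.shape[1]))
```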
Novelty detection is the process of identifying the observation(s) that
differ in some respect from the training observations (the target class). In
reality, the novelty class is often absent during training, poorly sampled or
not well defined. Therefore, one-class classifiers can efficiently model such
problems. However, due to the unavailability of data from the novelty class,
training an end-to-end deep network is a cumbersome task. In this paper,
inspired by the success of generative adversarial networks for training deep
models in unsupervised and semi-supervised settings, we propose an end-to-end
architecture for one-class classification. Our architecture is composed of two
deep networks, each trained by competing with the other while
collaborating to understand the underlying concept in the target class, and
then classify the testing samples. One network works as the novelty detector,
while the other supports it by enhancing the inlier samples and distorting the
outliers. The intuition is that the separability of the enhanced inliers and
distorted outliers is much better than deciding on the original samples. The
proposed framework applies to different related applications of anomaly and
outlier detection in images and videos. The results on the MNIST and
Caltech-256 image datasets, along with the challenging UCSD Ped2 dataset for
video anomaly detection, illustrate that our proposed method learns the target class
effectively and is superior to the baseline and state-of-the-art methods. | [
"cs.CV"
] |
Deep learning has achieved remarkable successes in solving challenging
reinforcement learning (RL) problems when a dense reward function is provided.
However, in sparse-reward environments it still often suffers from the need to
carefully shape the reward function to guide policy optimization. This limits
the applicability of RL in the real world since both reinforcement learning and
domain-specific knowledge are required. It is therefore of great practical
importance to develop algorithms which can learn from a binary signal
indicating successful task completion or other unshaped, sparse reward signals.
We propose a novel method called competitive experience replay, which
efficiently supplements a sparse reward by placing learning in the context of
an exploration competition between a pair of agents. Our method complements the
recently proposed hindsight experience replay (HER) by inducing an automatic
exploratory curriculum. We evaluate our approach on the tasks of reaching
various goal locations in an ant maze and manipulating objects with a robotic
arm. Each task provides only binary rewards indicating whether or not the goal
is achieved. Our method asymmetrically augments these sparse rewards for a pair
of agents each learning the same task, creating a competitive game designed to
drive exploration. Extensive experiments demonstrate that this method leads to
faster convergence and improved task performance. | [
"cs.LG",
"stat.ML"
] |
This work presents the first convolutional neural network that learns an
image-to-graph translation task without needing external supervision. Obtaining
graph representations of image content, where objects are represented as nodes
and their relationships as edges, is an important task in scene understanding.
Current methods follow a fully-supervised approach, thereby requiring
meticulous annotations. To overcome this, we are the first to present a
self-supervised approach based on a fully-differentiable auto-encoder in which
the bottleneck encodes the graph's nodes and edges. This self-supervised
approach can currently encode simple line drawings into graphs and obtains
comparable results to a fully-supervised baseline in terms of F1 score on
triplet matching. Besides these promising results, we provide several
directions for future research on how our approach can be extended to cover
more complex imagery. | [
"cs.CV"
] |
Transfer Learning (TL) has shown great potential to accelerate Reinforcement
Learning (RL) by leveraging prior knowledge from past learned policies of
relevant tasks. Existing transfer approaches either explicitly compute the
similarity between tasks or select appropriate source policies to provide
guided exploration for the target task. However, directly optimizing the
target policy by alternately utilizing knowledge from appropriate source
policies without explicitly measuring the similarity is currently missing. In
this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL
by taking advantage of this idea. Our framework learns when and which source
policy is the best to reuse for the target policy and when to terminate it by
modeling multi-policy transfer as the option learning problem. PTF can be
easily combined with existing deep RL approaches. Experimental results show it
significantly accelerates the learning process and surpasses state-of-the-art
policy transfer methods in terms of learning efficiency and final performance
in both discrete and continuous action spaces. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We propose ScheduleNet, a RL-based real-time scheduler, that can solve
various types of multi-agent scheduling problems. We formulate these problems
as a semi-MDP with episodic reward (makespan) and learn ScheduleNet, a
decentralized decision-making policy that can effectively coordinate multiple
agents to complete tasks. The decision making procedure of ScheduleNet
includes: (1) representing the state of a scheduling problem with the
agent-task graph, (2) extracting node embeddings for agent and task nodes,
which capture the important relational information among agents and tasks, by
employing type-aware graph attention (TGA), and (3) computing the assignment
probability with the computed node embeddings. We validate the effectiveness of
ScheduleNet as a general learning-based scheduler for solving various types of
multi-agent scheduling tasks, including the multiple traveling salesman problem
(mTSP) and the job shop scheduling problem (JSP). | [
"cs.LG",
"cs.AI",
"cs.MA",
"cs.SY",
"eess.SY"
] |
Decision trees and their ensembles are endowed with a rich set of diagnostic
tools for ranking and screening variables in a predictive model. Despite the
widespread use of tree based variable importance measures, pinning down their
theoretical properties has been challenging and therefore largely unexplored.
To address this gap between theory and practice, we derive finite sample
performance guarantees for variable selection in nonparametric models using a
single-level CART decision tree (a decision stump). Under standard operating
assumptions in variable screening literature, we find that the marginal signal
strength of each variable and ambient dimensionality can be considerably weaker
and higher, respectively, than state-of-the-art nonparametric variable
selection methods. Furthermore, unlike previous marginal screening methods that
attempt to directly estimate each marginal projection via a truncated basis
expansion, the fitted model used here is a simple, parsimonious decision stump,
thereby eliminating the need for tuning the number of basis terms. Thus,
surprisingly, even though decision stumps are highly inaccurate for estimation
purposes, they can still be used to perform consistent model selection. | [
"stat.ML",
"cs.LG"
] |
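A minimal sketch of stump-based screening: each variable is scored by the variance reduction of its best single split, and variables are ranked by that score. Thresholding and the theoretical guarantees are omitted.

```python
# Variable screening with decision stumps (sketch).
import numpy as np

def stump_score(x, y):
    """Best variance (SSE) reduction over all single splits on one variable."""
    total = ((y - y.mean()) ** 2).sum()
    best = 0.0
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best = max(best, total - sse)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 2]) + 0.1 * rng.normal(size=200)     # only variable 2 matters

scores = [stump_score(X[:, j], y) for j in range(X.shape[1])]
print(np.argsort(scores)[::-1])                      # variable 2 should rank first
```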
Facial expressions play a significant role in human communication and
behavior. Psychologists have long studied the relationship between facial
expressions and emotions. Paul Ekman et al., devised the Facial Action Coding
System (FACS) to taxonomize human facial expressions and model their behavior.
The ability to recognize facial expressions automatically, enables novel
applications in fields like human-computer interaction, social gaming, and
psychological research. There has been tremendously active research in this
field, with several recent papers utilizing convolutional neural networks
(CNNs) for feature extraction and inference. In this paper, we employ CNN
understanding methods to study the relation between the features these
computational networks use, the FACS, and Action Units (AUs). We verify our
findings on the Extended Cohn-Kanade (CK+), NovaEmotions and FER2013 datasets.
We apply these models to various tasks and tests using transfer learning,
including cross-dataset validation and cross-task performance. Finally, we
exploit the nature of the FER based CNN models for the detection of
micro-expressions and achieve state-of-the-art accuracy using a simple
long short-term memory (LSTM) recurrent neural network (RNN). | [
"cs.CV"
] |
Treatment planning in low-dose-rate prostate brachytherapy (LDR-PB) aims to
produce an arrangement of implantable radioactive seeds that delivers a minimum
prescribed dose to the prostate whilst minimizing toxicity to healthy tissues.
There can be multiple seed arrangements that satisfy this dosimetric criterion,
not all deemed 'acceptable' for implant from a physician's perspective. This
leads to plans that are subjective to the physician's/centre's preference,
planning style, and expertise. We propose a method that aims to reduce this
variability by training a model to learn from a large pool of successful
retrospective LDR-PB data (961 patients) and create consistent plans that mimic
the high-quality manual plans. Our model is based on conditional generative
adversarial networks that use a novel loss function for penalizing the model on
spatial constraints of the seeds. An optional optimizer based on a simulated
annealing (SA) algorithm can be used to further fine-tune the plans if
necessary (determined by the treating physician). Performance analysis was
conducted on 150 test cases, demonstrating results comparable to those of the
manual retrospective plans. On average, the clinical target volume covering
100% of the prescribed dose was 98.9% for our method compared to 99.4% for
manual plans. Moreover, using our model, the planning time was significantly
reduced to an average of 2.5 mins/plan with SA, and less than 3 seconds without
SA. Compared to this, manual planning at our centre takes around 20 mins/plan. | [
"cs.CV",
"physics.med-ph",
"I.5.1; I.5.2; I.5.4"
] |
Can we teach a robot to recognize and make predictions for activities that it
has never seen before? We tackle this problem by learning models for video from
text. This paper presents a hierarchical model that generalizes instructional
knowledge from large-scale text-corpora and transfers the knowledge to video.
Given a portion of an instructional video, our model recognizes and predicts
coherent and plausible actions multiple steps into the future, all in rich
natural language. To demonstrate the capabilities of our model, we introduce
the \emph{Tasty Videos Dataset V2}, a collection of 4022 recipes for zero-shot
learning, recognition and anticipation. Extensive experiments with various
evaluation metrics demonstrate the potential of our method for generalization,
given limited video data for training models. | [
"cs.CV"
] |
The COVID-19 pandemic represents the most significant public health disaster
since the 1918 influenza pandemic. During pandemics such as COVID-19, timely
and reliable spatio-temporal forecasting of epidemic dynamics is crucial. Deep
learning-based time series models for forecasting have recently gained
popularity and have been successfully used for epidemic forecasting. Here we
focus on the design and analysis of deep learning-based models for COVID-19
forecasting. We implement multiple recurrent neural network-based deep learning
models and combine them using the stacking ensemble technique. In order to
incorporate the effects of multiple factors in COVID-19 spread, we consider
multiple sources such as COVID-19 confirmed and death case count data and
testing data for better predictions. To overcome the sparsity of training data
and to address the dynamic correlation of the disease, we propose
clustering-based training for high-resolution forecasting. The methods help us
to identify the similar trends of certain groups of regions due to various
spatio-temporal effects. We examine the proposed method for forecasting weekly
COVID-19 new confirmed cases at county-, state-, and country-level. A
comprehensive comparison between different time series models in COVID-19
context is conducted and analyzed. The results show that simple deep learning
models can achieve comparable or better performance when compared with more
complicated models. We are currently integrating our methods as part of the
weekly forecasts that we provide to state and federal authorities. | [
"cs.LG",
"stat.AP"
] |
We examine the problem of learning and planning on high-dimensional domains
with long horizons and sparse rewards. Recent approaches have shown great
successes in many Atari 2600 domains. However, domains with long horizons and
sparse rewards, such as Montezuma's Revenge and Venture, remain challenging for
existing methods. Methods using abstraction (Dietterich 2000; Sutton, Precup,
and Singh 1999) have been shown to be useful in tackling long-horizon problems. We
combine recent techniques of deep reinforcement learning with existing
model-based approaches using an expert-provided state abstraction. We construct
toy domains that elucidate the problem of long horizons, sparse rewards and
high-dimensional inputs, and show that our algorithm significantly outperforms
previous methods on these domains. Our abstraction-based approach outperforms
Deep Q-Networks (Mnih et al. 2015) on Montezuma's Revenge and Venture, and
exhibits backtracking behavior that is absent from previous methods. | [
"cs.LG",
"cs.AI"
] |
Causal modeling has been recognized as a potential solution to many
challenging problems in machine learning (ML). Here, we describe how a recently
proposed counterfactual approach developed to deconfound linear structural
causal models can still be used to deconfound the feature representations
learned by deep neural network (DNN) models. The key insight is that by
training an accurate DNN using softmax activation at the classification layer,
and then adopting the representation learned by the last layer prior to the
output layer as our features, the learned features will, by construction, fit
a (multi-class) logistic regression model well, and will be linearly associated
with the labels. As a consequence, deconfounding approaches
based on simple linear models can be used to deconfound the feature
representations learned by DNNs. We validate the proposed methodology using
colored versions of the MNIST dataset. Our results illustrate how the approach
can effectively combat confounding and improve model stability in the context
of dataset shifts generated by selection biases. | [
"cs.LG",
"stat.ML"
] |
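The key construction can be sketched as follows: the penultimate-layer activations of a trained softmax classifier serve as features that are, by design, linearly related to the logits, so a linear (logistic-regression-based) deconfounder can operate on them. The network and data below are toy stand-ins.

```python
# Extracting penultimate-layer features for linear deconfounding (sketch).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(),
                    nn.Linear(32, 16), nn.ReLU())    # feature extractor
head = nn.Linear(16, 3)                              # softmax classification layer

x = torch.randn(100, 20)                             # toy inputs
with torch.no_grad():
    feats = net(x)                                   # features linear in the logits
logits = head(feats)
# `feats` is what a simple linear deconfounding model would be applied to.
```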
We develop an approach to learning visual representations that embraces
multimodal data, driven by a combination of intra- and inter-modal similarity
preservation objectives. Unlike existing visual pre-training methods, which
solve a proxy prediction task in a single domain, our method exploits intrinsic
data properties within each modality and semantic information from cross-modal
correlation simultaneously, hence improving the quality of learned visual
representations. By including multimodal training in a unified framework with
different types of contrastive losses, our method can learn more powerful and
generic visual features. We first train our model on COCO and evaluate the
learned visual representations on various downstream tasks including image
classification, object detection, and instance segmentation. For example, the
visual representations pre-trained on COCO by our method achieve
state-of-the-art top-1 validation accuracy of $55.3\%$ on ImageNet
classification, under the common transfer protocol. We also evaluate our method
on the large-scale Stock images dataset and show its effectiveness on
multi-label image tagging, and cross-modal retrieval tasks. | [
"cs.CV"
] |
The objective of this work is to annotate sign instances across a broad
vocabulary in continuous sign language. We train a Transformer model to ingest
a continuous signing stream and output a sequence of written tokens on a
large-scale collection of signing footage with weakly-aligned subtitles. We
show that through this training it acquires the ability to attend to a large
vocabulary of sign instances in the input sequence, enabling their
localisation. Our contributions are as follows: (1) we demonstrate the ability
to leverage large quantities of continuous signing videos with weakly-aligned
subtitles to localise signs in continuous sign language; (2) we employ the
learned attention to automatically generate hundreds of thousands of
annotations for a large sign vocabulary; (3) we collect a set of 37K manually
verified sign instances across a vocabulary of 950 sign classes to support our
study of sign language recognition; (4) by training on the newly annotated data
from our method, we outperform the prior state of the art on the BSL-1K sign
language recognition benchmark. | [
"cs.CV"
] |
We present an autoencoder-based semi-supervised approach to classify
perceived human emotions from walking styles obtained from videos or
motion-captured data and represented as sequences of 3D poses. Given the motion
on each joint in the pose at each time step extracted from 3D pose sequences,
we hierarchically pool these joint motions in a bottom-up manner in the
encoder, following the kinematic chains in the human body. We also constrain
the latent embeddings of the encoder to contain the space of
psychologically-motivated affective features underlying the gaits. We train the
decoder to reconstruct the motions per joint per time step in a top-down manner
from the latent embeddings. For the annotated data, we also train a classifier
to map the latent embeddings to emotion labels. Our semi-supervised approach
achieves a mean average precision of 0.84 on the Emotion-Gait benchmark
dataset, which contains both labeled and unlabeled gaits collected from
multiple sources. We outperform current state-of-the-art algorithms for both
emotion recognition and action recognition from 3D gaits by 7%--23% in absolute
terms. More importantly, we improve the average precision by 10%--50% in
absolute terms on classes that each make up less than 25% of the labeled part
of the Emotion-Gait benchmark dataset. | [
"cs.CV",
"cs.LG"
] |
We propose a new approach to visualize saliency maps for deep neural network
models and apply it to deep reinforcement learning agents trained on Atari
environments. Our method adds an attention module that we call FLS (Free Lunch
Saliency) to the feature extractor from an established baseline (Mnih et al.,
2015). This addition results in a trainable model that can produce saliency
maps, i.e., visualizations of the importance of different parts of the input
for the agent's current decision making. We show experimentally that a network
with an FLS module exhibits performance similar to the baseline (i.e., it is
"free", with no performance cost) and can be used as a drop-in replacement for
reinforcement learning agents. We also design another feature extractor that
scores slightly lower but provides higher-fidelity visualizations. In addition
to attained scores, we report saliency metrics evaluated on the Atari-HEAD
dataset of human gameplay. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Irregularly-sampled time series occur in many domains including healthcare.
They can be challenging to model because they do not naturally yield a
fixed-dimensional representation as required by many standard machine learning
models. In this paper, we consider irregular sampling from the perspective of
missing data. We model observed irregularly-sampled time series data as a
sequence of index-value pairs sampled from a continuous but unobserved
function. We introduce an encoder-decoder framework for learning from such
generic indexed sequences. We propose learning methods for this framework based
on variational autoencoders and generative adversarial networks. For continuous
irregularly-sampled time series, we introduce continuous convolutional layers
that can efficiently interface with existing neural network architectures.
Experiments show that our models are able to achieve competitive or better
classification results on irregularly-sampled multivariate time series compared
to recent RNN models while offering significantly faster training times. | [
"cs.LG",
"stat.ML"
] |
Distributional approaches to value-based reinforcement learning model the
entire distribution of returns, rather than just their expected values, and
have recently been shown to yield state-of-the-art empirical performance. This
was demonstrated by the recently proposed C51 algorithm, based on categorical
distributional reinforcement learning (CDRL) [Bellemare et al., 2017]. However,
the theoretical properties of CDRL algorithms are not yet well understood. In
this paper, we introduce a framework to analyse CDRL algorithms, establish the
importance of the projected distributional Bellman operator in distributional
RL, draw fundamental connections between CDRL and the Cram\'er distance, and
give a proof of convergence for sample-based categorical distributional
reinforcement learning algorithms. | [
"stat.ML"
] |
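For readers unfamiliar with CDRL, the projected distributional Bellman operator at the heart of this analysis is the C51 projection step of Bellemare et al. (2017): the Bellman-backed-up atoms r + gamma*z are redistributed onto the fixed support. A minimal NumPy sketch; the batched single-transition interface is an illustrative assumption.

```python
import numpy as np

def project_categorical(next_probs, rewards, dones, gamma, support):
    """C51-style projection of the backed-up distribution r + gamma*z onto `support`.

    next_probs: (B, N) next-state categorical probabilities over the atoms
    rewards, dones: (B,) arrays; support: (N,) equally spaced atom locations
    """
    v_min, v_max = support[0], support[-1]
    delta = support[1] - support[0]
    tz = np.clip(rewards[:, None] + gamma * (1.0 - dones[:, None]) * support[None, :],
                 v_min, v_max)                      # backed-up atom locations, clipped
    b = (tz - v_min) / delta                        # fractional grid index of each atom
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    out = np.zeros_like(next_probs)
    for i in range(next_probs.shape[0]):            # split each atom's mass between
        for j in range(next_probs.shape[1]):        # its two neighbouring grid points
            if lo[i, j] == hi[i, j]:                # atom lands exactly on a grid point
                out[i, lo[i, j]] += next_probs[i, j]
            else:
                out[i, lo[i, j]] += next_probs[i, j] * (hi[i, j] - b[i, j])
                out[i, hi[i, j]] += next_probs[i, j] * (b[i, j] - lo[i, j])
    return out
```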
Reinforcement learning addresses the dilemma between exploration to find
profitable actions and exploitation to act according to the best observations
already made. Bandit problems are one such class of problems in stateless
environments that represent this explore/exploit situation. We propose a
learning algorithm for bandit problems based on fractional expectation of
rewards acquired. The algorithm is theoretically shown to converge on an
eta-optimal arm and achieve O(n) sample complexity. Experimental results show
the algorithm incurs substantially lower regret than parameter-optimized
eta-greedy and SoftMax approaches and other low sample complexity
state-of-the-art techniques. | [
"cs.LG",
"stat.ML"
] |
Effectively and efficiently deploying graph neural networks (GNNs) at scale
remains one of the most challenging aspects of graph representation learning.
Many powerful solutions have only ever been validated on comparatively small
datasets, often with counter-intuitive outcomes -- a barrier which has been
broken by the Open Graph Benchmark Large-Scale Challenge (OGB-LSC). We entered
the OGB-LSC with two large-scale GNNs: a deep transductive node classifier
powered by bootstrapping, and a very deep (up to 50-layer) inductive graph
regressor regularised by denoising objectives. Our models achieved an
award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks. In
doing so, we demonstrate evidence of scalable self-supervised graph
representation learning, and utility of very deep GNNs -- both very important
open issues. Our code is publicly available at:
https://github.com/deepmind/deepmind-research/tree/master/ogb_lsc. | [
"cs.LG",
"cs.AI",
"cs.SI",
"stat.ML"
] |
A molecular and cellular understanding of how SARS-CoV-2 variably infects and
causes severe COVID-19 remains a bottleneck in developing interventions to end
the pandemic. We sought to use deep learning to study the biology of SARS-CoV-2
infection and COVID-19 severity by identifying transcriptomic patterns and cell
types associated with SARS-CoV-2 infection and COVID-19 severity. To do this,
we developed a new approach to generating self-supervised edge features. We
propose a model that builds on Graph Attention Networks (GAT), creates edge
features using self-supervised learning, and ingests these edge features via a
Set Transformer. This model achieves significant improvements in predicting the
disease state of individual cells, given their transcriptome. We apply our
model to single-cell RNA sequencing datasets of SARS-CoV-2 infected lung
organoids and bronchoalveolar lavage fluid samples of patients with COVID-19,
achieving state-of-the-art performance on both datasets with our model. We then
borrow from the field of explainable AI (XAI) to identify the features (genes)
and cell types that discriminate bystander vs. infected cells across time and
moderate vs. severe COVID-19 disease. To the best of our knowledge, this
represents the first application of deep learning to identifying the molecular
and cellular determinants of SARS-CoV-2 infection and COVID-19 severity using
single-cell omics data. | [
"cs.LG",
"q-bio.GN",
"stat.ML"
] |
This paper presents an approach to improve the forecast of computational
fluid dynamics (CFD) simulations of urban air pollution using deep learning,
and more specifically adversarial training. This adversarial approach aims to
reduce the divergence of the forecasts from the underlying physical model. Our
two-step method integrates a Principal Components Analysis (PCA) based
adversarial autoencoder (PC-AAE) with adversarial Long Short-Term Memory (LSTM)
networks. Once the reduced-order model (ROM) of the CFD solution is obtained
via PCA, an adversarial autoencoder is used on the principal components time
series. Subsequently, an LSTM is adversarially trained on the latent space
produced by the PC-AAE to make forecasts. Once trained, the adversarially
trained LSTM outperforms an LSTM trained in a classical way. The study area is
in South London; the data include three-dimensional velocity vectors at a busy
traffic junction. | [
"cs.LG",
"physics.comp-ph",
"physics.flu-dyn"
] |
Deep Reinforcement Learning (RL) recently emerged as one of the most
competitive approaches for learning in sequential decision making problems with
fully observable environments, e.g., computer Go. However, very little work has
been done in deep RL to handle partially observable environments. We propose a
new architecture called Action-specific Deep Recurrent Q-Network (ADRQN) to
enhance learning performance in partially observable domains. Actions are
encoded by a fully connected layer and coupled with a convolutional observation
to form an action-observation pair. The time series of action-observation pairs
are then integrated by an LSTM layer that learns latent states based on which a
fully connected layer computes Q-values as in conventional Deep Q-Networks
(DQNs). We demonstrate the effectiveness of our new architecture in several
partially observable domains, including flickering Atari games. | [
"cs.LG"
] |
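As a sketch of the architecture the abstract describes, the following PyTorch module embeds the previous action with a fully connected layer, concatenates it with DQN-style convolutional observation features, and integrates the sequence with an LSTM before a Q-value head. Layer sizes and the 84x84 single-channel input follow the DQN convention and are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ADRQN(nn.Module):
    """Sketch of an action-specific deep recurrent Q-network (hyperparameters illustrative)."""
    def __init__(self, n_actions, action_dim=32, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(                        # DQN-style observation encoder
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(), nn.Flatten())
        self.action_fc = nn.Linear(n_actions, action_dim)  # embed one-hot previous action
        self.lstm = nn.LSTM(64 * 7 * 7 + action_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, frames, prev_actions, state=None):
        # frames: (B, T, 1, 84, 84); prev_actions: (B, T, n_actions) one-hot
        B, T = frames.shape[:2]
        obs = self.conv(frames.reshape(B * T, *frames.shape[2:])).reshape(B, T, -1)
        act = self.action_fc(prev_actions)
        h, state = self.lstm(torch.cat([obs, act], dim=-1), state)  # action-observation pairs
        return self.q_head(h), state
```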
Identifying people in video by the way they walk (i.e. gait) is a relevant
task in computer vision, as it enables a non-invasive approach. Standard and
current approaches typically derive gait signatures from sequences of binary
energy maps of subjects extracted from images, but this process introduces a
large amount of non-stationary noise, thus, conditioning their efficacy. In
contrast, in this paper we focus on the raw pixels, or simple functions derived
from them, letting advanced learning techniques extract relevant features.
Therefore, we present a comparative study of different Convolutional Neural
Network (CNN) architectures by using three different modalities (i.e. gray
pixels, optical flow channels and depth maps) on two widely-adopted and
challenging datasets: TUM-GAID and CASIA-B. In addition, we perform a
comparative study between different early and late fusion methods used to
combine the information obtained from each modality. Our experimental
results suggest that (i) the raw pixel values represent a competitive input
modality, compared to the traditional state-of-the-art silhouette-based
features (e.g. GEI), since equivalent or better results are obtained; (ii) the
fusion of the raw pixel information with information from optical flow and
depth maps makes it possible to obtain state-of-the-art results on the gait recognition
task with an image resolution several times smaller than the previously
reported results; and, (iii) the selection and the design of the CNN
architecture are critical points that can make a difference between
state-of-the-art results or poor ones. | [
"cs.CV"
] |
The Shapley value is one of the most widely used model-agnostic measures of
feature importance in explainable AI: it has clear axiomatic foundations, is
guaranteed to uniquely exist, and has a clear interpretation as a feature's
average effect on a model's prediction. We introduce joint Shapley values,
which directly extend the Shapley axioms. This preserves the classic Shapley
value's intuitions: joint Shapley values measure a set of features' average
effect on a model's prediction. We prove the uniqueness of joint Shapley
values, for any order of explanation. Results for games show that joint Shapley
values present different insights from existing interaction indices, which
assess the effect of a feature within a set of features. Deriving joint Shapley
values in ML attribution problems thus gives us the first measure of the joint
effect of sets of features on model predictions. In a dataset with binary
features, we present a presence-adjusted method for calculating global values
that retains the efficiency property. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
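To make the classic quantity concrete, here is a brute-force sketch of exact Shapley values by subset enumeration (the joint Shapley values the abstract introduces generalize this to sets of features). It is tractable only for small feature counts, and the toy game is illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by subset enumeration (tractable for small feature sets).

    value_fn maps a frozenset of features to the model payoff v(S)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        rest = [g for g in features if g != f]
        for k in range(n):
            for S in combinations(rest, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(S | {f}) - value_fn(S))
    return phi

# Toy game: payoff 1 only when both 'a' and 'b' are present (a pure interaction).
v = lambda S: 1.0 if {'a', 'b'} <= S else 0.0
print(shapley_values(v, ['a', 'b', 'c']))  # {'a': 0.5, 'b': 0.5, 'c': 0.0}
```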
The diurnal cycle of tropical cyclones (TCs) is a daily cycle in clouds that
appears in satellite images and may have implications for TC structure and
intensity. The diurnal pattern can be seen in infrared (IR) satellite imagery
as cyclical pulses in the cloud field that propagate radially outward from the
center of nearly all Atlantic-basin TCs. These diurnal pulses, a distinguishing
characteristic of the TC diurnal cycle, begin forming in the storm's inner core
near sunset each day and appear as a region of cooling cloud-top temperatures.
The area of cooling takes on a ring-like appearance as cloud-top warming occurs
on its inside edge and the cooling moves away from the storm overnight,
reaching several hundred kilometers from the circulation center by the
following afternoon. The state-of-the-art TC diurnal cycle measurement has a
limited ability to analyze the behavior beyond qualitative observations. We
present a method for quantifying the TC diurnal cycle using one-dimensional
persistent homology, a tool from Topological Data Analysis, by tracking maximum
persistence and quantifying the cycle using the discrete Fourier transform.
Using Geostationary Operational Environmental Satellite IR imagery data from
Hurricane Felix (2007), our method is able to detect an approximate daily
cycle. | [
"cs.CV",
"cs.CG"
] |
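The final quantification step the abstract mentions, applying the discrete Fourier transform to a one-dimensional summary series (such as the tracked maximum persistence) to check for a roughly 24-hour cycle, can be sketched in a few lines of NumPy. The function interface and the synthetic check are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def dominant_period_hours(signal, sample_interval_hours):
    """Return the period (hours) of the strongest nonzero-frequency DFT component."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove DC so bin 0 does not dominate
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=sample_interval_hours)   # cycles per hour
    k = 1 + np.argmax(spectrum[1:])           # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic check: a 24-hour cycle sampled every half hour for 5 days.
t = np.arange(0, 5 * 24, 0.5)
series = np.sin(2 * np.pi * t / 24) + 0.2 * np.random.randn(t.size)
print(round(dominant_period_hours(series, 0.5), 1))   # ~24.0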
Transfer learning using deep neural networks as feature extractors has become
increasingly popular over the past few years. It makes it possible to obtain
state-of-the-art accuracy on datasets too small to train a deep neural network
from scratch, and it provides cutting-edge descriptors that, combined with
nonparametric learning methods, allow rapid and flexible deployment of
high-performing solutions in computationally restricted settings. In this paper, we
are interested in showing that the features extracted using deep neural
networks have specific properties which can be used to improve accuracy of
downstream nonparametric learning methods. Namely, we demonstrate that for some
distributions where information is embedded in a few coordinates, segmenting
feature vectors can lead to better accuracy. We show how this model can be
applied to real datasets by performing experiments using three mainstream deep
neural network feature extractors and four databases, in vision and audio. | [
"cs.LG",
"stat.ML"
] |
Motivated by the observation that the ability of the $\ell_1$ norm to promote
sparsity in graphical models with Laplacian constraints is substantially
weakened, this paper proposes to learn a graph Laplacian with a non-convex
penalty: minimax concave penalty (MCP). For solving the MCP penalized graphical
model, we design an inexact proximal difference-of-convex algorithm (DCA) and
prove its convergence to critical points. We note that each subproblem of the
proximal DCA enjoys the nice property that the objective function in its dual
problem is continuously differentiable with a semismooth gradient. Therefore,
we apply an efficient semismooth Newton method to subproblems of the proximal
DCA. Numerical experiments on various synthetic and real data sets demonstrate
the effectiveness of the non-convex penalty MCP in promoting sparsity. Compared
with the state-of-the-art method \cite[Algorithm~1]{ying2020does}, our method
is demonstrated to be more efficient and reliable for learning graph Laplacian
with MCP. | [
"cs.LG",
"math.OC"
] |
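For reference, the minimax concave penalty in question and its proximal (firm-thresholding) operator, the elementwise building block used inside proximal schemes such as the DCA above, can be written as follows. This is a generic sketch of MCP (Zhang, 2010), not the paper's graph-Laplacian solver.

```python
import numpy as np

def mcp_penalty(x, lam, gamma):
    """Minimax concave penalty (MCP), applied elementwise.

    For |x| <= gamma*lam: lam*|x| - x^2/(2*gamma); otherwise the constant gamma*lam^2/2.
    """
    ax = np.abs(x)
    return np.where(ax <= gamma * lam,
                    lam * ax - ax**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def mcp_prox(x, lam, gamma, step=1.0):
    """Proximal operator of step*MCP (firm thresholding); requires gamma > step."""
    ax = np.abs(x)
    soft = np.sign(x) * np.maximum(ax - step * lam, 0.0)
    inner = soft / (1.0 - step / gamma)   # rescaled soft-thresholding below the knee
    return np.where(ax <= gamma * lam, inner, x)
```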
Graph Neural Networks (GNNs) have proved to be an effective representation
learning framework for graph-structured data, and have achieved
state-of-the-art performance on many practical predictive tasks, such as node
classification, link prediction and graph classification. Among the variants of
GNNs, Graph Attention Networks (GATs) learn to assign dense attention
coefficients over all neighbors of a node for feature aggregation, and improve
the performance of many graph learning tasks. However, real-world graphs are
often very large and noisy, and GATs are prone to overfitting if not
regularized properly. Even worse, the local aggregation mechanism of GATs may
fail on disassortative graphs, where nodes within local neighborhood provide
more noise than useful information for feature aggregation. In this paper, we
propose Sparse Graph Attention Networks (SGATs) that learn sparse attention
coefficients under an $L_0$-norm regularization, and the learned sparse
attentions are then used for all GNN layers, resulting in an edge-sparsified
graph. By doing so, we can identify noisy/task-irrelevant edges, and thus
perform feature aggregation on most informative neighbors. Extensive
experiments on synthetic and real-world graph learning benchmarks demonstrate
the superior performance of SGATs. In particular, SGATs can remove about
50\%-80\% of the edges from large assortative graphs, while retaining similar
classification accuracies. On disassortative graphs, SGATs prune the majority
of noisy edges and outperform GATs in classification accuracy by significant
margins. Furthermore, the removed edges can be interpreted intuitively and
quantitatively. To the best of our knowledge, this is the first graph learning
algorithm to show that graphs contain significant redundancies and that
edge-sparsified graphs can achieve similar or sometimes higher predictive
performance than the original graphs. | [
"cs.LG",
"stat.ML"
] |
Most existing GAN architectures that generate images use transposed
convolution or resize-convolution as their upsampling algorithm from lower to
higher resolution feature maps in the generator. We argue that this kind of
fixed operation is problematic for GANs to model objects that have very
different visual appearances. We propose a novel adaptive convolution method
that learns the upsampling algorithm based on the local context at each
location to address this problem. We modify a baseline GANs architecture by
replacing normal convolutions with adaptive convolutions in the generator.
Experiments on CIFAR-10 dataset show that our modified models improve the
baseline model by a large margin. Furthermore, our models achieve
state-of-the-art performance on CIFAR-10 and STL-10 datasets in the
unsupervised setting. | [
"cs.CV",
"stat.ML"
] |
As the representations output by Graph Neural Networks (GNNs) are
increasingly employed in real-world applications, it becomes important to
ensure that these representations are fair and stable. In this work, we
establish a key connection between counterfactual fairness and stability and
leverage it to propose a novel framework, NIFTY (uNIfying Fairness and
stabiliTY), which can be used with any GNN to learn fair and stable
representations. We introduce a novel objective function that simultaneously
accounts for fairness and stability and develop a layer-wise weight
normalization using the Lipschitz constant to enhance neural message passing in
GNNs. In doing so, we enforce fairness and stability both in the objective
function as well as in the GNN architecture. Further, we show theoretically
that our layer-wise weight normalization promotes counterfactual fairness and
stability in the resulting representations. We introduce three new graph
datasets comprising high-stakes decisions in criminal justice and financial
lending domains. Extensive experimentation with the above datasets demonstrates
the efficacy of our framework. | [
"cs.LG"
] |
The teacher-student (T/S) learning has been shown to be effective for a
variety of problems such as domain adaptation and model compression. One
shortcoming of the T/S learning is that a teacher model, not always perfect,
sporadically produces wrong guidance in the form of posterior probabilities that
misleads the student model towards a suboptimal performance. To overcome this
problem, we propose a conditional T/S learning scheme, in which a "smart"
student model selectively chooses to learn from either the teacher model or the
ground truth labels conditioned on whether the teacher can correctly predict
the ground truth. Unlike a naive linear combination of the two knowledge
sources, the conditional learning is exclusively engaged with the teacher model
when the teacher model's prediction is correct, and otherwise backs off to the
ground truth. Thus, the student model is able to learn effectively from the
teacher and even potentially surpass the teacher. We examine the proposed
learning scheme on two tasks: domain adaptation on CHiME-3 dataset and speaker
adaptation on Microsoft short message dictation dataset. The proposed method
achieves 9.8% and 12.8% relative word error rate reductions, respectively, over
T/S learning for environment adaptation and speaker-independent model for
speaker adaptation. | [
"cs.LG",
"cs.CL",
"cs.SD",
"eess.AS",
"stat.ML"
] |
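The core selection rule of conditional T/S learning, use the teacher's posteriors when the teacher predicts the ground truth correctly and back off to the hard labels otherwise, is simple to express. A hedged PyTorch sketch; the per-example classification interface is an assumption (the paper applies the idea to senone posteriors in acoustic models).

```python
import torch
import torch.nn.functional as F

def conditional_ts_loss(student_logits, teacher_probs, labels):
    """Per-example conditional T/S target: learn from the teacher only when it is right.

    student_logits: (B, C); teacher_probs: (B, C) soft posteriors; labels: (B,) ints.
    """
    teacher_correct = teacher_probs.argmax(dim=1).eq(labels)       # (B,) bool
    log_p = F.log_softmax(student_logits, dim=1)
    kd = -(teacher_probs * log_p).sum(dim=1)          # cross-entropy vs. soft targets
    ce = F.nll_loss(log_p, labels, reduction='none')  # cross-entropy vs. hard labels
    return torch.where(teacher_correct, kd, ce).mean()
```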
In imitation learning from observation (IfO), a learning agent seeks to imitate
a demonstrating agent using only observations of the demonstrated behavior
without access to the control signals generated by the demonstrator. Recent
methods based on adversarial imitation learning have led to state-of-the-art
performance on IfO problems, but they typically suffer from high sample
complexity due to a reliance on data-inefficient, model-free reinforcement
learning algorithms. This issue makes them impractical to deploy in real-world
settings, where gathering samples can incur high costs in terms of time,
energy, and risk. In this work, we hypothesize that we can incorporate ideas
from model-based reinforcement learning with adversarial methods for IfO in
order to increase the data efficiency of these methods without sacrificing
performance. Specifically, we consider time-varying linear Gaussian policies,
and propose a method that integrates the linear-quadratic regulator with path
integral policy improvement into an existing adversarial IfO framework. The
result is a more data-efficient IfO algorithm with better performance, which we
show empirically in four simulation domains: using far fewer interactions with
the environment, the proposed method exhibits similar or better performance
than the existing technique. | [
"cs.LG",
"cs.AI"
] |
Visual question answering (VQA) is a task that combines both the techniques
of computer vision and natural language processing. It requires models to
answer a text-based question according to the information contained in a
visual input. In recent years, the research field of VQA has expanded. Research
that focuses on the reasoning ability required by VQA, and on VQA over
scientific diagrams, has also been explored further. Meanwhile, more multimodal
feature fusion mechanisms have been proposed. This paper reviews and analyzes
existing datasets, metrics, and models proposed for the VQA task. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
The analysis of GPS trajectories is a well-studied problem in Urban Computing
and has been used to track people. Analyzing people's mobility and identifying
the transportation modes they use is essential for cities that want to reduce
traffic jams and travel times, thus helping to improve citizens' quality of
life. The trajectory data of a moving object is represented by a discrete
collection of points through time, i.e., a time series. Given its
interdisciplinary nature and broad range of real-world applications, the need
to extract knowledge from time series data is evident. Mining this type of
data, however, faces several complexities due to its unique properties.
Different representations of the data may overcome this. In this work, we
propose the use of a feature extracted from the Ordinal Pattern Transition
Graph, called the probability of self-transition, for transportation mode
classification. The proposed feature presents better accuracy results than
Permutation Entropy and Statistical Complexity, even when these two are
combined. To the best of our knowledge, this is the first work that applies
Information Theory quantifiers to transportation mode classification, showing
that it is a feasible approach to this kind of problem. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
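The proposed feature is easy to reproduce in spirit: map each window of the series to its ordinal pattern (the permutation that sorts it) and estimate how often consecutive patterns repeat. A minimal NumPy sketch; the pattern order and the toy signals are illustrative assumptions.

```python
import numpy as np

def self_transition_probability(series, order=3):
    """Probability that consecutive ordinal patterns of a time series are identical.

    Each window of length `order` is mapped to the permutation that sorts it
    (its ordinal pattern); we count transitions with pattern_t == pattern_{t+1}."""
    x = np.asarray(series, dtype=float)
    patterns = [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]
    same = sum(a == b for a, b in zip(patterns, patterns[1:]))
    return same / (len(patterns) - 1)

# A smooth trajectory (random walk) revisits the same pattern far more often
# than white noise, which is what makes this feature discriminative.
rng = np.random.default_rng(0)
print(self_transition_probability(np.cumsum(rng.standard_normal(1000))))  # higher
print(self_transition_probability(rng.standard_normal(1000)))             # lower
```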
With the rise and development of deep learning, computer vision has been
tremendously transformed and reshaped. As an important research area in
computer vision, scene text detection and recognition has been inescapably
influenced by this wave of revolution, consequentially entering the era of deep
learning. In recent years, the community has witnessed substantial advancements
in mindset, approach and performance. This survey is aimed at summarizing and
analyzing the major changes and significant progress of scene text detection
and recognition in the deep learning era. In this article, we aim to:
(1) introduce new insights and ideas; (2) highlight recent techniques and
benchmarks; (3) look ahead into future trends. Specifically, we will emphasize
the dramatic differences brought by deep learning and the grand challenges
that still remain. We expect that this review paper will serve as a reference
book for researchers in this field. Related resources are also collected and
compiled in our Github repository: https://github.com/Jyouhou/SceneTextPapers. | [
"cs.CV"
] |
Deep learning has become in recent years a cornerstone tool fueling key
innovations in the industry, such as autonomous driving. To attain good
performances, the neural network architecture used for a given application must
be chosen with care. These architectures are often handcrafted and therefore
prone to human biases and sub-optimal selection. Neural Architecture Search
(NAS) is a framework introduced to mitigate such risks by jointly optimizing
the network architectures and their weights. Despite its novelty, it has been
applied to complex tasks with significant results, e.g. semantic image
segmentation. In this technical paper, we aim to evaluate its ability to tackle
a challenging operational task: semantic segmentation of objects of interest in
satellite imagery. Designing a NAS framework is not trivial and depends
strongly on hardware constraints. We therefore motivate our NAS approach selection and
provide corresponding implementation details. We also present novel ideas to
carry out other such use-case studies. | [
"cs.CV",
"cs.NE"
] |
An automatic image segmentation procedure is an inevitable part of many image
analysis and computer vision pipelines, and it deeply affects the rest of the
system; therefore, a set of interactive segmentation evaluation methods can
substantially simplify the system development process. This entry presents the
state of the art of quantitative evaluation metrics for color image
segmentation methods by performing an analytical and comparative review of the
measures. Selecting a suitable evaluation metric remains a difficult decision
because each metric tends to favor a different
segmentation method for each benchmark dataset. Furthermore, a conceptual
comparison of these metrics is provided at a high level of abstraction and is
discussed for understanding the quantitative changes in different image
segmentation results. | [
"cs.CV",
"cs.LG",
"cs.MM",
"eess.IV",
"I.4.6; I.2.10; I.5.0; I.3.0; E.0"
] |
We describe a procedure for explaining neurons in deep representations by
identifying compositional logical concepts that closely approximate neuron
behavior. Compared to prior work that uses atomic labels as explanations,
analyzing neurons compositionally allows us to more precisely and expressively
characterize their behavior. We use this procedure to answer several questions
on interpretability in models for vision and natural language processing.
First, we examine the kinds of abstractions learned by neurons. In image
classification, we find that many neurons learn highly abstract but
semantically coherent visual concepts, while other polysemantic neurons detect
multiple unrelated features; in natural language inference (NLI), neurons learn
shallow lexical heuristics from dataset biases. Second, we see whether
compositional explanations give us insight into model performance: vision
neurons that detect human-interpretable concepts are positively correlated with
task performance, while NLI neurons that fire for shallow heuristics are
negatively correlated with task performance. Finally, we show how compositional
explanations provide an accessible way for end users to produce simple
"copy-paste" adversarial examples that change model behavior in predictable
ways. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"stat.ML"
] |
Recently, Zero-shot Sketch-based Image Retrieval (ZS-SBIR) has attracted the
attention of the computer vision community due to its real-world applications,
and its more realistic and challenging setting compared to SBIR. ZS-SBIR
inherits the main challenges of multiple computer vision problems including
content-based Image Retrieval (CBIR), zero-shot learning and domain adaptation.
The majority of previous studies using deep neural networks have achieved
improved results through either projecting sketch and images into a common
low-dimensional space or transferring knowledge from seen to unseen classes.
However, those approaches are trained with complex frameworks composed of
multiple deep convolutional neural networks (CNNs) and are dependent on
category-level word labels. This increases the requirements on training
resources and datasets. In comparison, we propose a simple and efficient
framework that does not require high computational training resources, and can
be trained on datasets without semantic categorical labels. Furthermore, at
training and inference stages our method only uses a single CNN. In this work,
a pre-trained ImageNet CNN (e.g., ResNet50) is fine-tuned with three proposed
learning objectives: domain-aware quadruplet loss, semantic classification loss,
and semantic knowledge preservation loss. The domain-aware quadruplet and
semantic classification losses are introduced to learn discriminative, semantic
and domain-invariant features by treating ZS-SBIR as an object detection and
verification problem. ... | [
"cs.CV"
] |
Multiview stereo aims to reconstruct scene depth from images acquired by a
camera under arbitrary motion. Recent methods address this problem through deep
learning, which can utilize semantic cues to deal with challenges such as
textureless and reflective regions. In this paper, we present a convolutional
neural network called DPSNet (Deep Plane Sweep Network) whose design is
inspired by best practices of traditional geometry-based approaches for dense
depth reconstruction. Rather than directly estimating depth and/or optical flow
correspondence from image pairs as done in many previous deep learning methods,
DPSNet takes a plane sweep approach that involves building a cost volume from
deep features using the plane sweep algorithm, regularizing the cost volume via
a context-aware cost aggregation, and regressing the dense depth map from the
cost volume. The cost volume is constructed using a differentiable warping
process that allows for end-to-end training of the network. Through the
effective incorporation of conventional multiview stereo concepts within a deep
learning framework, DPSNet achieves state-of-the-art reconstruction results on
a variety of challenging datasets. | [
"cs.CV",
"cs.RO"
] |
Inspired by the fact that human eyes continue to develop tracking ability in
early and middle childhood, we propose to use tracking as a proxy task for a
computer vision system to learn the visual representations. Modelled on the
Catch game played by children, we design a Catch-the-Patch (CtP) game for a
3D-CNN model to learn visual representations that would help with video-related
tasks. In the proposed pretraining framework, we cut an image patch from a
given video and let it scale and move according to a pre-set trajectory. The
proxy task is to estimate the position and size of the image patch in a
sequence of video frames, given only the target bounding box in the first
frame. We discover that using multiple image patches simultaneously brings
clear benefits. We further increase the difficulty of the game by randomly
making patches invisible. Extensive experiments on mainstream benchmarks
demonstrate the superior performance of CtP against other video pretraining
methods. In addition, CtP-pretrained features are less sensitive to domain gaps
than those trained by a supervised action recognition task. When both trained
on Kinetics-400, we are pleasantly surprised to find that the CtP-pretrained
representation achieves much higher action classification accuracy than its
fully supervised counterpart on the Something-Something dataset. Code is available
online: github.com/microsoft/CtP. | [
"cs.CV"
] |
In this paper, we show how uncertainty estimation can be leveraged to enable
safety critical image segmentation in autonomous driving, by triggering a
fallback behavior if a target accuracy cannot be guaranteed. We introduce a new
uncertainty measure based on disagreeing predictions as measured by a
dissimilarity function. We propose to estimate this dissimilarity by training a
deep neural architecture in parallel to the task-specific network. It allows
this observer to be dedicated to the uncertainty estimation, and let the
task-specific network make predictions. We propose to use self-supervision to
train the observer, which implies that our method does not require additional
training data. We show experimentally that our proposed approach is much less
computationally intensive at inference time than competing methods (e.g.
MCDropout), while delivering better results on safety-oriented evaluation
metrics on the CamVid dataset, especially in the case of glare artifacts. | [
"cs.CV"
] |
On-device Deep Neural Networks (DNNs) have recently gained more attention due
to the increasing computing power of the mobile devices and the number of
applications in Computer Vision (CV), Natural Language Processing (NLP), and
Internet of Things (IoT). Unfortunately, the existing efficient convolutional
neural network (CNN) architectures designed for CV tasks are not directly
applicable to NLP tasks and the tiny Recurrent Neural Network (RNN)
architectures have been designed primarily for IoT applications. In NLP
applications, although model compression has seen initial success in on-device
text classification, there are at least three major challenges yet to be
addressed: adversarial robustness, explainability, and personalization. Here we
attempt to tackle these challenges by designing a new training scheme for model
compression and adversarial robustness, including the optimization of an
explainable feature mapping objective, a knowledge distillation objective, and
an adversarial robustness objective. The resulting compressed model is
personalized using on-device private training data via fine-tuning. We perform
extensive experiments to compare our approach with both compact RNN (e.g.,
FastGRNN) and compressed RNN (e.g., PRADO) architectures in both natural and
adversarial NLP test settings. | [
"cs.LG"
] |
Object goal navigation aims to steer an agent towards a target object based
on observations of the agent. It is of pivotal importance to design effective
visual representations of the observed scene in determining navigation actions.
In this paper, we introduce a Visual Transformer Network (VTNet) for learning
informative visual representation in navigation. VTNet is a highly effective
structure that embodies two key properties for visual representations: First,
the relationships among all the object instances in a scene are exploited;
Second, the spatial locations of objects and image regions are emphasized so
that directional navigation signals can be learned. Furthermore, we also
develop a pre-training scheme to associate the visual representations with
navigation signals, and thus facilitate navigation policy learning. In a
nutshell, VTNet embeds object and region features with their location cues as
spatial-aware descriptors and then incorporates all the encoded descriptors
through attention operations to achieve informative representation for
navigation. Given such visual representations, agents are able to explore the
correlations between visual observations and navigation actions. For example,
an agent would prioritize "turning right" over "turning left" when the visual
representation emphasizes the right side of the activation map. Experiments in
the artificial environment AI2-Thor demonstrate that VTNet significantly
outperforms state-of-the-art methods in unseen testing environments. | [
"cs.CV"
] |
Personality computing and affective computing, where the recognition of
personality traits is essential, have gained increasing interest and attention
in many research areas recently. We propose a novel approach to recognize the
Big Five personality traits of people from videos. Personality and emotion
affect the speaking style, facial expressions, body movements, and linguistic
factors in social contexts, and they are affected by environmental elements. We
develop a multimodal system to recognize apparent personality based on various
modalities such as the face, environment, audio, and transcription features. We
use modality-specific neural networks that learn to recognize the traits
independently and we obtain a final prediction of apparent personality with a
feature-level fusion of these networks. We employ pre-trained deep
convolutional neural networks such as ResNet and VGGish networks to extract
high-level features and Long Short-Term Memory networks to integrate temporal
information. We train the large model consisting of modality-specific
subnetworks using a two-stage training process. We first train the subnetworks
separately and then fine-tune the overall model using these trained networks.
We evaluate the proposed method using ChaLearn First Impressions V2 challenge
dataset. Our approach obtains the best overall "mean accuracy" score, averaged
over five personality traits, compared to the state-of-the-art. | [
"cs.CV"
] |
Facial expression recognition has been an active area in computer vision with
application areas including animation, social robots, personalized banking,
etc. In this study, we explore the problem of image classification for
detecting facial expressions based on features extracted from pre-trained
convolutional neural networks trained on ImageNet database. Features are
extracted and transferred to a Linear Support Vector Machine for
classification. All experiments are performed on two publicly available
datasets, the JAFFE and CK+ databases. The results show that representations
learned from pre-trained networks for a task such as object recognition can be
transferred, and used for facial expression recognition. Furthermore, for a
small dataset, using features from earlier layers of the VGG19 network provides
better classification accuracy. Accuracies of 92.26% and 92.86% were achieved
for the CK+ and JAFFE datasets respectively. | [
"cs.CV"
] |
A recent line of work showed that various forms of convolutional kernel
methods can be competitive with standard supervised deep convolutional networks
on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while
being more amenable to theoretical analysis. In this work, we highlight the
importance of a data-dependent feature extraction step that is key to
obtaining good performance in convolutional kernel methods. This step typically
corresponds to a whitened dictionary of patches, and gives rise to
data-driven convolutional kernel methods. We extensively study its effect,
demonstrating it is the key ingredient for high performance of these methods.
Specifically, we show that one of the simplest instances of such kernel
methods, based on a single layer of image patches followed by a linear
classifier, already obtains classification accuracies on CIFAR-10 in the
same range as previous more sophisticated convolutional kernel methods. We
scale this method to the challenging ImageNet dataset, showing such a simple
approach can exceed all existing non-learned representation methods. This is a
new baseline for object recognition without representation learning methods,
that initiates the investigation of convolutional kernel models on ImageNet. We
conduct experiments to analyze the dictionary we used; our ablations show that
it exhibits low-dimensional properties. | [
"cs.CV",
"cs.LG"
] |
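The data-dependent step the abstract highlights, whitening a bank of training patches and encoding images by rectified similarity to a patch dictionary, might look roughly as follows. The ZCA formulation, the epsilon regularizer, and the cosine-style encoding are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def whiten_patches(patches, eps=1e-3):
    """ZCA-whiten an (n, d) matrix of flattened image patches."""
    patches = patches - patches.mean(axis=0)
    cov = patches.T @ patches / len(patches)
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return patches @ W, W

def patch_features(image_patches, dictionary):
    """Encode patches by rectified similarity to a whitened patch dictionary."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sims = image_patches @ d.T
    return np.maximum(sims, 0.0)   # a simple one-layer nonlinearity before pooling

# Usage sketch: sample random training patches, whiten them, keep a random subset
# as the dictionary, then encode (and spatially pool) each image's patches before
# feeding the pooled features to a linear classifier.
```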
Despite the attention marker-less pose estimation has attracted in recent
years, marker-based approaches still provide unbeatable accuracy under
controlled environmental conditions. Thus, they are used in many fields such as
robotics or biomedical applications but are primarily implemented through
classical approaches, which require lots of heuristics and parameter tuning for
reliable performance under different environments. In this work, we propose
MarkerPose, a robust, real-time pose estimation system based on a planar target
of three circles and a stereo vision system. MarkerPose is meant for
high-accuracy pose estimation applications. Our method consists of two deep
neural networks for marker point detection: a SuperPoint-like network for
pixel-level accuracy keypoint localization and classification, and EllipSegNet,
a lightweight ellipse segmentation network that we introduce for
sub-pixel-level accuracy keypoint detection. The marker's pose is estimated through stereo
triangulation. The target point detection is robust to low lighting and motion
blur conditions. We compared MarkerPose with a detection method based on
classical computer vision techniques using a robotic arm for validation. The
results show our method provides better accuracy than the classical technique.
Finally, we demonstrate the suitability of MarkerPose in a 3D freehand
ultrasound system, which is an application where highly accurate pose
estimation is required. Code is available in Python and C++ at
https://github.com/jhacsonmeza/MarkerPose. | [
"cs.CV"
] |
Although many spectral unmixing models have been developed to address
spectral variability caused by variable incident illuminations, the mechanism
of the spectral variability is still unclear. This paper proposes an unmixing
model, named illumination invariant spectral unmixing (IISU). IISU makes the
first attempt to use the radiance hyperspectral data and a LiDAR-derived
digital surface model (DSM) in order to physically explain variable
illuminations and shadows in the unmixing framework. Incident angles, sky
factors, visibility from the sun derived from the LiDAR-derived DSM support the
explicit explanation of endmember variability in the unmixing process from
radiance perspective. The proposed model was efficiently solved by a
straightforward optimization procedure. The unmixing results showed that the
other state-of-the-art unmixing models did not work well especially in the
shaded pixels. On the other hand, the proposed model estimated more accurate
abundances and shadow-compensated reflectance than the existing models. | [
"cs.CV",
"eess.IV"
] |
We propose a generative model for single-channel EEG that incorporates the
constraints experts actively enforce during visual scoring. The framework takes
the form of a dynamic Bayesian network with depth in both the latent variables
and the observation likelihoods: while the hidden variables control the
durations, state transitions, and robustness, the observation architectures
parameterize Normal-Gamma distributions. The resulting model allows for time
series segmentation into local, reoccurring dynamical regimes by exploiting
probabilistic models and deep learning. Unlike typical detectors, our model
takes the raw data (up to resampling) without pre-processing (e.g., filtering,
windowing, thresholding) or post-processing (e.g., event merging). This not
only makes the model appealing to real-time applications, but it also yields
interpretable hyperparameters that are analogous to known clinical criteria. We
derive algorithms for exact, tractable inference as a special case of
Generalized Expectation Maximization via dynamic programming and
backpropagation. We validate the model on three public datasets and provide
support that more complex models are able to surpass state-of-the-art detectors
while being transparent, auditable, and generalizable. | [
"cs.LG",
"eess.SP"
] |
The success of deep neural networks often relies on a large amount of labeled
examples, which can be difficult to obtain in many real scenarios. To address
this challenge, unsupervised methods are strongly preferred for training neural
networks without using any labeled data. In this paper, we present a novel
paradigm of unsupervised representation learning by Auto-Encoding
Transformation (AET) in contrast to the conventional Auto-Encoding Data (AED)
approach. Given a randomly sampled transformation, AET seeks to predict it
merely from the encoded features as accurately as possible at the output end.
The idea is the following: as long as the unsupervised features successfully
encode the essential information about the visual structures of original and
transformed images, the transformation can be well predicted. We will show that
this AET paradigm allows us to instantiate a large variety of transformations,
from parameterized, to non-parameterized and GAN-induced ones. Our experiments
show that AET greatly improves over existing unsupervised approaches, setting
new state-of-the-art performances that are much closer to the upper bounds set
by their fully supervised counterparts on the CIFAR-10, ImageNet and Places datasets. | [
"cs.CV"
] |
Reliable curb detection is critical for safe autonomous driving in urban
contexts. Curb detection and tracking are also useful in vehicle localization
and path planning. Past work utilized a 3D LiDAR sensor to determine accurate
distance information and the geometric attributes of curbs. However, such an
approach requires dense point cloud data and is also vulnerable to false
positives from obstacles present on both road and off-road areas. In this
paper, we propose an approach to detect and track curbs by fusing together data
from multiple sensors: sparse LiDAR data, a mono camera and low-cost ultrasonic
sensors. The detection algorithm is based on a single 3D LiDAR and a mono
camera sensor used to detect candidate curb features and it effectively removes
false positives arising from surrounding static and moving obstacles. The
detection accuracy of the tracking algorithm is boosted by using Kalman
filter-based prediction and fusion with lateral distance information from
low-cost ultrasonic sensors. We next propose a line-fitting algorithm that
yields robust results for curb locations. Finally, we demonstrate the practical
feasibility of our solution by testing in different road environments and
evaluating our implementation in a real vehicle\footnote{Demo video clips
demonstrating our algorithm have been uploaded to Youtube:
https://www.youtube.com/watch?v=w5MwsdWhcy4,
https://www.youtube.com/watch?v=Gd506RklfG8.}. Our algorithm maintains over
90\% accuracy within 4.5-22 meters and 0-14 meters for the KITTI dataset and
our dataset respectively, and its average processing time per frame is
approximately 10 ms on an Intel i7 x86 CPU and 100 ms on an NVIDIA Xavier board. | [
"cs.CV",
"cs.AI",
"cs.RO",
"eess.SP"
] |
Dimensionality reduction (DR) on the manifold includes effective methods
which project the data from an implicit relational space onto a vectorial
space. Despite the achievements in this area, these algorithms suffer
from the lack of interpretation of the projection dimensions. Therefore, it is
often difficult to explain the physical meaning behind the embedding
dimensions. In this research, we propose the interpretable kernel DR algorithm
(I-KDR) as a new algorithm which maps the data from the feature space to a
lower dimensional space where the classes are more condensed with less
overlapping. Besides, the algorithm creates the dimensions upon local
contributions of the data samples, which makes it easier to interpret them by
class labels. Additionally, we efficiently fuse DR with the feature selection
task to select the features of the original space that are most relevant to the
discriminative objective. Based on the empirical evidence, I-KDR provides
better interpretations for embedding dimensions as well as higher
discriminative performance in the embedded space compared to the
state-of-the-art and popular DR algorithms. | [
"cs.LG",
"stat.ML"
] |
Electroencephalographic (EEG) monitoring of neural activity is widely used
for sleep disorder diagnostics and research. The standard of care is to
manually classify 30-second epochs of EEG time-domain traces into 5 discrete
sleep stages. Unfortunately, this scoring process is subjective and
time-consuming, and the defined stages do not capture the heterogeneous
landscape of healthy and clinical neural dynamics. This motivates the search
for a data-driven and principled way to identify the number and composition of
salient, reoccurring brain states present during sleep. To this end, we propose
a Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM), combined with
wide-sense stationary (WSS) time series spectral estimation to construct a
generative model for personalized subject sleep states. In addition, we employ
multitaper spectral estimation to further reduce the large variance of the
spectral estimates inherent to finite-length EEG measurements. By applying our
method to both simulated and human sleep data, we arrive at three main results:
1) a Bayesian nonparametric automated algorithm that recovers general temporal
dynamics of sleep, 2) identification of subject-specific "microstates" within
canonical sleep stages, and 3) discovery of stage-dependent sub-oscillations
with shared spectral signatures across subjects. | [
"stat.ML",
"cs.LG",
"eess.SP",
"stat.AP"
] |
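The multitaper step is standard enough to sketch: average periodograms computed under orthogonal DPSS (Slepian) tapers to reduce the variance of the spectral estimate. A minimal SciPy/NumPy sketch; the time-bandwidth product, the taper-count rule K = 2NW - 1, and the toy EEG-like signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=4.0):
    """Average periodograms over DPSS tapers to reduce spectral variance."""
    n = len(x)
    k = int(2 * nw) - 1                        # common choice: K = 2NW - 1 tapers
    tapers = dpss(n, nw, k)                    # (k, n) orthogonal taper windows
    spectra = np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs            # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Example: 30 s of a 10 Hz alpha-like rhythm in noise, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
freqs, psd = multitaper_psd(eeg, fs)
print(freqs[np.argmax(psd)])   # ~10.0
```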
We propose a new contextual-compositional neural network layer that handles
out-of-vocabulary (OOV) words in natural language processing (NLP) tagging
tasks. This layer consists of a model that attends to both the character
sequence and the context in which the OOV words appear. We show that our model
learns to generate task-specific \textit{and} sentence-dependent OOV word
representations without the need for pre-training on an embedding table, unlike
previous attempts. We insert our layer in the state-of-the-art tagging model of
\citet{plank2016multilingual} and thoroughly evaluate its contribution on 23
different languages on the task of jointly tagging part-of-speech and
morphosyntactic attributes. Our OOV handling method successfully improves the
performance of this model on every language but one, achieving a new state of
the art on the Universal Dependencies Dataset 1.4. | [
"cs.LG",
"stat.ML"
] |
Referring Expression Comprehension (REC) has become one of the most important
tasks in visual reasoning, since it is an essential step for many
vision-and-language tasks such as visual question answering. However, it has
not been widely used in many downstream tasks because 1) two-stage methods
incur heavy computation cost and inevitable error accumulation, and 2)
one-stage methods have to depend on many hyper-parameters (such as anchors)
to generate bounding boxes. In this paper, we present a proposal-free one-stage
(PFOS) model that is able to regress the region-of-interest from the image,
based on a textual query, in an end-to-end manner. Instead of using the
dominant anchor proposal fashion, we directly take the dense-grid of an image
as input for a cross-attention transformer that learns grid-word
correspondences. The final bounding box is predicted directly from the image
without the time-consuming anchor selection process that previous methods
suffer from. Our model achieves state-of-the-art performance on four referring
expression datasets with higher efficiency, compared to the previous best
one-stage and two-stage methods. | [
"cs.CV"
] |
In this paper, we propose deep learning algorithms for ranking response
surfaces, with applications to optimal stopping problems in financial
mathematics. The problem of ranking response surfaces is motivated by
estimating optimal feedback policy maps in stochastic control problems, aiming
to efficiently find the index associated to the minimal response across the
entire continuous input space $\mathcal{X} \subseteq \mathbb{R}^d$. By
considering points in $\mathcal{X}$ as pixels and indices of the minimal
surfaces as labels, we recast the problem as an image segmentation problem,
which assigns a label to every pixel in an image such that pixels with the same
label share certain characteristics. This provides an alternative method for
efficiently solving the problem instead of using sequential design in our
previous work [R. Hu and M. Ludkovski, SIAM/ASA Journal on Uncertainty
Quantification, 5 (2017), 212--239].
Deep learning algorithms are scalable, parallel and model-free, i.e., no
parametric assumptions are needed on the response surfaces. Considering ranking
response surfaces as image segmentation allows one to use a broad class of deep
neural networks, e.g., UNet, SegNet, DeconvNet, which have been widely applied
and empirically shown to achieve high accuracy in the field. We also
systematically study the dependence of deep learning algorithms on the input
data generated on uniform grids or by sequential design sampling, and observe
that the performance of deep learning is {\it not} sensitive to the noise and
locations (close to/away from boundaries) of training data. We present a few
examples including synthetic ones and the Bermudan option pricing problem to
show the efficiency and accuracy of this method. | [
"stat.ML",
"cs.LG",
"q-fin.CP",
"60G40, 65C60, 68T99"
] |
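At the data level, the recasting described above reduces to turning L sampled response surfaces on a grid into a per-pixel argmin label image that a segmentation network (e.g. a UNet) is trained to predict. A minimal NumPy sketch with an illustrative toy example:

```python
import numpy as np

def argmin_label_map(surfaces):
    """Turn L sampled response surfaces on a grid into a per-pixel label image.

    surfaces: (L, H, W) array of (possibly noisy) surface evaluations; the label
    of each grid point is the index of the minimal surface there, which is
    exactly the segmentation target the network learns to predict."""
    return np.argmin(surfaces, axis=0)

# Toy example with two surfaces on a 2-D grid.
xs = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(xs, xs)
surfaces = np.stack([X**2 + Y**2,            # surface 0: minimal near the origin
                     0.5 + 0.0 * X])         # surface 1: a flat alternative
labels = argmin_label_map(surfaces)          # 0 where x^2 + y^2 < 0.5, else 1
print(labels.shape, np.unique(labels))
```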
The classical development of neural networks has primarily focused on
learning mappings between finite dimensional Euclidean spaces or finite sets.
We propose a generalization of neural networks tailored to learn operators
mapping between infinite dimensional function spaces. We formulate the
approximation of operators by composition of a class of linear integral
operators and nonlinear activation functions, so that the composed operator can
approximate complex nonlinear operators. We prove a universal approximation
theorem for our construction. Furthermore, we introduce four classes of
operator parameterizations: graph-based operators, low-rank operators,
multipole graph-based operators, and Fourier operators and describe efficient
algorithms for computing with each one. The proposed neural operators are
resolution-invariant: they share the same network parameters between different
discretizations of the underlying function spaces and can be used for zero-shot
super-resolutions. Numerically, the proposed models show superior performance
compared to existing machine learning based methodologies on Burgers' equation,
Darcy flow, and the Navier-Stokes equation, while being several orders of
magnitude faster than conventional PDE solvers. | [
"cs.LG",
"cs.NA",
"math.NA"
] |
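Of the four operator parameterizations listed, the Fourier one is the easiest to sketch: transform to frequency space, apply learned complex weights to a truncated set of modes, and transform back. A minimal 1-D PyTorch sketch in the spirit of that construction; channel counts, initialization, and the mode cutoff are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT -> multiply the lowest `modes` frequencies by learned
    complex weights -> inverse FFT. Truncating to a fixed number of modes is what
    makes the layer independent of the discretization of the input function."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, n_grid)
        x_ft = torch.fft.rfft(x)               # (batch, channels, n_grid//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            'bim,iom->bom', x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))
```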
Large amounts of labeled data are typically required to train deep learning
models. For many real-world problems, however, acquiring additional data can be
expensive or even impossible. We present semi-supervised deep kernel learning
(SSDKL), a semi-supervised regression model based on minimizing predictive
variance in the posterior regularization framework. SSDKL combines the
hierarchical representation learning of neural networks with the probabilistic
modeling capabilities of Gaussian processes. By leveraging unlabeled data, we
show improvements on a diverse set of real-world regression tasks over
supervised deep kernel learning and semi-supervised methods such as VAT and
mean teacher adapted for regression. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We initially propose a deep learning approach for foreign object inpainting
in smartphone-camera captured chest radiographs, utilizing the cheXphoto
dataset. Foreign objects, which can significantly affect the quality of a
computer-aided diagnostic prediction, are captured under various settings. In
this paper, we use a multi-stage method to tackle both the removal and
inpainting of foreign objects in chest radiographs. First, an object detection
model is trained to separate the foreign objects from the given image.
Subsequently, the binary mask of each object is extracted utilizing a
segmentation model. Each pair of binary mask and extracted object is then used
for inpainting. Finally, the inpainted regions are merged back into the
original image, resulting in a clean output free of foreign objects. We achieve
state-of-the-art accuracy, and the experimental results suggest possible new
applications of this method for chest X-ray image analysis. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Over the last few decades, artificial intelligence research has made
tremendous strides, but it still heavily relies on fixed datasets in stationary
environments. Continual learning is a growing field of research that examines
how AI systems can learn sequentially from a continuous stream of linked data
in the same way that biological systems do. Simultaneously, fake media such as
deepfakes and synthetic face images have emerged as a significant issue for
current multimedia technologies. Recently, numerous methods have been proposed
that can detect deepfakes with high accuracy. However, they suffer
significantly due to their reliance on fixed datasets in limited evaluation
settings. Therefore, in this work, we apply continual learning to neural
networks' learning dynamics, emphasizing its potential to increase data
efficiency significantly. We propose
Continual Representation using Distillation (CoReD) method that employs the
concept of Continual Learning (CL), Representation Learning (RL), and Knowledge
Distillation (KD). We design CoReD to perform sequential domain adaptation
tasks on new deepfake and GAN-generated synthetic face datasets, while
effectively minimizing the catastrophic forgetting in a teacher-student model
setting. Our extensive experimental results demonstrate that our method is
efficient at domain adaptation to detect low-quality deepfakes videos and
GAN-generated images from several datasets, outperforming the-state-of-art
baseline methods. | [
"cs.CV",
"cs.CR",
"cs.LG",
"cs.MM",
"I.4.9; I.5.4"
] |
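For illustration only (not the CoReD code), a standard teacher-student distillation term of the kind the abstract describes, where a frozen teacher trained on earlier domains regularizes the student on a new domain; the temperature `T` and weight `lam` are assumptions:

```python
# Hypothetical sketch (not the authors' code): a distillation loss for
# continual deepfake detection in a teacher-student setting.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, lam=0.5):
    # Task loss on the new domain (e.g., real vs. fake).
    ce = F.cross_entropy(student_logits, labels)
    # Distillation loss: match the teacher's softened predictions to
    # retain knowledge of earlier domains.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return ce + lam * kd
```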
Object detection has seen tremendous progress in recent years. However,
current algorithms do not generalize well when tested on diverse data
distributions. We address the problem of incremental learning in object
detection on the India Driving Dataset (IDD). Our approach involves using
multiple domain-specific classifiers and effective transfer learning techniques
focused on avoiding catastrophic forgetting.
IDD and BDD100K dataset. Results show the effectiveness of our domain adaptive
approach in the case of domain shifts in environments. | [
"cs.CV"
] |
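A minimal sketch, under assumed interfaces, of the multiple-domain-specific-classifier pattern mentioned above: a shared backbone with one head per domain, so adapting to a new domain adds a head instead of overwriting weights tuned for earlier domains. This is illustrative, not the paper's architecture:

```python
# Hypothetical sketch (not the paper's code): per-domain classifier heads
# on a shared backbone to limit catastrophic forgetting across domains.
import torch.nn as nn

class MultiDomainClassifier(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes_per_domain):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, n)
            for name, n in num_classes_per_domain.items()})

    def forward(self, images, domain):
        feats = self.backbone(images)
        return self.heads[domain](feats)   # only the matching head is used
```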
State-of-the-art results on image recognition tasks are achieved using
over-parameterized learning algorithms that (nearly) perfectly fit the training
set and are known to fit well even random labels. This tendency to memorize the
labels of the training data is not explained by existing theoretical analyses.
Memorization of the training data also presents significant privacy risks when
the training data contains sensitive personal information, and thus it is
important to understand whether such memorization is necessary for accurate
learning.
We provide the first conceptual explanation and a theoretical model for this
phenomenon. Specifically, we demonstrate that for natural data distributions
memorization of labels is necessary for achieving close-to-optimal
generalization error. Crucially, even labels of outliers and noisy labels need
to be memorized. The model is motivated and supported by the results of several
recent empirical works. In our model, data is sampled from a mixture of
subpopulations and our results show that memorization is necessary whenever the
distribution of subpopulation frequencies is long-tailed. Image and text data
is known to be long-tailed and therefore our results establish a formal link
between these empirical phenomena. Our results allow us to quantify the cost
of limiting memorization in learning and explain the disparate effects that
privacy and model compression have on different subgroups. | [
"cs.LG",
"stat.ML"
] |
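As a hedged sketch of the setting (not the paper's exact formalism), the data distribution can be written as a mixture of subpopulations with long-tailed frequencies, where a subpopulation seen only once in the sample can essentially only be predicted correctly by memorizing its label:

```latex
% A sketch, not the paper's exact statement: subpopulation frequencies
% pi_j follow a long-tailed (e.g. power-law) prior, and an algorithm A
% that does not memorize the labels of singleton subpopulations pays
% roughly their total frequency in excess generalization error.
\[
  \mathcal{D} \;=\; \sum_{j=1}^{N} \pi_j\, \mathcal{D}_j,
  \qquad \pi_j \propto j^{-\alpha}, \ \alpha > 1 ,
\]
\[
  \operatorname{err}(\mathcal{A}) \;\gtrsim\;
  \sum_{j\,:\,n_j = 1} \pi_j
  \quad \text{unless } \mathcal{A} \text{ memorizes the singletons' labels,}
\]
where $n_j$ counts the training examples drawn from subpopulation $j$.
```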
The outbreak of the coronavirus disease 2019 (COVID-19) has now spread
throughout the globe infecting over 150 million people and causing the death of
over 3.2 million people. Thus, there is an urgent need to study the dynamics of
epidemiological models to gain a better understanding of how such diseases
spread. While epidemiological models can be computationally expensive, recent
advances in machine learning techniques have given rise to neural networks with
the ability to learn and predict complex dynamics at reduced computational
costs. Here we introduce two digital twins of a SEIRS model applied to an
idealised town. The SEIRS model has been modified to take account of spatial
variation and, where possible, the model parameters are based on official virus
spreading data from the UK. We compare predictions from a data-corrected
Bidirectional Long Short-Term Memory network and a predictive Generative
Adversarial Network. The predictions given by these two frameworks are accurate
when compared to the original SEIRS model data.
Additionally, these frameworks are data-agnostic and could be applied to
towns, idealised or real, in the UK or in other countries. Also, more
compartments could be included in the SEIRS model, in order to study more
realistic epidemiological behaviour. | [
"cs.LG",
"physics.soc-ph"
] |
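For context, a minimal non-spatial SEIRS system integrated with SciPy; all rate constants below are illustrative placeholders, not the UK-derived parameters or the spatial extension used in the work above:

```python
# Hypothetical sketch (not the paper's model): a basic SEIRS compartment
# model with waning immunity, integrated numerically.
from scipy.integrate import solve_ivp

def seirs(t, y, beta=0.4, sigma=0.2, gamma=0.1, xi=0.01):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N + xi * R      # susceptible; xi = waning immunity
    dE = beta * S * I / N - sigma * E    # exposed -> infectious at rate sigma
    dI = sigma * E - gamma * I           # infectious -> recovered at rate gamma
    dR = gamma * I - xi * R
    return [dS, dE, dI, dR]

# One year of dynamics for an idealised town of 10,000 people.
sol = solve_ivp(seirs, (0, 365), [9990, 0, 10, 0], dense_output=True)
```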
Occlusion and pose variations, which can change facial appearance
significantly, are two major obstacles for automatic Facial Expression
Recognition (FER). Though automatic FER has made substantial progress in the
past few decades, the occlusion-robust and pose-invariant aspects of FER have
received relatively little attention, especially in real-world scenarios. This
paper addresses the real-world pose and occlusion robust FER problem with
three-fold contributions. First, to stimulate the research of FER under
real-world occlusions and variant poses, we build several in-the-wild facial
expression datasets with manual annotations for the community. Second, we
propose a novel Region Attention Network (RAN), to adaptively capture the
importance of facial regions for occlusion and pose variant FER. The RAN
aggregates and embeds a varied number of region features produced by a backbone
convolutional neural network into a compact fixed-length representation. Last,
inspired by the fact that facial expressions are mainly defined by facial
action units, we propose a region biased loss to encourage high attention
weights for the most important regions. We validate our RAN and region biased
loss on both our built test datasets and four popular datasets: FERPlus,
AffectNet, RAF-DB, and SFEW. Extensive experiments show that our RAN and region
biased loss largely improve the performance of FER with occlusion and variant
pose. Our method also achieves state-of-the-art results on FERPlus, AffectNet,
RAF-DB, and SFEW. Code and the collected test data will be publicly available. | [
"cs.CV"
] |
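As an illustrative sketch (not the authors' RAN implementation), attention-weighted pooling of region features into a fixed-length vector, plus a margin-style region biased term encouraging the best crop's weight to exceed the full-face weight; the convention that index 0 is the full face and the margin value are assumptions:

```python
# Hypothetical sketch (not the authors' code): region attention pooling
# and a region biased loss on the attention weights.
import torch
import torch.nn as nn

class RegionAttentionPool(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # self-attention score per region

    def forward(self, region_feats):          # (batch, n_regions, feat_dim)
        w = torch.sigmoid(self.score(region_feats))    # (batch, n_regions, 1)
        pooled = (w * region_feats).sum(1) / w.sum(1)  # fixed-length output
        return pooled, w.squeeze(-1)

def region_biased_loss(weights, face_idx=0, margin=0.02):
    # Encourage the most important crop to outweigh the full-face region.
    best_crop = weights[:, 1:].max(dim=1).values
    return torch.relu(weights[:, face_idx] + margin - best_crop).mean()
```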
Non-local self similarity (NSS) is a powerful prior of natural images for
image denoising. Most of existing denoising methods employ similar patches,
which is a patch-level NSS prior. In this paper, we take one step forward by
introducing a pixel-level NSS prior, i.e., searching similar pixels across a
non-local region. This is motivated by the fact that finding closely similar
pixels is more feasible than similar patches in natural images, which can be
used to enhance image denoising performance. With the introduced pixel-level
NSS prior, we propose an accurate noise level estimation method, and then
develop a blind image denoising method based on the lifting Haar transform and
Wiener filtering techniques. Experiments on benchmark datasets demonstrate
that the proposed method achieves much better performance than previous
non-deep methods, and is still competitive with existing state-of-the-art deep
learning based methods on real-world image denoising. The code is publicly
available at https://github.com/njusthyk1972/NLH. | [
"cs.CV"
] |
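A toy sketch of the pixel-level NSS idea (not the authors' method, which additionally uses a lifting Haar transform): for each pixel, gather its most similar pixels in a non-local window and apply Wiener-style shrinkage to the group; the window size, group size, and grayscale assumption are illustrative:

```python
# Hypothetical sketch (not the authors' code): pixel-level non-local
# similar-pixel grouping followed by Wiener-style shrinkage.
import numpy as np

def denoise_pixel_nss(img, sigma, win=10, k=16):
    """img: (H, W) grayscale array; sigma: assumed noise standard deviation."""
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            y0, y1 = max(0, i - win), min(h, i + win + 1)
            x0, x1 = max(0, j - win), min(w, j + win + 1)
            region = img[y0:y1, x0:x1].ravel().astype(np.float64)
            # k pixels most similar in intensity to the center pixel.
            group = region[np.argsort(np.abs(region - img[i, j]))[:k]]
            mean, var = group.mean(), group.var()
            gain = max(var - sigma ** 2, 0.0) / max(var, 1e-8)  # Wiener gain
            out[i, j] = mean + gain * (img[i, j] - mean)
    return out
```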
In this paper we present several architectural and optimization recipes for
generative adversarial network (GAN) based facial semantic inpainting. Current
benchmark models are sensitive to the initial solutions of the non-convex
optimization criterion of GAN-based inpainting. We present an end-to-end
trainable parametric network that deterministically starts from good initial
solutions, leading to more photorealistic reconstructions with a significant
optimization speed-up. For the first time, we show how to efficiently extend
GAN-based single-image inpainting models to sequences by a) learning to
initialize a temporal window of solutions with a recurrent neural network and
b) imposing a temporal smoothness loss (during iterative optimization) to
respect the redundancy in the temporal dimension of a sequence. We conduct
comprehensive empirical evaluations on CelebA images and pseudo-sequences,
followed by real-life videos from the VidTIMIT dataset. The proposed method
significantly outperforms the current GAN-based state-of-the-art in
reconstruction quality with a simultaneous speedup of over 15$\times$. We also
show that our proposed model better preserves facial identity in a sequence,
even without explicitly using any face recognition module during training. | [
"cs.CV"
] |
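As a hedged illustration of the temporal smoothness loss mentioned in b) above (not the authors' exact formulation), a quadratic penalty on differences between the latent codes of neighboring frames being optimized jointly; the weight is an assumption:

```python
# Hypothetical sketch (not the authors' code): temporal smoothness
# penalty over a window of per-frame latent codes.
import torch

def temporal_smoothness_loss(z_seq, weight=0.1):
    """z_seq: (T, latent_dim) latent codes for T consecutive frames."""
    diffs = z_seq[1:] - z_seq[:-1]           # neighboring-frame differences
    return weight * diffs.pow(2).sum(dim=1).mean()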
The research on human emotion under multimedia stimulation based on
physiological signals is an emerging field, and important progress has been
achieved for emotion recognition based on multi-modal signals. However, it is
challenging to make full use of the complementarity among
spatial-spectral-temporal domain features for emotion recognition, as well as
to model the heterogeneity and correlation among multi-modal signals. In this
paper, we propose a novel two-stream heterogeneous graph recurrent neural
network, named HetEmotionNet, fusing multi-modal physiological signals for
emotion recognition. Specifically, HetEmotionNet consists of the
spatial-temporal stream and the spatial-spectral stream, which can fuse
spatial-spectral-temporal domain features in a unified framework. Each stream
is composed of the graph transformer network for modeling the heterogeneity,
the graph convolutional network for modeling the correlation, and the gated
recurrent unit for capturing the temporal domain or spectral domain dependency.
Extensive experiments on two real-world datasets demonstrate that our proposed
model achieves better performance than state-of-the-art baselines. | [
"cs.LG",
"cs.AI",
"cs.HC",
"cs.MM"
] |
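To make the two-stream pattern concrete, here is a minimal sketch under assumed dimensions (not the HetEmotionNet code): each stream mixes channels through a graph-style adjacency and models within-stream dependency with a GRU, and the two stream outputs are fused for classification. A plain linear layer stands in for the graph transformer/convolution blocks:

```python
# Hypothetical sketch (not the authors' code): a two-stream graph + GRU
# architecture with late fusion of the stream outputs.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, n_ch, d_in, d_hid, n_classes):
        super().__init__()
        self.mix = nn.Linear(d_in, d_hid)        # stand-in for a graph layer
        self.gru_t = nn.GRU(n_ch * d_hid, d_hid, batch_first=True)
        self.gru_s = nn.GRU(n_ch * d_hid, d_hid, batch_first=True)
        self.cls = nn.Linear(2 * d_hid, n_classes)

    def forward(self, adj, x_time, x_spec):      # x_*: (B, steps, n_ch, d_in)
        def stream(x, gru):
            # Graph mixing across channels via the adjacency matrix.
            h = torch.einsum("ij,bsjd->bsid", adj, self.mix(x))
            _, last = gru(h.flatten(2))          # temporal/spectral dependency
            return last[-1]
        return self.cls(torch.cat([stream(x_time, self.gru_t),
                                   stream(x_spec, self.gru_s)], dim=1))
```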
We propose augmenting deep neural networks with an attention mechanism for
the visual object detection task. When perceiving a scene, humans use multiple
fixation points, each attending to scene content at a different location and
scale. However, such a mechanism is missing in current state-of-the-art visual
object detection methods. Inspired by the human vision system, we propose a
novel deep network architecture that imitates this attention mechanism. When
detecting objects in an image, the network adaptively places a sequence of
glimpses of different shapes at different locations in the image. Evidence of
the presence of an object and its location is extracted from these glimpses,
which are then fused to estimate the object class and bounding box
coordinates. Due to the lack of ground-truth annotations for the visual
attention mechanism, we train our network using a reinforcement learning
algorithm with policy gradients. Experimental results on standard object
detection benchmarks show that the proposed network consistently outperforms
baseline networks that do not model the attention mechanism. | [
"cs.CV"
] |
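For illustration (not the authors' training code), a single REINFORCE update of the kind described above, where the episode reward could be the detection quality of the fused prediction and a running mean serves as the baseline:

```python
# Hypothetical sketch (not the authors' code): one policy-gradient step
# for a glimpse-placement policy trained with REINFORCE.
import torch

def reinforce_step(log_probs, reward, baseline, optimizer, beta=0.9):
    """log_probs: (n_glimpses,) log pi(a_t|s_t) for one episode;
    reward: scalar detection-quality reward for the episode."""
    advantage = reward - baseline
    loss = -(log_probs.sum() * advantage)    # policy gradient surrogate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return beta * baseline + (1 - beta) * reward   # updated running baseline
```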