id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2101.06383 | Soumendu Chakraborty | Soumendu Chakraborty, and Anand Singh Jalal | A Novel Local Binary Pattern Based Blind Feature Image Steganography | null | Multimedia Tools and Applications, vol-79, no-27-28, pp.
19561-19574, 2020 | 10.1007/s11042-020-08828-3 | null | cs.MM cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Steganography methods in general terms tend to embed more and more secret
bits in the cover images. Most of these methods are designed to embed secret
information in such a way that the change in the visual quality of the
resulting stego image is not detectable. There exist some methods which
preserve the global structure of the cover after embedding. However, the
embedding capacity of these methods is very low. In this paper, a novel
feature-based blind image steganography technique is proposed, which preserves
the LBP (Local Binary Pattern) feature of the cover with comparable embedding
rates. The local binary pattern is a well-known image descriptor used for image
representation. The proposed scheme computes the local binary pattern to hide
the bits of the secret image in such a way that the local relationships that
exist in the cover are preserved in the resulting stego image. The performance
of the proposed steganography method has been tested on several images of
different types to show its robustness. State-of-the-art LSB-based
steganography methods are compared with the proposed method to show the
effectiveness of feature-based image steganography.
| [
{
"created": "Sat, 16 Jan 2021 06:37:00 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Chakraborty",
"Soumendu",
""
],
[
"Jalal",
"Anand Singh",
""
]
] |
2101.06395 | Shuo Yang | Shuo Yang, Lu Liu, Min Xu | Free Lunch for Few-shot Learning: Distribution Calibration | ICLR 2021 | The 9th International Conference on Learning Representations (ICLR
2021) | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Learning from a limited number of samples is challenging since the learned
model can easily become overfitted based on the biased distribution formed by
only a few training examples. In this paper, we calibrate the distribution of
these few-sample classes by transferring statistics from the classes with
sufficient examples; then an adequate number of examples can be sampled from
the calibrated distribution to expand the inputs to the classifier. We assume
every dimension in the feature representation follows a Gaussian distribution
so that the mean and the variance of the distribution can be borrowed from those of
similar classes whose statistics are better estimated with an adequate number
of samples. Our method can be built on top of off-the-shelf pretrained feature
extractors and classification models without extra parameters. We show that a
simple logistic regression classifier trained using the features sampled from
our calibrated distribution can outperform the state-of-the-art accuracy on two
datasets (~5% improvement on miniImageNet compared to the next best). The
visualization of these generated features demonstrates that our calibrated
distribution is an accurate estimation.
| [
{
"created": "Sat, 16 Jan 2021 07:58:40 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Mar 2021 08:34:18 GMT",
"version": "v2"
},
{
"created": "Sun, 15 Aug 2021 04:44:18 GMT",
"version": "v3"
}
] | 2021-08-17 | [
[
"Yang",
"Shuo",
""
],
[
"Liu",
"Lu",
""
],
[
"Xu",
"Min",
""
]
] |
2101.06560 | James Tu | James Tu, Tsunhsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye
Ren, Raquel Urtasun | Adversarial Attacks On Multi-Agent Communication | null | International Conference On Computer Vision 2021 | null | null | cs.LG cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Growing at a fast pace, modern autonomous systems will soon be deployed at
scale, opening up the possibility for cooperative multi-agent systems. Sharing
information and distributing workloads allow autonomous agents to better
perform tasks and increase computation efficiency. However, shared information
can be modified to execute adversarial attacks on deep learning models that are
widely employed in modern systems. Thus, we aim to study the robustness of such
systems and focus on exploring adversarial attacks in a novel multi-agent
setting where communication is done through sharing learned intermediate
representations of neural networks. We observe that an indistinguishable
adversarial message can severely degrade performance, but becomes weaker as the
number of benign agents increases. Furthermore, we show that black-box transfer
attacks are more difficult in this setting when compared to directly perturbing
the inputs, as it is necessary to align the distribution of learned
representations with domain adaptation. Our work studies robustness at the
neural network level to contribute an additional layer of fault tolerance to
modern security protocols for more secure multi-agent systems.
| [
{
"created": "Sun, 17 Jan 2021 00:35:26 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Oct 2021 15:56:07 GMT",
"version": "v2"
}
] | 2021-10-13 | [
[
"Tu",
"James",
""
],
[
"Wang",
"Tsunhsuan",
""
],
[
"Wang",
"Jingkang",
""
],
[
"Manivasagam",
"Sivabalan",
""
],
[
"Ren",
"Mengye",
""
],
[
"Urtasun",
"Raquel",
""
]
] |
2101.06562 | Ioan Andrei B\^arsan | Anqi Joyce Yang, Can Cui, Ioan Andrei B\^arsan, Raquel Urtasun,
Shenlong Wang | Asynchronous Multi-View SLAM | 25 pages, 23 figures, 13 tables | Published at ICRA 2021 | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing multi-camera SLAM systems assume synchronized shutters for all
cameras, which is often not the case in practice. In this work, we propose a
generalized multi-camera SLAM formulation which accounts for asynchronous
sensor observations. Our framework integrates a continuous-time motion model to
relate information across asynchronous multi-frames during tracking, local
mapping, and loop closing. For evaluation, we collected AMV-Bench, a
challenging new SLAM dataset covering 482 km of driving recorded using our
asynchronous multi-camera robotic platform. AMV-Bench is over an order of
magnitude larger than previous multi-view HD outdoor SLAM datasets, and covers
diverse and challenging motions and environments. Our experiments emphasize the
necessity of asynchronous sensor modeling, and show that the use of multiple
cameras is critical for robust and accurate SLAM in challenging outdoor
scenes. For additional information, please see the project website at:
https://www.cs.toronto.edu/~ajyang/amv-slam
| [
{
"created": "Sun, 17 Jan 2021 00:50:01 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Apr 2021 01:42:54 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jul 2021 00:48:52 GMT",
"version": "v3"
}
] | 2021-07-16 | [
[
"Yang",
"Anqi Joyce",
""
],
[
"Cui",
"Can",
""
],
[
"Bârsan",
"Ioan Andrei",
""
],
[
"Urtasun",
"Raquel",
""
],
[
"Wang",
"Shenlong",
""
]
] |
2101.06634 | Ardhendu Behera | Ardhendu Behera, Zachary Wharton, Morteza Ghahremani, Swagat Kumar,
Nik Bessis | Regional Attention Network (RAN) for Head Pose and Fine-grained Gesture
Recognition | This manuscript is the accepted version of the published paper in
IEEE Transactions on Affective Computing | IEEE Transactions on Affective Computing 2020 | 10.1109/TAFFC.2020.3031841 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Affect is often expressed via non-verbal body language such as
actions/gestures, which are vital indicators for human behaviors. Recent
studies on recognition of fine-grained actions/gestures in monocular images
have mainly focused on modeling spatial configuration of body parts
representing body pose, human-object interactions and variations in local
appearance. The results show that this is a brittle approach since it relies on
accurate body parts/objects detection. In this work, we argue that there exist
local discriminative semantic regions, whose "informativeness" can be evaluated
by the attention mechanism for inferring fine-grained gestures/actions. To this
end, we propose a novel end-to-end \textbf{Regional Attention Network (RAN)},
which is a fully Convolutional Neural Network (CNN) to combine multiple
contextual regions through an attention mechanism, focusing on parts of the images
that are most relevant to a given task. Our regions consist of one or more
consecutive cells and are adapted from the strategies used in computing HOG
(Histogram of Oriented Gradient) descriptor. The model is extensively evaluated
on ten datasets belonging to 3 different scenarios: 1) head pose recognition,
2) driver's state recognition, and 3) human action and facial expression
recognition. The proposed approach outperforms the state-of-the-art by a
considerable margin in different metrics.
| [
{
"created": "Sun, 17 Jan 2021 10:14:28 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Behera",
"Ardhendu",
""
],
[
"Wharton",
"Zachary",
""
],
[
"Ghahremani",
"Morteza",
""
],
[
"Kumar",
"Swagat",
""
],
[
"Bessis",
"Nik",
""
]
] |
2101.06635 | Ardhendu Behera | Ardhendu Behera, Zachary Wharton, Pradeep Hewage, Asish Bera | Context-aware Attentional Pooling (CAP) for Fine-grained Visual
Classification | Extended version of the accepted paper in 35th AAAI Conference on
Artificial Intelligence 2021 | 35th AAAI Conference on Artificial Intelligence 2021 | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep convolutional neural networks (CNNs) have shown a strong ability in
mining discriminative object pose and parts information for image recognition.
For fine-grained recognition, context-aware rich feature representation of
object/scene plays a key role since it exhibits a significant variance in the
same subcategory and subtle variance among different subcategories. Finding the
subtle variance that fully characterizes the object/scene is not
straightforward. To address this, we propose a novel context-aware attentional
pooling (CAP) that effectively captures subtle changes via sub-pixel gradients,
and learns to attend to informative integral regions and their importance in
discriminating different subcategories without requiring the bounding-box
and/or distinguishable part annotations. We also introduce a novel feature
encoding by considering the intrinsic consistency between the informativeness
of the integral regions and their spatial structures to capture the semantic
correlation among them. Our approach is simple yet extremely effective and can
be easily applied on top of a standard classification backbone network. We
evaluate our approach using six state-of-the-art (SotA) backbone networks and
eight benchmark datasets. Our method significantly outperforms the SotA
approaches on six datasets and is very competitive with the remaining two.
| [
{
"created": "Sun, 17 Jan 2021 10:15:02 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Behera",
"Ardhendu",
""
],
[
"Wharton",
"Zachary",
""
],
[
"Hewage",
"Pradeep",
""
],
[
"Bera",
"Asish",
""
]
] |
2101.06636 | Ardhendu Behera | Zachary Wharton, Ardhendu Behera, Yonghuai Liu, Nik Bessis | Coarse Temporal Attention Network (CTA-Net) for Driver's Activity
Recognition | Extended version of the accepted WACV 2021 | Winter Conference on Applications of Computer Vision (WACV 2021) | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | There is significant progress in recognizing traditional human activities
from videos focusing on highly distinctive actions involving discriminative
body movements, body-object and/or human-human interactions. Driver's
activities are different since they are executed by the same subject with
similar body parts movements, resulting in subtle changes. To address this, we
propose a novel framework by exploiting the spatiotemporal attention to model
the subtle changes. Our model is named Coarse Temporal Attention Network
(CTA-Net), in which coarse temporal branches are introduced in a trainable
glimpse network. The goal is to allow the glimpse to capture high-level
temporal relationships, such as 'during', 'before' and 'after' by focusing on a
specific part of a video. These branches also respect the topology of the
temporal dynamics in the video, ensuring that different branches learn
meaningful spatial and temporal changes. The model then uses an innovative
attention mechanism to generate high-level action-specific contextual
information for activity recognition by exploring the hidden states of an LSTM.
The attention mechanism helps in learning to decide the importance of each
hidden state for the recognition task by weighing them when constructing the
representation of the video. Our approach is evaluated on four publicly
accessible datasets and significantly outperforms the state-of-the-art by a
considerable margin with only RGB video as input.
| [
{
"created": "Sun, 17 Jan 2021 10:15:37 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Wharton",
"Zachary",
""
],
[
"Behera",
"Ardhendu",
""
],
[
"Liu",
"Yonghuai",
""
],
[
"Bessis",
"Nik",
""
]
] |
2101.06773 | Adria Ruiz | Adria Ruiz, Antonio Agudo and Francesc Moreno | Generating Attribution Maps with Disentangled Masked Backpropagation | null | International Conference on Computer Vision (ICCV), 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribution map visualization has arisen as one of the most effective
techniques to understand the underlying inference process of Convolutional
Neural Networks. In this task, the goal is to compute a score for each image
pixel related to its contribution to the final network output. In this paper,
we introduce Disentangled Masked Backpropagation (DMBP), a novel gradient-based
method that leverages the piecewise linear nature of ReLU networks to
decompose the model function into different linear mappings. This decomposition
aims to disentangle the positive, negative and nuisance factors from the
attribution maps by learning a set of variables masking the contribution of
each filter during back-propagation. A thorough evaluation over standard
architectures (ResNet50 and VGG16) and benchmark datasets (PASCAL VOC and
ImageNet) demonstrates that DMBP generates more visually interpretable
attribution maps than previous approaches. Additionally, we quantitatively show
that the maps produced by our method are more consistent with the true
contribution of each pixel to the final network output.
| [
{
"created": "Sun, 17 Jan 2021 20:32:14 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Aug 2021 10:47:09 GMT",
"version": "v2"
}
] | 2021-08-31 | [
[
"Ruiz",
"Adria",
""
],
[
"Agudo",
"Antonio",
""
],
[
"Moreno",
"Francesc",
""
]
] |
2101.06829 | Tianxing He | Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl | Joint Energy-based Model Training for Better Calibrated Natural Language
Understanding Models | null | EACL 2021 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we explore joint energy-based model (EBM) training during the
finetuning of pretrained text encoders (e.g., RoBERTa) for natural language
understanding (NLU) tasks. Our experiments show that EBM training can help the
model reach better calibration that is competitive with strong baselines, with
little or no loss in accuracy. We discuss three variants of energy functions
(namely scalar, hidden, and sharp-hidden) that can be defined on top of a text
encoder, and compare them in experiments. Due to the discreteness of text data,
we adopt noise contrastive estimation (NCE) to train the energy-based model. To
make NCE training more effective, we train an auto-regressive noise model with
the masked language model (MLM) objective.
| [
{
"created": "Mon, 18 Jan 2021 01:41:31 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Feb 2021 18:36:31 GMT",
"version": "v2"
}
] | 2021-02-22 | [
[
"He",
"Tianxing",
""
],
[
"McCann",
"Bryan",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Hosseini-Asl",
"Ehsan",
""
]
] |
2101.06883 | Guangyu Huo | Guangyu Huo, Yong Zhang, Junbin Gao, Boyue Wang, Yongli Hu, and Baocai
Yin | CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional
Network for Clustering | null | IEEE Transactions on Knowledge and Data Engineering 2021 | 10.1109/TKDE.2021.3125020 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the powerful learning ability of deep convolutional networks, deep
clustering methods can extract the most discriminative information from
individual data and produce more satisfactory clustering results. However,
existing deep clustering methods usually ignore the relationship between the
data. Fortunately, the graph convolutional network can handle such
relationships, opening up a new research direction for deep clustering. In this
paper, we propose a cross-attention based deep clustering framework, named
Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN),
which contains four main modules: the cross-attention fusion module which
innovatively concatenates the Content Auto-encoder module (CAE) relating to the
individual data and Graph Convolutional Auto-encoder module (GAE) relating to
the relationship between the data in a layer-by-layer manner, and the
self-supervised model that highlights the discriminative information for
clustering tasks. While the cross-attention fusion module fuses two kinds of
heterogeneous representations, the CAE module supplements the content
information for the GAE module, which avoids the over-smoothing problem of GCN.
In the GAE module, two novel loss functions are proposed that reconstruct the
content and relationship between the data, respectively. Finally, the
self-supervised module constrains the distributions of the middle layer
representations of CAE and GAE to be consistent. Experimental results on
different types of datasets prove the superiority and robustness of the
proposed CaEGCN.
| [
{
"created": "Mon, 18 Jan 2021 05:21:59 GMT",
"version": "v1"
}
] | 2022-01-10 | [
[
"Huo",
"Guangyu",
""
],
[
"Zhang",
"Yong",
""
],
[
"Gao",
"Junbin",
""
],
[
"Wang",
"Boyue",
""
],
[
"Hu",
"Yongli",
""
],
[
"Yin",
"Baocai",
""
]
] |
2101.06915 | Praveen Damacharla | Praveen Damacharla, Achuth Rao M. V., Jordan Ringenberg, and Ahmad Y
Javaid | TLU-Net: A Deep Learning Approach for Automatic Steel Surface Defect
Detection | null | International Conference on Applied Artificial Intelligence
(ICAPAI 2021), Halden, Norway, May 19-21, 2021 | null | null | cs.CV cs.AI cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Visual steel surface defect detection is an essential step in steel sheet
manufacturing. Several machine learning-based automated visual inspection (AVI)
methods have been studied in recent years. However, most steel manufacturing
industries still use manual visual inspection due to training time and
inaccuracies involved with AVI methods. Automatic steel defect detection
methods could enable less expensive and faster quality control and
feedback. But preparing the annotated training data for segmentation and
classification could be a costly process. In this work, we propose to use the
Transfer Learning-based U-Net (TLU-Net) framework for steel surface defect
detection. We use a U-Net architecture as the base and explore two kinds of
encoders: ResNet and DenseNet. We compare these nets' performance using random
initialization and the pre-trained networks trained using the ImageNet data
set. The experiments are performed using Severstal data. The results
demonstrate that transfer learning performs 5% (absolute) better than
random initialization in defect classification. We found that
transfer learning performs 26% (relative) better than random
initialization in defect segmentation. We also found that the gain of transfer
learning increases as the training data decreases, and the convergence rate
with transfer learning is better than that of the random initialization.
| [
{
"created": "Mon, 18 Jan 2021 07:53:20 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Damacharla",
"Praveen",
""
],
[
"V.",
"Achuth Rao M.",
""
],
[
"Ringenberg",
"Jordan",
""
],
[
"Javaid",
"Ahmad Y",
""
]
] |
2101.07005 | Marta Boche\'nska | Piotr E. Srokosz, Marcin Bujko, Marta Boche\'nska and Rafa{\l}
Ossowski | Optical Flow Method for Measuring Deformation of Soil Specimen Subjected
to Torsional Shearing | To appear in Measurement | Measurement, Vol. 174 (2021) | 10.1016/j.measurement.2021.109064 | null | cs.CE cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this study optical flow method was used for soil small deformation
measurement in laboratory tests. The main objective was to observe how the
deformation is distributed along the whole height of a cylindrical soil specimen
subjected to torsional shearing (TS test). The experiments were conducted on
dry non-cohesive soil specimens under two values of isotropic pressure.
Specimens were loaded with low-amplitude cyclic torque to analyze the
deformation within the small strain range (0.001-0.01%). The optical flow
method variant by Ce Liu (2009) was used for motion estimation from a series of
images. This algorithm uses the scale-invariant feature transform (SIFT) for
image feature extraction and a coarse-to-fine matching scheme for faster
calculations. The results were validated with Particle Image Velocimetry (PIV). The results
show that the displacement distribution deviates from commonly assumed
linearity. Moreover, analysis of the observed deformation mechanisms suggests that
the shear modulus $G$ commonly determined through TS tests can be considerably
overestimated.
| [
{
"created": "Mon, 18 Jan 2021 11:12:46 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jan 2021 08:46:18 GMT",
"version": "v2"
}
] | 2021-02-09 | [
[
"Srokosz",
"Piotr E.",
""
],
[
"Bujko",
"Marcin",
""
],
[
"Bocheńska",
"Marta",
""
],
[
"Ossowski",
"Rafał",
""
]
] |
2101.07067 | Salma Chaieb | Salma Chaieb and Brahim Hnich and Ali Ben Mrad | Data Obsolescence Detection in the Light of Newly Acquired Valid
Observations | null | Applied Intelligence, 1-23 (2022) | 10.1007/s10489-022-03212-0 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The information describing the conditions of a system or a person is
constantly evolving and may become obsolete and contradict other information. A
database, therefore, must be consistently updated upon the acquisition of new
valid observations that contradict obsolete ones contained in the database. In
this paper, we propose a novel approach for dealing with the information
obsolescence problem. Our approach aims to detect, in real-time, contradictions
between observations and then identify the obsolete ones, given a
representation model. Since we work within an uncertain environment
characterized by the lack of information, we choose to use a Bayesian network
as our representation model and propose a new approximate concept,
$\epsilon$-Contradiction. The new concept is parameterised by a confidence
level of having a contradiction in a set of observations. We propose a
polynomial-time algorithm for detecting obsolete information. We show that the
resulting obsolete information is better represented by an AND-OR tree than a
simple set of observations. Finally, we demonstrate the effectiveness of our
approach on a real elderly fall-prevention database and showcase how this tree
can be used to give reliable recommendations to doctors. Our experiments
consistently yield very good results.
| [
{
"created": "Mon, 18 Jan 2021 13:24:06 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jul 2021 11:08:27 GMT",
"version": "v2"
},
{
"created": "Wed, 4 May 2022 13:12:07 GMT",
"version": "v3"
}
] | 2022-05-05 | [
[
"Chaieb",
"Salma",
""
],
[
"Hnich",
"Brahim",
""
],
[
"Mrad",
"Ali Ben",
""
]
] |
2101.07202 | Christoph Weinhuber | Pranav Ashok, Mathias Jackermeier, Jan K\v{r}et\'insk\'y, Christoph
Weinhuber, Maximilian Weininger, Mayank Yadav | dtControl 2.0: Explainable Strategy Representation via Decision Tree
Learning Steered by Experts | null | TACAS (2) (pp. 326-345). Springer. 2021 | 10.1007/978-3-030-72013-1_17 | null | cs.AI cs.FL cs.LG cs.LO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Recent advances have shown how decision trees are apt data structures for
concisely representing strategies (or controllers) satisfying various
objectives. Moreover, they also make the strategy more explainable. The recent
tool dtControl had provided pipelines with tools supporting strategy synthesis
for hybrid systems, such as SCOTS and Uppaal Stratego. We present dtControl
2.0, a new version with several fundamentally novel features. Most importantly,
the user can now provide domain knowledge to be exploited in the decision tree
learning process and can also interactively steer the process based on the
dynamically provided information. To this end, we also provide a graphical user
interface. It allows for inspection and re-computation of parts of the result,
suggesting as well as receiving advice on predicates, and visual simulation of
the decision-making process. Besides, we interface model checkers of
probabilistic systems, namely Storm and PRISM, and provide dedicated support for
categorical enumeration-type state variables. Consequently, the controllers are
more explainable and smaller.
| [
{
"created": "Fri, 15 Jan 2021 11:22:49 GMT",
"version": "v1"
},
{
"created": "Tue, 4 May 2021 10:10:43 GMT",
"version": "v2"
}
] | 2021-05-05 | [
[
"Ashok",
"Pranav",
""
],
[
"Jackermeier",
"Mathias",
""
],
[
"Křetínský",
"Jan",
""
],
[
"Weinhuber",
"Christoph",
""
],
[
"Weininger",
"Maximilian",
""
],
[
"Yadav",
"Mayank",
""
]
] |
2101.07241 | Haoyu Xiong | Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth
Sinha, Animesh Garg | Learning by Watching: Physical Imitation of Manipulation Skills from
Human Videos | Project Website: https://www.pair.toronto.edu/lbw-kp/ | IROS 2021 | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Learning from visual data opens the potential to accrue a large range of
manipulation behaviors by leveraging human demonstrations without specifying
each of them mathematically, but rather through natural task specification. In
this paper, we present Learning by Watching (LbW), an algorithmic framework for
policy learning through imitation from a single video specifying the task. The
key insights of our method are two-fold. First, since the human arms may not
have the same morphology as robot arms, our framework learns unsupervised human
to robot translation to overcome the morphology mismatch issue. Second, to
capture the details in salient regions that are crucial for learning state
representations, our model performs unsupervised keypoint detection on the
translated robot videos. The detected keypoints form a structured
representation that contains semantically meaningful information and can be
used directly for computing reward and policy learning. We evaluate the
effectiveness of our LbW framework on five robot manipulation tasks, including
reaching, pushing, sliding, coffee making, and drawer closing. Extensive
experimental evaluations demonstrate that our method performs favorably against
the state-of-the-art approaches.
| [
{
"created": "Mon, 18 Jan 2021 18:50:32 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Nov 2021 15:05:21 GMT",
"version": "v2"
}
] | 2021-11-16 | [
[
"Xiong",
"Haoyu",
""
],
[
"Li",
"Quanzhou",
""
],
[
"Chen",
"Yun-Chun",
""
],
[
"Bharadhwaj",
"Homanga",
""
],
[
"Sinha",
"Samarth",
""
],
[
"Garg",
"Animesh",
""
]
] |
2101.07337 | Zijian Zhang | Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, Avishek Anand | Dissonance Between Human and Machine Understanding | 23 pages, 5 figures | Proceedings of the ACM on Human-Computer Interaction, 2019,
3(CSCW): 1-23 | 10.1145/3359158 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Complex machine learning models are deployed in several critical domains
including healthcare and autonomous vehicles nowadays, albeit as functional
black boxes. Consequently, there has been a recent surge in interpreting
decisions of such complex models in order to explain their actions to humans.
Models that correspond to human interpretation of a task are more desirable in
certain contexts and can help attribute liability, build trust, expose biases
and in turn build better models. It is, therefore, crucial to understand how
and which models conform to human understanding of tasks. In this paper, we
present a large-scale crowdsourcing study that reveals and quantifies the
dissonance between human and machine understanding, through the lens of an
image classification task. In particular, we seek to answer the following
questions: Which (well-performing) complex ML models are closer to humans in
their use of features to make accurate predictions? How does task difficulty
affect the feature selection capability of machines in comparison to humans?
Are humans consistently better at selecting features that make image
recognition more accurate? Our findings have important implications on
human-machine collaboration, considering that a long term goal in the field of
artificial intelligence is to make machines capable of learning and reasoning
like humans.
| [
{
"created": "Mon, 18 Jan 2021 21:45:35 GMT",
"version": "v1"
}
] | 2021-01-20 | [
[
"Zhang",
"Zijian",
""
],
[
"Singh",
"Jaspreet",
""
],
[
"Gadiraju",
"Ujwal",
""
],
[
"Anand",
"Avishek",
""
]
] |
2101.07376 | Khalid Alsamadony | Khalid L. Alsamadony, Ertugrul U. Yildirim, Guenther Glatz, Umair bin
Waheed, Sherif M. Hanafy | Deep-Learning Driven Noise Reduction for Reduced Flux Computed
Tomography | null | Sensors 21, no. 5: 1921 (2021) | 10.3390/s21051921 | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have received considerable attention in clinical
imaging, particularly with respect to the reduction of radiation risk. Lowering
the radiation dose by reducing the photon flux inevitably results in the
degradation of the scanned image quality. Thus, researchers have sought to
exploit deep convolutional neural networks (DCNNs) to map low-quality, low-dose
images to higher-dose, higher-quality images thereby minimizing the associated
radiation hazard. Conversely, computed tomography (CT) measurements of
geomaterials are not limited by the radiation dose. In contrast to the human
body, however, geomaterials may be composed of high-density constituents
causing increased attenuation of the X-Rays. Consequently, higher dosage images
are required to obtain an acceptable scan quality. The problem of prolonged
acquisition times is particularly severe for micro-CT based scanning
technologies. Depending on the sample size and exposure time settings, a single
scan may require several hours to complete. This is of particular concern if
phenomena with an exponential temperature dependency are to be elucidated. A
process may happen too fast to be adequately captured by CT scanning. To
address the aforementioned issues, we apply DCNNs to improve the quality of
rock CT images and reduce exposure times by more than 60\%, simultaneously. We
highlight current results based on micro-CT derived datasets and apply transfer
learning to improve DCNN results without increasing training time. The approach
is applicable to any computed tomography technology. Furthermore, we contrast
the performance of the DCNN trained by minimizing different loss functions such
as mean squared error and structural similarity index.
| [
{
"created": "Mon, 18 Jan 2021 23:31:37 GMT",
"version": "v1"
}
] | 2021-09-14 | [
[
"Alsamadony",
"Khalid L.",
""
],
[
"Yildirim",
"Ertugrul U.",
""
],
[
"Glatz",
"Guenther",
""
],
[
"Waheed",
"Umair bin",
""
],
[
"Hanafy",
"Sherif M.",
""
]
] |
2101.07385 | Maximilian Amsler | Sebastian Ament, Maximilian Amsler, Duncan R. Sutherland, Ming-Chiang
Chang, Dan Guevarra, Aine B. Connolly, John M. Gregoire, Michael O. Thompson,
Carla P. Gomes, R. Bruce van Dover | Autonomous synthesis of metastable materials | null | Autonomous materials synthesis via hierarchical active learning of
nonequilibrium phase diagrams, Science Advances, Vol 7, Issue 5, 2021 | 10.1126/sciadv.abg4930 | null | cond-mat.mtrl-sci cs.AI cs.LG cs.MA physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous experimentation enabled by artificial intelligence (AI) offers a
new paradigm for accelerating scientific discovery. Non-equilibrium materials
synthesis is emblematic of complex, resource-intensive experimentation whose
acceleration would be a watershed for materials discovery and development. The
mapping of non-equilibrium synthesis phase diagrams has recently been
accelerated via high throughput experimentation but still limits materials
research because the parameter space is too vast to be exhaustively explored.
We demonstrate accelerated synthesis and exploration of metastable materials
through hierarchical autonomous experimentation governed by the Scientific
Autonomous Reasoning Agent (SARA). SARA integrates robotic materials synthesis
and characterization along with a hierarchy of AI methods that efficiently
reveal the structure of processing phase diagrams. SARA designs lateral
gradient laser spike annealing (lg-LSA) experiments for parallel materials
synthesis and employs optical spectroscopy to rapidly identify phase
transitions. Efficient exploration of the multi-dimensional parameter space is
achieved with nested active learning (AL) cycles built upon advanced machine
learning models that incorporate the underlying physics of the experiments as
well as end-to-end uncertainty quantification. With this, and the coordination
of AL at multiple scales, SARA embodies AI harnessing of complex scientific
tasks. We demonstrate its performance by autonomously mapping synthesis phase
boundaries for the Bi$_2$O$_3$ system, leading to orders-of-magnitude
acceleration in establishment of a synthesis phase diagram that includes
conditions for kinetically stabilizing $\delta$-Bi$_2$O$_3$ at room
temperature, a critical development for electrochemical technologies such as
solid oxide fuel cells.
| [
{
"created": "Tue, 19 Jan 2021 00:29:26 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Dec 2021 15:16:08 GMT",
"version": "v2"
}
] | 2021-12-21 | [
[
"Ament",
"Sebastian",
""
],
[
"Amsler",
"Maximilian",
""
],
[
"Sutherland",
"Duncan R.",
""
],
[
"Chang",
"Ming-Chiang",
""
],
[
"Guevarra",
"Dan",
""
],
[
"Connolly",
"Aine B.",
""
],
[
"Gregoire",
"John M.",
""
],
[
"Thompson",
"Michael O.",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"van Dover",
"R. Bruce",
""
]
] |
2101.07429 | Fei Gao | Hanliang Jiang, Fuhao Shen, Fei Gao, Weidong Han | Learning Efficient, Explainable and Discriminative Representations for
Pulmonary Nodules Classification | null | Pattern Recognition, 2021 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic pulmonary nodules classification is significant for early diagnosis
of lung cancers. Recently, deep learning techniques have enabled remarkable
progress in this field. However, these deep models are typically of high
computational complexity and work in a black-box manner. To combat these
challenges, in this work, we aim to build an efficient and (partially)
explainable classification model. Specifically, we use \emph{neural architecture
search} (NAS) to automatically search 3D network architectures with excellent
accuracy/speed trade-off. Besides, we use the convolutional block attention
module (CBAM) in the networks, which helps us understand the reasoning process.
During training, we use A-Softmax loss to learn angularly discriminative
representations. In the inference stage, we employ an ensemble of diverse
neural networks to improve the prediction accuracy and robustness. We conduct
extensive experiments on the LIDC-IDRI database. Compared with previous
state-of-the-art, our model shows highly comparable performance while using fewer
than 1/40 of the parameters. Besides, an empirical study shows that the reasoning process
of learned networks is in conformity with physicians' diagnosis. Related code
and results have been released at: https://github.com/fei-hdu/NAS-Lung.
| [
{
"created": "Tue, 19 Jan 2021 02:53:44 GMT",
"version": "v1"
}
] | 2021-01-20 | [
[
"Jiang",
"Hanliang",
""
],
[
"Shen",
"Fuhao",
""
],
[
"Gao",
"Fei",
""
],
[
"Han",
"Weidong",
""
]
] |
2101.07458 | Wei Lian | Wei Lian and Wangmeng Zuo | Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets | null | Neurocomputing, July, 2023 | 10.1016/j.neucom.2023.126482 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In many applications, we need algorithms which can align partially
overlapping point sets and are invariant to the corresponding transformations.
In this work, a method possessing such properties is realized by minimizing the
objective of the robust point matching (RPM) algorithm. We first show that the
RPM objective is a cubic polynomial. We then utilize the convex envelopes of
trilinear and bilinear monomials to derive its lower bound function. The
resulting lower bound problem has the merit that it can be efficiently solved
via linear assignment and low dimensional convex quadratic programming. We next
develop a branch-and-bound (BnB) algorithm which only branches over the
transformation variables and runs efficiently. Experimental results
demonstrated better robustness of the proposed method against non-rigid
deformation, positional noise and outliers, in the case where outliers are not mixed
with inliers when compared with the state-of-the-art approaches. They also
showed that it has competitive efficiency and scales well with problem size.
| [
{
"created": "Tue, 19 Jan 2021 04:24:23 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jan 2021 07:24:46 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jul 2023 06:46:37 GMT",
"version": "v3"
}
] | 2023-07-06 | [
[
"Lian",
"Wei",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
2101.07523 | Nicolas Becu | Ahmed Laatabi, Nicolas Becu (LIENSs), Nicolas Marilleau (UMMISCO),
C\'ecilia Pignon-Mussaud (LIENSs), Marion Amalric (CITERES), X. Bertin
(LIENSs), Brice Anselme (PRODIG), Elise Beck (PACTE) | Mapping and
Describing Geospatial Data to Generalize Complex Models: The Case of
LittoSIM-GEN Models | null | International Journal of Geospatial and Environmental Research,
KAGES, 2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For some scientific questions, empirical data are essential to develop
reliable simulation models. These data usually come from different sources with
diverse and heterogeneous formats. The design of complex data-driven models is
often shaped by the structure of the data available in research projects.
Hence, applying such models to other case studies requires either getting
similar data or transforming new data to fit the model inputs. This is the case
for agent-based models (ABMs) that use advanced data structures such as
Geographic Information Systems data. We faced this problem in the LittoSIM-GEN
project when generalizing our participatory flooding model (LittoSIM) to new
territories. From this experience, we provide a mapping approach to structure,
describe, and automatize the integration of geospatial data into ABMs.
| [
{
"created": "Tue, 19 Jan 2021 09:16:05 GMT",
"version": "v1"
}
] | 2021-01-20 | [
[
"Laatabi",
"Ahmed",
"",
"LIENSs"
],
[
"Becu",
"Nicolas",
"",
"LIENSs"
],
[
"Marilleau",
"Nicolas",
"",
"UMMISCO"
],
[
"Pignon-Mussaud",
"Cécilia",
"",
"LIENSs"
],
[
"Amalric",
"Marion",
"",
"CITERES"
],
[
"Bertin",
"X.",
"",
"LIENSs"
],
[
"Anselme",
"Brice",
"",
"PRODIG"
],
[
"Beck",
"Elise",
"",
"PACTE"
]
] |
2101.07528 | Edouard Oyallon | Louis Thiry (DI-ENS), Michael Arbel (UCL), Eugene Belilovsky (MILA),
Edouard Oyallon (MLIA) | The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels
Methods | null | International Conference on Learning Representations (ICLR 2021),
2021, Vienna (online), Austria | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent line of work showed that various forms of convolutional kernel
methods can be competitive with standard supervised deep convolutional networks
on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while
being more amenable to theoretical analysis. In this work, we highlight the
importance of a data-dependent feature extraction step that is key to
obtaining good performance in convolutional kernel methods. This step typically
corresponds to a whitened dictionary of patches, and gives rise to
data-driven convolutional kernel methods. We extensively study its effect,
demonstrating it is the key ingredient for high performance of these methods.
Specifically, we show that one of the simplest instances of such kernel
methods, based on a single layer of image patches followed by a linear
classifier, already obtains classification accuracies on CIFAR-10 in the
same range as previous more sophisticated convolutional kernel methods. We
scale this method to the challenging ImageNet dataset, showing such a simple
approach can exceed all existing non-learned representation methods. This is a
new baseline for object recognition without representation learning methods,
which initiates the investigation of convolutional kernel models on ImageNet. We
conduct experiments to analyze the dictionary that we used; our ablations
show that it exhibits low-dimensional properties.
| [
{
"created": "Tue, 19 Jan 2021 09:30:58 GMT",
"version": "v1"
}
] | 2021-01-20 | [
[
"Thiry",
"Louis",
"",
"DI-ENS"
],
[
"Arbel",
"Michael",
"",
"UCL"
],
[
"Belilovsky",
"Eugene",
"",
"MILA"
],
[
"Oyallon",
"Edouard",
"",
"MLIA"
]
] |
2101.07555 | Ru Li | Ru Li, Shuaicheng Liu, Guangfu Wang, Guanghui Liu and Bing Zeng | JigsawGAN: Auxiliary Learning for Solving Jigsaw Puzzles with Generative
Adversarial Networks | Accepted by IEEE Transactions on Image Processing (TIP) | IEEE Transactions on Image Processing, 2021 | 10.1109/TIP.2021.3120052 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper proposes a solution based on Generative Adversarial Network (GAN)
for solving jigsaw puzzles. The problem assumes that an image is divided into
equal square pieces, and asks to recover the image according to information
provided by the pieces. Conventional jigsaw puzzle solvers often determine the
relationships based on the boundaries of pieces, which ignore the important
semantic information. In this paper, we propose JigsawGAN, a GAN-based
auxiliary learning method for solving jigsaw puzzles with unpaired images (with
no prior knowledge of the initial images). We design a multi-task pipeline that
includes, (1) a classification branch to classify jigsaw permutations, and (2)
a GAN branch to recover features to images in correct orders. The
classification branch is constrained by the pseudo-labels generated according
to the shuffled pieces. The GAN branch concentrates on the image semantic
information, where the generator produces the natural images to fool the
discriminator, while the discriminator distinguishes whether a given image
belongs to the synthesized or the real target domain. These two branches are
connected by a flow-based warp module that is applied to warp features to
correct the order according to the classification results. The proposed method
can solve jigsaw puzzles more efficiently by utilizing both semantic
information and boundary information simultaneously. Qualitative and
quantitative comparisons against several representative jigsaw puzzle solvers
demonstrate the superiority of our method.
| [
{
"created": "Tue, 19 Jan 2021 10:40:38 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Dec 2021 08:21:12 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Jul 2022 08:10:38 GMT",
"version": "v3"
}
] | 2022-07-18 | [
[
"Li",
"Ru",
""
],
[
"Liu",
"Shuaicheng",
""
],
[
"Wang",
"Guangfu",
""
],
[
"Liu",
"Guanghui",
""
],
[
"Zeng",
"Bing",
""
]
] |
2101.07570 | Thomas K.F. Chiu | Thomas K.F. Chiu, Helen Meng, Ching-Sing Chai, Irwin King, Savio Wong
and Yeung Yam | Creation and Evaluation of a Pre-tertiary Artificial Intelligence (AI)
Curriculum | 8 pages, 5 figures | IEEE Transactions on Education 65, no. 1 (2021): 30-39 | 10.1109/TE.2021.3085878 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Contributions: The Chinese University of Hong Kong (CUHK)-Jockey Club AI for
the Future Project (AI4Future) co-created an AI curriculum for pre-tertiary
education and evaluated its efficacy. While AI is conventionally taught in
tertiary level education, our co-creation process successfully developed the
curriculum that has been used in secondary school teaching in Hong Kong and
received positive feedback. Background: AI4Future is a cross-sector project
that engages five major partners - CUHK Faculty of Engineering and Faculty of
Education, Hong Kong secondary schools, the government and the AI industry. A
team of 14 professors with expertise in engineering and education collaborated
with 17 principals and teachers from 6 secondary schools to co-create the
curriculum. This team formation bridges the gap between researchers in
engineering and education, together with practitioners in education context.
Research Questions: What are the main features of the curriculum content
developed through the co-creation process? Would the curriculum significantly
improve the students' perceived competence in, as well as attitude and
motivation towards, AI? What are the teachers' perceptions of the co-creation
process that aims to accommodate and foster teacher autonomy? Methodology: This
study adopted a mix of quantitative and qualitative methods and involved 335
student participants. Findings: 1) two main features of learning resources, 2)
the students perceived greater competence and developed a more positive attitude
towards learning AI, and 3) the co-creation process generated a variety of resources
which enhanced the teachers' knowledge of AI, as well as fostered teachers'
autonomy in bringing the subject matter into their classrooms.
| [
{
"created": "Tue, 19 Jan 2021 11:26:19 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Chiu",
"Thomas K. F.",
""
],
[
"Meng",
"Helen",
""
],
[
"Chai",
"Ching-Sing",
""
],
[
"King",
"Irwin",
""
],
[
"Wong",
"Savio",
""
],
[
"Yam",
"Yeung",
""
]
] |
2101.07621 | Tomomi Matsui | Akihiro Kawana and Tomomi Matsui | Trading Transforms of Non-weighted Simple Games and Integer Weights of
Weighted Simple Games | 23 pages | Theory and Decision (2021) | 10.1007/s11238-021-09831-2 | null | cs.GT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates simple games. A fundamental research question in this
field is to determine necessary and sufficient conditions for a simple game to
be a weighted majority game. Taylor and Zwicker (1992) showed that a simple
game is non-weighted if and only if there exists a trading transform of finite
size. They also provided an upper bound on the size of such a trading
transform, if it exists. Gvozdeva and Slinko (2011) improved that upper bound;
their proof employed a property of linear inequalities demonstrated by Muroga
(1971). In this study, we provide a new proof of the existence of a trading
transform when a given simple game is non-weighted. Our proof employs Farkas'
lemma (1894), and yields an improved upper bound on the size of a trading
transform.
We also discuss an integer-weight representation of a weighted simple game,
improving the bounds obtained by Muroga (1971). We show that our bound on the
quota is tight when the number of players is less than or equal to five, based
on the computational results obtained by Kurz (2012).
Furthermore, we discuss the problem of finding an integer-weight
representation under the assumption that we have minimal winning coalitions and
maximal losing coalitions. In particular, we show the performance of a rounding
method.
Lastly, we address roughly weighted simple games. Gvozdeva and Slinko (2011)
showed that a given simple game is not roughly weighted if and only if there
exists a potent certificate of non-weightedness. We give an upper bound on the
length of a potent certificate of non-weightedness. We also discuss an
integer-weight representation of a roughly weighted simple game.
| [
{
"created": "Tue, 19 Jan 2021 13:54:41 GMT",
"version": "v1"
},
{
"created": "Sat, 29 May 2021 10:21:53 GMT",
"version": "v2"
}
] | 2022-01-13 | [
[
"Kawana",
"Akihiro",
""
],
[
"Matsui",
"Tomomi",
""
]
] |
2101.07685 | Mattia Setzu | Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino
Pedreschi, Fosca Giannotti | GLocalX -- From Local to Global Explanations of Black Box AI Models | 27 pages, 2 figures, submitted to "Special Issue on: Explainable AI
(XAI) for Web-based Information Processing" | Journal of Artificial Intelligence, Volume 294, May 2021, 103457 | 10.1016/j.artint.2021.103457 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Artificial Intelligence (AI) has come to prominence as one of the major
components of our society, with applications in most aspects of our lives. In
this field, complex and highly nonlinear machine learning models such as
ensemble models, deep neural networks, and Support Vector Machines have
consistently shown remarkable accuracy in solving complex tasks. Although
accurate, AI models often are "black boxes" which we are not able to
understand. Relying on these models has a multifaceted impact and raises
significant concerns about their transparency. Applications in sensitive and
critical domains are a strong motivational factor in trying to understand the
behavior of black boxes. We propose to address this issue by providing an
interpretable layer on top of black box models by aggregating "local"
explanations. We present GLocalX, a "local-first" model agnostic explanation
method. Starting from local explanations expressed in form of local decision
rules, GLocalX iteratively generalizes them into global explanations by
hierarchically aggregating them. Our goal is to learn accurate yet simple
interpretable models to emulate the given black box, and, if possible, replace
it entirely. We validate GLocalX in a set of experiments in standard and
constrained settings with limited or no access to either data or local
explanations. Experiments show that GLocalX is able to accurately emulate
several models with simple and small models, reaching state-of-the-art
performance against natively global solutions. Our findings show how it is
often possible to achieve a high level of both accuracy and comprehensibility
of classification models, even in complex domains with high-dimensional data,
without necessarily trading one property for the other. This is a key
requirement for a trustworthy AI, necessary for adoption in high-stakes
decision making applications.
| [
{
"created": "Tue, 19 Jan 2021 15:26:09 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jan 2021 11:26:16 GMT",
"version": "v2"
}
] | 2021-01-29 | [
[
"Setzu",
"Mattia",
""
],
[
"Guidotti",
"Riccardo",
""
],
[
"Monreale",
"Anna",
""
],
[
"Turini",
"Franco",
""
],
[
"Pedreschi",
"Dino",
""
],
[
"Giannotti",
"Fosca",
""
]
] |
2101.07755 | Vladislav Golyanik | Tolga Birdal, Vladislav Golyanik, Christian Theobalt, Leonidas Guibas | Quantum Permutation Synchronization | 19 pages, 15 figures, 4 tables; web pages:
https://vcai.mpi-inf.mpg.de/projects/QUANTUMSYNC/,
https://quantumcomputervision.github.io/ | Computer Vision and Pattern Recognition (CVPR) 2021 | null | null | quant-ph cs.CV cs.ET cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present QuantumSync, the first quantum algorithm for solving a
synchronization problem in the context of computer vision. In particular, we
focus on permutation synchronization which involves solving a non-convex
optimization problem in discrete variables. We start by formulating
synchronization into a quadratic unconstrained binary optimization problem
(QUBO). While such formulation respects the binary nature of the problem,
ensuring that the result is a set of permutations requires extra care. Hence,
we: (i) show how to insert permutation constraints into a QUBO problem and (ii)
solve the constrained QUBO problem on the current generation of the adiabatic
quantum computers D-Wave. Thanks to the quantum annealing, we guarantee global
optimality with high probability while sampling the energy landscape to yield
confidence estimates. Our proof-of-concept realization on the adiabatic D-Wave
computer demonstrates that quantum machines offer a promising way to solve the
prevalent yet difficult synchronization problems.
| [
{
"created": "Tue, 19 Jan 2021 17:51:02 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Nov 2021 14:57:46 GMT",
"version": "v2"
}
] | 2021-11-29 | [
[
"Birdal",
"Tolga",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
2101.07757 | Milad Sikaroudi | Milad Sikaroudi, Benyamin Ghojogh, Fakhri Karray, Mark Crowley, H.R.
Tizhoosh | Magnification Generalization for Histopathology Image Embedding | Accepted for presentation at International Symposium on Biomedical
Imaging (ISBI'2021) | IEEE 18th International Symposium on Biomedical Imaging (ISBI),
pp.1864-1868, 2021 | 10.1109/ISBI48211.2021.9433978 | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Histopathology image embedding is an active research area in computer vision.
Most of the embedding models exclusively concentrate on a specific
magnification level. However, a useful task in histopathology embedding is to
train an embedding space regardless of the magnification level. Two main
approaches for tackling this goal are domain adaptation and domain
generalization, where the target magnification levels may or may not be
introduced to the model in training, respectively. Although magnification
adaptation is a well-studied topic in the literature, this paper, to the best
of our knowledge, is the first work on magnification generalization for
histopathology image embedding. We use an episodic trainable domain
generalization technique for magnification generalization, namely Model
Agnostic Learning of Semantic Features (MASF), which works based on the Model
Agnostic Meta-Learning (MAML) concept. Our experimental results on a breast
cancer histopathology dataset with four different magnification levels show the
proposed method's effectiveness for magnification generalization.
| [
{
"created": "Mon, 18 Jan 2021 02:46:26 GMT",
"version": "v1"
}
] | 2021-11-05 | [
[
"Sikaroudi",
"Milad",
""
],
[
"Ghojogh",
"Benyamin",
""
],
[
"Karray",
"Fakhri",
""
],
[
"Crowley",
"Mark",
""
],
[
"Tizhoosh",
"H. R.",
""
]
] |
2101.07855 | Furkan Gursoy | Mahsun Alt{\i}n, Furkan G\"ursoy, Lina Xu | Machine-Generated Hierarchical Structure of Human Activities to Reveal
How Machines Think | null | IEEE Access, vol. 9, pp. 18307-18317, 2021 | 10.1109/ACCESS.2021.3053084 | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep-learning based computer vision models have proved themselves to be
ground-breaking approaches to human activity recognition (HAR). However, most
existing works are dedicated to improve the prediction accuracy through either
creating new model architectures, increasing model complexity, or refining
model parameters by training on larger datasets. Here, we propose an
alternative idea, differing from existing work, to increase model accuracy and
also to shape model predictions to align with human understandings through
automatically creating higher-level summarizing labels for similar groups of
human activities. First, we argue the importance and feasibility of
constructing a hierarchical labeling system for human activity recognition.
Then, we utilize the predictions of a black box HAR model to identify
similarities between different activities. Finally, we tailor hierarchical
clustering methods to automatically generate hierarchical trees of activities
and conduct experiments. In this system, the activity labels on the same level
will have a designed magnitude of accuracy and reflect a specific amount of
activity details. This strategy enables a trade-off between the extent of the
details in the recognized activity and the user privacy by masking some
sensitive predictions; and also provides possibilities for the use of formerly
prohibited invasive models in privacy-concerned scenarios. Since the hierarchy
is generated from the machine's perspective, the predictions at the upper
levels provide better accuracy, which is especially useful when there are too
detailed labels in the training set that are rather trivial to the final
prediction goal. Moreover, the analysis of the structure of these trees can
reveal the biases in the prediction model and guide future data collection
strategies.
| [
{
"created": "Tue, 19 Jan 2021 20:40:22 GMT",
"version": "v1"
}
] | 2021-02-04 | [
[
"Altın",
"Mahsun",
""
],
[
"Gürsoy",
"Furkan",
""
],
[
"Xu",
"Lina",
""
]
] |
2101.07973 | Varad Bhatnagar | Varad Bhatnagar, Prince Kumar, Sairam Moghili and Pushpak
Bhattacharyya | Divide and Conquer: An Ensemble Approach for Hostile Post Detection in
Hindi | null | CONSTRAINT @AAAI 2021 Combating Online Hostile Posts in Regional
Languages during Emergency Situation pp244-255 | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently the NLP community has started showing interest towards the
challenging task of Hostile Post Detection. This paper presents our system for
the Shared Task at Constraint2021 on "Hostile Post Detection in Hindi". The data
for this shared task, collected from Twitter and Facebook, is provided in the
Hindi Devanagari script. It is a multi-label multi-class classification
problem where each data instance is annotated into one or more of the five
classes: fake, hate, offensive, defamation, and non-hostile. We propose a
two-level architecture made up of BERT-based classifiers and statistical
classifiers to solve this problem. Our team 'Albatross' scored a coarse-grained
hostility F1 score of 0.9709 on the Hostile Post Detection in Hindi subtask
and secured 2nd rank out of 45 teams. Our submissions are ranked 2nd
and 3rd out of a total of 156 submissions, with coarse-grained hostility F1
scores of 0.9709 and 0.9703 respectively. Our fine-grained scores are also very
encouraging and can be improved with further finetuning. The code is publicly
available.
| [
{
"created": "Wed, 20 Jan 2021 05:38:07 GMT",
"version": "v1"
}
] | 2021-05-06 | [
[
"Bhatnagar",
"Varad",
""
],
[
"Kumar",
"Prince",
""
],
[
"Moghili",
"Sairam",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
2101.08085 | Xiatian Zhu | Xiatian Zhu and Antoine Toisoul and Juan-Manuel Perez-Rua and Li Zhang
and Brais Martinez and Tao Xiang | Few-shot Action Recognition with Prototype-centered Attentive Learning | 10 pages, 4 figures | BMVC 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Few-shot action recognition aims to recognize action classes with few
training samples. Most existing methods adopt a meta-learning approach with
episodic training. In each episode, the few samples in a meta-training task are
split into support and query sets. The former is used to build a classifier,
which is then evaluated on the latter using a query-centered loss for model
updating. There are, however, two major limitations: a lack of data efficiency
due to the query-centered-only loss design, and an inability to deal with
outlying samples in the support set and inter-class distribution overlap. In this
paper, we overcome both limitations by proposing a new Prototype-centered
Attentive Learning (PAL) model composed of two novel components. First, a
prototype-centered contrastive learning loss is introduced to complement the
conventional query-centered learning objective, in order to make full use of
the limited training samples in each episode. Second, PAL further integrates a
hybrid attentive learning mechanism that can minimize the negative impacts of
outliers and promote class separation. Extensive experiments on four standard
few-shot action benchmarks show that our method clearly outperforms previous
state-of-the-art methods, with the improvement particularly significant (10+%)
on the most challenging fine-grained action recognition benchmark.
| [
{
"created": "Wed, 20 Jan 2021 11:48:12 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2021 23:39:54 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Mar 2021 16:22:37 GMT",
"version": "v3"
},
{
"created": "Sun, 28 Mar 2021 17:15:14 GMT",
"version": "v4"
}
] | 2021-10-26 | [
[
"Zhu",
"Xiatian",
""
],
[
"Toisoul",
"Antoine",
""
],
[
"Perez-Rua",
"Juan-Manuel",
""
],
[
"Zhang",
"Li",
""
],
[
"Martinez",
"Brais",
""
],
[
"Xiang",
"Tao",
""
]
] |
2101.08122 | Devis Tuia | Marrit Leenstra, Diego Marcos, Francesca Bovolo, Devis Tuia | Self-supervised pre-training enhances change detection in Sentinel-2
imagery | Presented at the Pattern Recognition and Remote Sensing (PRRS)
workshop in ICPR, 2021 | Part of the Lecture Notes in Computer Science book series (LNCS,
volume 12667), 2021 | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While annotated images for change detection using satellite imagery are
scarce and costly to obtain, there is a wealth of unlabeled images being
generated every day. In order to leverage these data to learn an image
representation more adequate for change detection, we explore methods that
exploit the temporal consistency of Sentinel-2 time series to obtain a usable
self-supervised learning signal. For this, we build and make publicly available
(https://zenodo.org/record/4280482) the Sentinel-2 Multitemporal Cities Pairs
(S2MTCP) dataset, containing multitemporal image pairs from 1520 urban areas
worldwide. We test the results of multiple self-supervised learning methods for
pre-training models for change detection and apply it on a public change
detection dataset made of Sentinel-2 image pairs (OSCD).
| [
{
"created": "Wed, 20 Jan 2021 13:47:25 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Apr 2021 20:43:10 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Leenstra",
"Marrit",
""
],
[
"Marcos",
"Diego",
""
],
[
"Bovolo",
"Francesca",
""
],
[
"Tuia",
"Devis",
""
]
] |
2101.08211 | Xinwei Yu | Xinwei Yu, Matthew S. Creamer, Francesco Randi, Anuj K. Sharma, Scott
W. Linderman, Andrew M. Leifer | Fast deep learning correspondence for neuron tracking and identification
in C.elegans using synthetic training | 5 figures | eLife 2021;10:e66410 | 10.7554/eLife.66410 | null | q-bio.QM cs.CV q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present an automated method to track and identify neurons in C. elegans,
called "fast Deep Learning Correspondence" or fDLC, based on the transformer
network architecture. The model is trained once on empirically derived
synthetic data and then predicts neural correspondence across held-out real
animals via transfer learning. The same pre-trained model both tracks neurons
across time and identifies corresponding neurons across individuals.
Performance is evaluated against hand-annotated datasets, including NeuroPAL
[1]. Using only position information, the method achieves 80.0% accuracy at
tracking neurons within an individual and 65.8% accuracy at identifying neurons
across individuals. Accuracy is even higher on a published dataset [2].
Accuracy reaches 76.5% when using color information from NeuroPAL. Unlike
previous methods, fDLC does not require straightening or transforming the
animal into a canonical coordinate system. The method is fast and predicts
correspondence in 10 ms making it suitable for future real-time applications.
| [
{
"created": "Wed, 20 Jan 2021 16:46:37 GMT",
"version": "v1"
}
] | 2021-07-16 | [
[
"Yu",
"Xinwei",
""
],
[
"Creamer",
"Matthew S.",
""
],
[
"Randi",
"Francesco",
""
],
[
"Sharma",
"Anuj K.",
""
],
[
"Linderman",
"Scott W.",
""
],
[
"Leifer",
"Andrew M.",
""
]
] |
2101.08286 | Matthew Colbrook | Matthew J. Colbrook, Vegard Antun, Anders C. Hansen | Can stable and accurate neural networks be computed? -- On the barriers
of deep learning and Smale's 18th problem | 14 pages + SI Appendix | Proc. Natl. Acad. Sci. USA, 2022 | 10.1073/pnas.2107151119 | null | cs.LG cs.CV cs.NA cs.NE math.NA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning (DL) has had unprecedented success and is now entering
scientific computing with full force. However, current DL methods typically
suffer from instability, even when universal approximation properties guarantee
the existence of stable neural networks (NNs). We address this paradox by
demonstrating basic well-conditioned problems in scientific computing where one
can prove the existence of NNs with great approximation qualities; however,
there does not exist any algorithm, even randomised, that can train (or
compute) such a NN. For any positive integers $K > 2$ and $L$, there are cases
where simultaneously: (a) no randomised training algorithm can compute a NN
correct to $K$ digits with probability greater than $1/2$, (b) there exists a
deterministic training algorithm that computes a NN with $K-1$ correct digits,
but any such (even randomised) algorithm needs arbitrarily many training data,
(c) there exists a deterministic training algorithm that computes a NN with
$K-2$ correct digits using no more than $L$ training samples. These results
imply a classification theory describing conditions under which (stable) NNs
with a given accuracy can be computed by an algorithm. We begin this theory by
establishing sufficient conditions for the existence of algorithms that compute
stable NNs in inverse problems. We introduce Fast Iterative REstarted NETworks
(FIRENETs), which we both prove and numerically verify are stable. Moreover, we
prove that only $\mathcal{O}(|\log(\epsilon)|)$ layers are needed for an
$\epsilon$-accurate solution to the inverse problem.
| [
{
"created": "Wed, 20 Jan 2021 19:04:17 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Apr 2021 17:09:49 GMT",
"version": "v2"
}
] | 2022-04-04 | [
[
"Colbrook",
"Matthew J.",
""
],
[
"Antun",
"Vegard",
""
],
[
"Hansen",
"Anders C.",
""
]
] |
2101.08345 | Giovanna Menardi | Giovanna Menardi | Nonparametric clustering for image segmentation | null | Statistical Analysis and Data Mining, 13(1), 83-97 (2020) | 10.1002/sam.11444 | null | cs.CV eess.IV stat.AP | http://creativecommons.org/licenses/by/4.0/ | Image segmentation aims at identifying regions of interest within an image,
by grouping pixels according to their properties. This task resembles the
statistical one of clustering, yet many standard clustering methods fail to
meet the basic requirements of image segmentation: segment shapes are often
biased toward predetermined shapes and their number is rarely determined
automatically. Nonparametric clustering is, in principle, free from these
limitations and turns out to be particularly suitable for the task of image
segmentation. This is also witnessed by several operational analogies, as, for
instance, the resort to topological data analysis and spatial tessellation in
both the frameworks. We discuss the application of nonparametric clustering to
image segmentation and provide an algorithm specific for this task. Pixel
similarity is evaluated in terms of density of the color representation and the
adjacency structure of the pixels is exploited to introduce a simple, yet
effective method to identify image segments as disconnected high-density
regions. The proposed method works both to segment an image and to detect its
boundaries and can be seen as a generalization to color images of the class of
thresholding methods.
| [
{
"created": "Wed, 20 Jan 2021 22:27:44 GMT",
"version": "v1"
}
] | 2021-01-22 | [
[
"Menardi",
"Giovanna",
""
]
] |
2101.08387 | Yongquan Yang | Yongquan Yang, Haijun Lv, Ning Chen | A Survey on Ensemble Learning under the Era of Deep Learning | 47 pages, 8 figures, 15 tables | Artificial Intelligence Review, 2022 | 10.1007/s10462-022-10283-5 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the dominant position of deep learning (mostly deep neural networks)
in various artificial intelligence applications, recently, ensemble learning
based on deep neural networks (ensemble deep learning) has shown significant
performance in improving the generalization of learning systems. However, since
modern deep neural networks usually have millions to billions of parameters,
the time and space overheads for training multiple base deep learners and
testing with the ensemble deep learner are far greater than that of traditional
ensemble learning. Though several algorithms of fast ensemble deep learning
have been proposed to promote the deployment of ensemble deep learning in some
applications, further advances still need to be made for many applications in
specific fields, where the developing time and computing resources are usually
restricted or the data to be processed is of large dimensionality. An urgent
problem needs to be solved is how to take the significant advantages of
ensemble deep learning while reduce the required expenses so that many more
applications in specific fields can benefit from it. For the alleviation of
this problem, it is essential to know about how ensemble learning has developed
under the era of deep learning. Thus, in this article, we present fundamental
discussions focusing on data analyses of published works, methodologies, recent
advances and unattainability of traditional ensemble learning and ensemble deep
learning. We hope this article will be helpful to realize the intrinsic
problems and technical challenges faced by future developments of ensemble
learning under the era of deep learning.
| [
{
"created": "Thu, 21 Jan 2021 01:33:23 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Apr 2021 03:28:11 GMT",
"version": "v2"
},
{
"created": "Tue, 18 May 2021 03:47:12 GMT",
"version": "v3"
},
{
"created": "Tue, 31 Aug 2021 04:15:33 GMT",
"version": "v4"
},
{
"created": "Mon, 9 May 2022 07:02:31 GMT",
"version": "v5"
},
{
"created": "Wed, 28 Sep 2022 02:07:18 GMT",
"version": "v6"
}
] | 2022-11-07 | [
[
"Yang",
"Yongquan",
""
],
[
"Lv",
"Haijun",
""
],
[
"Chen",
"Ning",
""
]
] |
2101.08434 | Varad Bhatnagar | Ravi Raj, Varad Bhatnagar, Aman Kumar Singh, Sneha Mane and Nilima
Walde | Video Summarization: Study of various techniques | null | Video Summarization: Study of Various Techniques Proceedings of
IRAJ International Conference, 26th May, 2019, Pune, India | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A comparative study of various techniques which can be used for summarization
of videos, i.e., video-to-video conversion, is presented along with the respective
architectures, results, strengths, and shortcomings. In all approaches, a lengthy
video is converted into a shorter video which aims to capture all important
events that are present in the original video. The definition of 'important
event' may vary according to the context, such as a sports video and a
documentary may have different events which are classified as important.
| [
{
"created": "Thu, 21 Jan 2021 04:45:57 GMT",
"version": "v1"
}
] | 2021-01-22 | [
[
"Raj",
"Ravi",
""
],
[
"Bhatnagar",
"Varad",
""
],
[
"Singh",
"Aman Kumar",
""
],
[
"Mane",
"Sneha",
""
],
[
"Walde",
"Nilima",
""
]
] |
2101.08448 | Kishor Bharti Mr. | Kishor Bharti, Alba Cervera-Lierta, Thi Ha Kyaw, Tobias Haug, Sumner
Alperin-Lea, Abhinav Anand, Matthias Degroote, Hermanni Heimonen, Jakob S.
Kottmann, Tim Menke, Wai-Keong Mok, Sukin Sim, Leong-Chuan Kwek, Al\'an
Aspuru-Guzik | Noisy intermediate-scale quantum (NISQ) algorithms | Added new content, Modified certain parts and the paper structure | Rev. Mod. Phys. 94, 015004 (2022) | 10.1103/RevModPhys.94.015004 | null | quant-ph cond-mat.stat-mech cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A universal fault-tolerant quantum computer that can solve efficiently
problems such as integer factorization and unstructured database search
requires millions of qubits with low error rates and long coherence times.
While the experimental advancement towards realizing such devices will
potentially take decades of research, noisy intermediate-scale quantum (NISQ)
computers already exist. These computers are composed of hundreds of noisy
qubits, i.e. qubits that are not error-corrected, and therefore perform
imperfect operations in a limited coherence time. In the search for quantum
advantage with these devices, algorithms have been proposed for applications in
various disciplines spanning physics, machine learning, quantum chemistry and
combinatorial optimization. The goal of such algorithms is to leverage the
limited available resources to perform classically challenging tasks. In this
review, we provide a thorough summary of NISQ computational paradigms and
algorithms. We discuss the key structure of these algorithms, their
limitations, and advantages. We additionally provide a comprehensive overview
of various benchmarking and software tools useful for programming and testing
NISQ devices.
| [
{
"created": "Thu, 21 Jan 2021 05:27:34 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 14:22:19 GMT",
"version": "v2"
}
] | 2022-02-17 | [
[
"Bharti",
"Kishor",
""
],
[
"Cervera-Lierta",
"Alba",
""
],
[
"Kyaw",
"Thi Ha",
""
],
[
"Haug",
"Tobias",
""
],
[
"Alperin-Lea",
"Sumner",
""
],
[
"Anand",
"Abhinav",
""
],
[
"Degroote",
"Matthias",
""
],
[
"Heimonen",
"Hermanni",
""
],
[
"Kottmann",
"Jakob S.",
""
],
[
"Menke",
"Tim",
""
],
[
"Mok",
"Wai-Keong",
""
],
[
"Sim",
"Sukin",
""
],
[
"Kwek",
"Leong-Chuan",
""
],
[
"Aspuru-Guzik",
"Alán",
""
]
] |
2101.08700 | Terry Ruas Ph.D. | Terry Ruas, William Grosky, Akiko Aizawa | Multi-sense embeddings through a word sense disambiguation process | null | Expert Systems with Applications. Volume 136, 1 December 2019,
Pages 288-303 | 10.1016/j.eswa.2019.06.026 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Natural Language Understanding has seen an increasing number of publications
in the last few years, especially after robust word embeddings models became
prominent, when they proved themselves able to capture and represent semantic
relationships from massive amounts of data. Nevertheless, traditional models
often fall short in intrinsic issues of linguistics, such as polysemy and
homonymy. Any expert system that makes use of natural language at its core can
be affected by a weak semantic representation of text, resulting in inaccurate
outcomes based on poor decisions. To mitigate such issues, we propose a novel
approach called Most Suitable Sense Annotation (MSSA), that disambiguates and
annotates each word by its specific sense, considering the semantic effects of
its context. Our approach brings three main contributions to the semantic
representation scenario: (i) an unsupervised technique that disambiguates and
annotates words by their senses, (ii) a multi-sense embeddings model that can
be extended to any traditional word embeddings algorithm, and (iii) a recurrent
methodology that allows our models to be re-used and their representations
refined. We test our approach on six different benchmarks for the word
similarity task, showing that our approach can produce state-of-the-art results
and outperforms several more complex state-of-the-art systems.
| [
{
"created": "Thu, 21 Jan 2021 16:22:34 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Dec 2022 10:03:47 GMT",
"version": "v2"
}
] | 2022-12-20 | [
[
"Ruas",
"Terry",
""
],
[
"Grosky",
"William",
""
],
[
"Aizawa",
"Akiko",
""
]
] |
2101.08717 | Jacson Rodrigues Correia-Silva | Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue,
Alberto F. De Souza, Thiago Oliveira-Santos | Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from
Black-box Models? | The code is available at https://github.com/jeiks/Stealing_DL_Models | Pattern Recognition 113 (2021) 107830 | 10.1016/j.patcog.2021.107830 | null | cs.CR cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Convolutional neural networks have been successful lately enabling companies
to develop neural-based products, which demand an expensive process involving
data acquisition and annotation, and model generation, usually requiring
experts. Given all these costs, companies are concerned about the security of
their models against copying and deliver them as black boxes accessed through APIs.
Nonetheless, we argue that even black-box models still have some
vulnerabilities. In a preliminary work, we presented a simple, yet powerful,
method to copy black-box models by querying them with natural random images. In
this work, we consolidate and extend the copycat method: (i) some constraints
are waived; (ii) an extensive evaluation with several problems is performed;
(iii) models are copied between different architectures; and, (iv) a deeper
analysis is performed by looking at the copycat behavior. Results show that
natural random images are effective to generate copycats for several problems.
| [
{
"created": "Thu, 21 Jan 2021 16:55:14 GMT",
"version": "v1"
}
] | 2021-01-22 | [
[
"Correia-Silva",
"Jacson Rodrigues",
""
],
[
"Berriel",
"Rodrigo F.",
""
],
[
"Badue",
"Claudine",
""
],
[
"De Souza",
"Alberto F.",
""
],
[
"Oliveira-Santos",
"Thiago",
""
]
] |
2101.08732 | Lang Huang | Lang Huang, Chao Zhang and Hongyang Zhang | Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning | Accepted at T-PAMI. Journal version of arXiv:2002.10319 [cs.LG]
(NeurIPS2020). 22 pages, 15 figures, 13 tables | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2022 | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose self-adaptive training -- a unified training algorithm that
dynamically calibrates and enhances training processes by model predictions
without incurring an extra computational cost -- to advance both supervised and
self-supervised learning of deep neural networks. We analyze the training
dynamics of deep networks on training data that are corrupted by, e.g., random
noise and adversarial examples. Our analysis shows that model predictions are
able to magnify useful underlying information in data and this phenomenon
occurs broadly even in the absence of any label information, highlighting that
model predictions could substantially benefit the training processes:
self-adaptive training improves the generalization of deep networks under noise
and enhances the self-supervised representation learning. The analysis also
sheds light on understanding deep learning, e.g., a potential explanation of
the recently-discovered double-descent phenomenon in empirical risk
minimization and the collapsing issue of the state-of-the-art self-supervised
learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets
verify the effectiveness of our approach in three applications: classification
with label noise, selective classification, and linear evaluation. To
facilitate future research, the code has been made publicly available at
https://github.com/LayneH/self-adaptive-training.
| [
{
"created": "Thu, 21 Jan 2021 17:17:30 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Dec 2021 08:43:44 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Oct 2022 07:38:57 GMT",
"version": "v3"
}
] | 2022-10-17 | [
[
"Huang",
"Lang",
""
],
[
"Zhang",
"Chao",
""
],
[
"Zhang",
"Hongyang",
""
]
] |
2101.08904 | Xiong Liu | Zhaoyi Chen, Xiong Liu, William Hogan, Elizabeth Shenkman, Jiang Bian | Applications of artificial intelligence in drug development using
real-world data | null | Drug Discovery Today 2020 | 10.1016/j.drudis.2020.12.013 | PMID: 33358699 | cs.CY cs.CL cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The US Food and Drug Administration (FDA) has been actively promoting the use
of real-world data (RWD) in drug development. RWD can generate important
real-world evidence reflecting the real-world clinical environment where the
treatments are used. Meanwhile, artificial intelligence (AI), especially
machine- and deep-learning (ML/DL) methods, have been increasingly used across
many stages of the drug development process. Advancements in AI have also
provided new strategies to analyze large, multidimensional RWD. Thus, we
conducted a rapid review of articles from the past 20 years, to provide an
overview of the drug development studies that use both AI and RWD. We found
that the most popular applications were adverse event detection, trial
recruitment, and drug repurposing. Here, we also discuss current research gaps
and future opportunities.
| [
{
"created": "Fri, 22 Jan 2021 01:13:54 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Feb 2021 17:59:01 GMT",
"version": "v2"
}
] | 2021-02-03 | [
[
"Chen",
"Zhaoyi",
""
],
[
"Liu",
"Xiong",
""
],
[
"Hogan",
"William",
""
],
[
"Shenkman",
"Elizabeth",
""
],
[
"Bian",
"Jiang",
""
]
] |
2101.08993 | Vivian Wen Hui Wong | Vivian Wen Hui Wong, Max Ferguson, Kincho H. Law, Yung-Tsun Tina Lee,
Paul Witherell | Automatic Volumetric Segmentation of Additive Manufacturing Defects with
3D U-Net | Accepted by AAAI 2020 Spring Symposia | AAAI 2020 Spring Symposia, Stanford, CA, USA, Mar 23-25, 2020 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmentation of additive manufacturing (AM) defects in X-ray Computed
Tomography (XCT) images is challenging, due to the poor contrast, small sizes
and variation in appearance of defects. Automatic segmentation can, however,
provide quality control for additive manufacturing. Over recent years,
three-dimensional convolutional neural networks (3D CNNs) have performed well
in the volumetric segmentation of medical images. In this work, we leverage
techniques from the medical imaging domain and propose training a 3D U-Net
model to automatically segment defects in XCT images of AM samples. This work
not only contributes to the use of machine learning for AM defect detection but
also demonstrates for the first time 3D volumetric segmentation in AM. We train
and test with three variants of the 3D U-Net on an AM dataset, achieving a mean
intersection over union (IoU) value of 88.4%.
| [
{
"created": "Fri, 22 Jan 2021 08:24:54 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Wong",
"Vivian Wen Hui",
""
],
[
"Ferguson",
"Max",
""
],
[
"Law",
"Kincho H.",
""
],
[
"Lee",
"Yung-Tsun Tina",
""
],
[
"Witherell",
"Paul",
""
]
] |
2101.09021 | Hoang Trinh Man | Trinh Man Hoang, Jinjia Zhou | B-DRRN: A Block Information Constrained Deep Recursive Residual Network
for Video Compression Artifacts Reduction | null | 2019 Picture Coding Symposium (PCS), Ningbo, China, 2019, pp. 1-5 | 10.1109/PCS48520.2019.8954521 | null | eess.IV cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the video compression ratio nowadays becomes higher, the video
coders such as H.264/AVC, H.265/HEVC, and H.266/VVC still suffer from video
artifacts. In this paper, we design a neural network to enhance the quality of
the compressed frame by leveraging the block information, called B-DRRN (Deep
Recursive Residual Network with Block information). Firstly, an extra network
branch is designed for leveraging the block information of the coding unit
(CU). Moreover, to avoid a great increase in the network size, Recursive
Residual structure and weight-sharing techniques are applied. We also construct a
new large-scale dataset with 209,152 training samples. Experimental results
show that the proposed B-DRRN can reduce 6.16% BD-rate compared to the HEVC
standard. By efficiently adding an extra network branch, this work improves the
performance of the main network without requiring any additional memory for
storage.
| [
{
"created": "Fri, 22 Jan 2021 09:35:44 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 05:52:08 GMT",
"version": "v2"
}
] | 2021-02-02 | [
[
"Hoang",
"Trinh Man",
""
],
[
"Zhou",
"Jinjia",
""
]
] |
2101.09023 | Terry Ruas Ph.D. | Terry Ruas, Charles Henrique Porto Ferreira, William Grosky,
Fabr\'icio Olivetti de Fran\c{c}a, D\'ebora Maria Rossi Medeiros | Enhanced word embeddings using multi-semantic representation through
lexical chains | null | Information Sciences. Volume 532, September 2020, Pages 16-32 | 10.1016/j.ins.2020.04.048 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The relationship between words in a sentence often tells us more about the
underlying semantic content of a document than its actual words, individually.
In this work, we propose two novel algorithms, called Flexible Lexical Chain II
and Fixed Lexical Chain II. These algorithms combine the semantic relations
derived from lexical chains, prior knowledge from lexical databases, and the
robustness of the distributional hypothesis in word embeddings as building
blocks forming a single system. In short, our approach has three main
contributions: (i) a set of techniques that fully integrate word embeddings and
lexical chains; (ii) a more robust semantic representation that considers the
latent relation between words in a document; and (iii) lightweight word
embeddings models that can be extended to any natural language task. We intend
to assess the knowledge of pre-trained models to evaluate their robustness in
the document classification task. The proposed techniques are tested against
seven word embeddings algorithms using five different machine learning
classifiers over six scenarios in the document classification task. Our results
show that the integration of lexical chains and word embedding representations
sustains state-of-the-art results, even against more complex systems.
| [
{
"created": "Fri, 22 Jan 2021 09:43:33 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Dec 2022 10:16:23 GMT",
"version": "v2"
}
] | 2022-12-20 | [
[
"Ruas",
"Terry",
""
],
[
"Ferreira",
"Charles Henrique Porto",
""
],
[
"Grosky",
"William",
""
],
[
"de França",
"Fabrício Olivetti",
""
],
[
"Medeiros",
"Débora Maria Rossi",
""
]
] |
2101.09048 | Shiwei Liu | Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy | Selfish Sparse RNN Training | Published in Proceedings of the 38th International Conference on
Machine Learning. Code can be found in
https://github.com/Shiweiliuiiiiiii/Selfish-RNN | Proceedings of the 38th International Conference on Machine
Learning (2021) | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse neural networks have been widely applied to reduce the computational
demands of training and deploying over-parameterized deep neural networks. For
inference acceleration, methods that discover a sparse network from a
pre-trained dense network (dense-to-sparse training) work effectively.
Recently, dynamic sparse training (DST) has been proposed to train sparse
neural networks without pre-training a dense model (sparse-to-sparse training),
so that the training process can also be accelerated. However, previous
sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs)
and Convolutional Neural Networks (CNNs), failing to match the performance of
dense-to-sparse methods in the Recurrent Neural Networks (RNNs) setting. In
this paper, we propose an approach to train intrinsically sparse RNNs with a
fixed parameter count in one single run, without compromising performance.
During training, we allow RNN layers to have a non-uniform redistribution
across cell gates for better regularization. Further, we propose SNT-ASGD, a
novel variant of the averaged stochastic gradient optimizer, which
significantly improves the performance of all sparse training methods for RNNs.
Using these strategies, we achieve state-of-the-art sparse training results,
better than the dense-to-sparse methods, with various types of RNNs on Penn
TreeBank and Wikitext-2 datasets. Our codes are available at
https://github.com/Shiweiliuiiiiiii/Selfish-RNN.
| [
{
"created": "Fri, 22 Jan 2021 10:45:40 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 16:38:09 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Jun 2021 05:46:23 GMT",
"version": "v3"
}
] | 2021-06-16 | [
[
"Liu",
"Shiwei",
""
],
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Pei",
"Yulong",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
2101.09129 | Nicola Messina | Nicola Messina, Giuseppe Amato, Fabio Carrara, Claudio Gennaro,
Fabrizio Falchi | Solving the Same-Different Task with Convolutional Neural Networks | Preprint of the paper published in Pattern Recognition Letters
(Elsevier) | Pattern Recognition Letters, Volume 143, March 2021, Pages 75-80 | 10.1016/j.patrec.2020.12.019 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning demonstrated major abilities in solving many kinds of different
real-world problems in the computer vision literature. However, such models are still
strained by simple reasoning tasks that humans consider easy to solve. In this
work, we probe current state-of-the-art convolutional neural networks on a
difficult set of tasks known as the same-different problems. All the problems
require the same prerequisite to be solved correctly: understanding if two
random shapes inside the same image are the same or not. With the experiments
carried out in this work, we demonstrate that residual connections, and more
generally the skip connections, seem to have only a marginal impact on the
learning of the proposed problems. In particular, we experiment with DenseNets,
and we examine the contribution of residual and recurrent connections in
already tested architectures, ResNet-18, and CorNet-S respectively. Our
experiments show that older feed-forward networks, AlexNet and VGG, are almost
unable to learn the proposed problems, except in some specific scenarios. We
show that recently introduced architectures can converge even in the cases
where the important parts of their architecture are removed. We finally carry
out some zero-shot generalization tests, and we discover that in these
scenarios residual and recurrent connections can have a stronger impact on the
overall test accuracy. On four difficult problems from the SVRT dataset, we can
reach state-of-the-art results with respect to the previous approaches,
obtaining super-human performances on three of the four problems.
| [
{
"created": "Fri, 22 Jan 2021 14:35:33 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Messina",
"Nicola",
""
],
[
"Amato",
"Giuseppe",
""
],
[
"Carrara",
"Fabio",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Falchi",
"Fabrizio",
""
]
] |
2101.09163 | Aidong Yang | Ye Ouyang (1), Lilei Wang (1), Aidong Yang (1), Maulik Shah (2), David
Belanger (3 and 4), Tongqing Gao (5), Leping Wei (6), Yaqin Zhang (7) ((1)
AsiaInfo Technologies, (2) Verizon, (3) AT&T, (4) Stevens Institute of
Technology, (5) China Mobile, (6) China Telecom, (7) Tsinghua University) | The Next Decade of Telecommunications Artificial Intelligence | 50 pages in English 24 figures. (Note version 5 is 19 pages, in
Chinese, with 24 figures) | CAAI Artificial Intelligence Research, 2022, 1 (1): 28-53 | 10.26599/AIR.2022.9150003 | null | cs.NI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been an exciting journey since the mobile communications and
artificial intelligence were conceived 37 years and 64 years ago. While both
fields evolved independently and profoundly changed communications and
computing industries, the rapid convergence of 5G and deep learning is
beginning to significantly transform the core communication infrastructure,
network management and vertical applications. The paper first outlines the
individual roadmaps of mobile communications and artificial intelligence in the
early stage, with a focus on the era from 3G to 5G, when AI and
mobile communications started to converge. With regard to telecommunications
artificial intelligence, the paper further introduces in detail the progress of
artificial intelligence in the ecosystem of mobile communications. The paper
then summarizes the classifications of AI in telecom ecosystems along with its
evolution paths specified by various international telecommunications
standardization bodies. Towards the next decade, the paper forecasts the
prospective roadmap of telecommunications artificial intelligence. In line with
3GPP and ITU-R timeline of 5G & 6G, the paper further explores the network
intelligence following 3GPP and ORAN routes respectively, experience and
intention driven network management and operation, network AI signalling
system, intelligent middle-office based BSS, intelligent customer experience
management and policy control driven by BSS and OSS convergence, evolution from
SLA to ELA, and intelligent private network for verticals. The paper is
concluded with the vision that AI will reshape the future B5G or 6G landscape
and we need pivot our R&D, standardizations, and ecosystem to fully take the
unprecedented opportunities.
| [
{
"created": "Tue, 19 Jan 2021 07:33:44 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jan 2021 02:25:23 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Feb 2021 10:19:47 GMT",
"version": "v3"
},
{
"created": "Mon, 1 Mar 2021 14:41:49 GMT",
"version": "v4"
},
{
"created": "Thu, 2 Dec 2021 02:25:55 GMT",
"version": "v5"
},
{
"created": "Fri, 3 Dec 2021 02:18:41 GMT",
"version": "v6"
}
] | 2022-10-11 | [
[
"Ouyang",
"Ye",
"",
"3 and 4"
],
[
"Wang",
"Lilei",
"",
"3 and 4"
],
[
"Yang",
"Aidong",
"",
"3 and 4"
],
[
"Shah",
"Maulik",
"",
"3 and 4"
],
[
"Belanger",
"David",
"",
"3 and 4"
],
[
"Gao",
"Tongqing",
""
],
[
"Wei",
"Leping",
""
],
[
"Zhang",
"Yaqin",
""
]
] |
2101.09176 | Luis Leiva | Luis A. Leiva, Yunfei Xue, Avya Bansal, Hamed R. Tavakoli,
Tu\u{g}\c{c}e K\"oro\u{g}lu, Niraj R. Dayama, Antti Oulasvirta | Understanding Visual Saliency in Mobile User Interfaces | null | Proceedings of the 22nd Intl. Conf. on Human-Computer Interaction
with Mobile Devices and Services (MobileHCI), 2020 | 10.1145/3379503.3403557 | null | cs.HC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For graphical user interface (UI) design, it is important to understand what
attracts visual attention. While previous work on saliency has focused on
desktop and web-based UIs, mobile app UIs differ from these in several
respects. We present findings from a controlled study with 30 participants and
193 mobile UIs. The results speak to a role of expectations in guiding where
users look. A strong bias toward the top-left corner of the display, text, and
images was evident, while bottom-up features such as color or size affected
saliency less. Classic, parameter-free saliency models showed a weak fit with
the data, and data-driven models improved significantly when trained
specifically on this dataset (e.g., NSS rose from 0.66 to 0.84). We also
release the first annotated dataset for investigating visual saliency in mobile
UIs.
| [
{
"created": "Fri, 22 Jan 2021 15:45:13 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Leiva",
"Luis A.",
""
],
[
"Xue",
"Yunfei",
""
],
[
"Bansal",
"Avya",
""
],
[
"Tavakoli",
"Hamed R.",
""
],
[
"Köroğlu",
"Tuğçe",
""
],
[
"Dayama",
"Niraj R.",
""
],
[
"Oulasvirta",
"Antti",
""
]
] |
2101.09193 | Petra Bevandi\'c | Petra Bevandi\'c, Ivan Kre\v{s}o, Marin Or\v{s}i\'c, Sini\v{s}a
\v{S}egvi\'c | Dense outlier detection and open-set recognition based on training with
noisy negative images | Published in Image and Vision Computing | Image and Vision Computing, Vol. 124, 2022, 104490 | 10.1016/j.imavis.2022.104490 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional models often produce inadequate predictions for inputs
foreign to the training distribution. Consequently, the problem of detecting
outlier images has recently been receiving a lot of attention. Unlike most
previous work, we address this problem in the dense prediction context in order
to be able to locate outlier objects in front of in-distribution background.
Our approach is based on two reasonable assumptions. First, we assume that the
inlier dataset is related to some narrow application field (e.g. road driving).
Second, we assume that there exists a general-purpose dataset which is much
more diverse than the inlier dataset (e.g. ImageNet-1k). We consider pixels
from the general-purpose dataset as noisy negative training samples since most
(but not all) of them are outliers. We encourage the model to recognize borders
between known and unknown by pasting jittered negative patches over inlier
training images. Our experiments target two dense open-set recognition
benchmarks (WildDash 1 and Fishyscapes) and one dense open-set recognition
dataset (StreetHazard). Extensive performance evaluation indicates competitive
potential of the proposed approach.
| [
{
"created": "Fri, 22 Jan 2021 16:31:36 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Feb 2022 17:08:29 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Mar 2024 09:22:32 GMT",
"version": "v3"
}
] | 2024-03-13 | [
[
"Bevandić",
"Petra",
""
],
[
"Krešo",
"Ivan",
""
],
[
"Oršić",
"Marin",
""
],
[
"Šegvić",
"Siniša",
""
]
] |
2101.09343 | Bin Han | Amina Lejla Ibrahimpasic, Bin Han, and Hans D. Schotten | AI-Empowered VNF Migration as a Cost-Loss-Effective Solution for Network
Resilience | Accepted by the IEEE WCNC 2021 Workshop on Intelligent Computing and
Caching at the Network Edge | 2021 IEEE Wireless Communications and Networking Conference
Workshops (WCNCW) | 10.1109/WCNCW49093.2021.9420029 | null | cs.NI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With a wide deployment of Multi-Access Edge Computing (MEC) in the Fifth
Generation (5G) mobile networks, virtual network functions (VNF) can be
flexibly migrated between difference locations, and therewith significantly
enhances the network resilience to counter the degradation in quality of
service (QoS) due to network function outages. A balance has to be taken
carefully, between the loss reduced by VNF migration and the operations cost
generated thereby. To achieve this in practical scenarios with realistic user
behavior, it calls for models of both cost and user mobility. This paper
proposes a novel cost model and a AI-empowered approach for a rational
migration of stateful VNFs, which minimizes the sum of operations cost and
potential loss caused by outages, and is capable to deal with the complex
realistic user mobility patterns.
| [
{
"created": "Fri, 22 Jan 2021 21:47:41 GMT",
"version": "v1"
}
] | 2021-11-30 | [
[
"Ibrahimpasic",
"Amina Lejla",
""
],
[
"Han",
"Bin",
""
],
[
"Schotten",
"Hans D.",
""
]
] |
2101.09345 | Fouzi Harrag | Fouzi Harrag, Maria Debbah, Kareem Darwish, Ahmed Abdelali | BERT Transformer model for Detecting Arabic GPT2 Auto-Generated Tweets | null | Proceedings of the Fifth Arabic Natural Language Processing
Workshop (WANLP @ COLING 2020) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | During the last two decades, we have progressively turned to the Internet and
social media to find news, hold conversations, and share opinions. Recently,
OpenAI has developed a machine learning system called GPT-2, for Generative
Pre-trained Transformer-2, which can produce deepfake texts. It can generate
blocks of text based on brief writing prompts that look like they were written
by humans, facilitating the spread of false or auto-generated text. In line with
this progress, and in order to counteract potential dangers, several methods
have been proposed for detecting text written by these language models. In
this paper, we propose a transfer learning based model that will be able to
detect if an Arabic sentence is written by humans or automatically generated by
bots. Our dataset is based on tweets from a previous work, which we have
crawled and extended using the Twitter API. We used GPT2-Small-Arabic to
generate fake Arabic Sentences. For evaluation, we compared different recurrent
neural network (RNN) word embeddings based baseline models, namely: LSTM,
BI-LSTM, GRU and BI-GRU, with a transformer-based model. Our new
transfer-learning model has obtained an accuracy up to 98%. To the best of our
knowledge, this work is the first study where ARABERT and GPT2 were combined to
detect and classify the Arabic auto-generated texts.
| [
{
"created": "Fri, 22 Jan 2021 21:50:38 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Harrag",
"Fouzi",
""
],
[
"Debbah",
"Maria",
""
],
[
"Darwish",
"Kareem",
""
],
[
"Abdelali",
"Ahmed",
""
]
] |
2101.09376 | Aaron Hertzmann | Aaron Hertzmann | The Role of Edges in Line Drawing Perception | Accepted to _Perception_ | Perception. 2021;50(3):266-275 | 10.1177/0301006621994407 | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has often been conjectured that the effectiveness of line drawings can be
explained by the similarity of edge images to line drawings. This paper
presents several problems with explaining line drawing perception in terms of
edges, and how the recently-proposed Realism Hypothesis of Hertzmann (2020)
resolves these problems. There is nonetheless existing evidence that edges are
often the best features for predicting where people draw lines; this paper
describes how the Realism Hypothesis can explain this evidence.
| [
{
"created": "Fri, 22 Jan 2021 23:22:05 GMT",
"version": "v1"
}
] | 2021-03-15 | [
[
"Hertzmann",
"Aaron",
""
]
] |
2101.09397 | Juan Irving Vasquez-Gomez | J. Irving Vasquez-Gomez and David Troncoso and Israel Becerra and
Enrique Sucar and Rafael Murrieta-Cid | Next-best-view Regression using a 3D Convolutional Neural Network | Accepted to Machine Vision and Applications | Machine Vision and Applications 32, 42 (2021) | 10.1007/s00138-020-01166-2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated three-dimensional (3D) object reconstruction is the task of
building a geometric representation of a physical object by means of sensing
its surface. Even though new single-view reconstruction techniques can predict
the surface, they lead to incomplete models, especially for uncommon objects
such as antique objects or art sculptures. Therefore, to achieve the task's
goals, it is essential to automatically determine the locations where the
sensor will be placed so that the surface will be completely observed. This
problem is known as the next-best-view problem. In this paper, we propose a
data-driven approach to address the problem. The proposed approach trains a 3D
convolutional neural network (3D CNN) with previous reconstructions in order to
regress the position of the next-best-view. To the best of our
knowledge, this is one of the first works that directly infers the
next-best-view in a continuous space using a data-driven approach for the 3D
object reconstruction task. We have validated the proposed approach making use
of two groups of experiments. In the first group, several variants of the
proposed architecture are analyzed. Predicted next-best-views were observed to
be closely positioned to the ground truth. In the second group of experiments,
the proposed approach is requested to reconstruct several unseen objects,
namely, objects not considered by the 3D CNN during training nor validation.
Coverage percentages of up to 90% were observed. With respect to current
state-of-the-art methods, the proposed approach improves the performance of
previous next-best-view classification approaches and it is quite fast in
running time (3 frames per second), given that it does not compute the
expensive ray tracing required by previous information metrics.
| [
{
"created": "Sat, 23 Jan 2021 01:50:26 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Vasquez-Gomez",
"J. Irving",
""
],
[
"Troncoso",
"David",
""
],
[
"Becerra",
"Israel",
""
],
[
"Sucar",
"Enrique",
""
],
[
"Murrieta-Cid",
"Rafael",
""
]
] |
2101.09412 | Yazhou Yao | Huafeng Liu, Chuanyi Zhang, Yazhou Yao, Xiushen Wei, Fumin Shen, Jian
Zhang, and Zhenmin Tang | Exploiting Web Images for Fine-Grained Visual Recognition by Eliminating
Noisy Samples and Utilizing Hard Ones | null | IEEE Transactions on Multimedia, 2021 | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Labeling objects at a subordinate level typically requires expert knowledge,
which is not always available when using random annotators. As such, learning
directly from web images for fine-grained recognition has attracted broad
attention. However, the presence of label noise and hard examples in web images
are two obstacles for training robust fine-grained recognition models.
Therefore, in this paper, we propose a novel approach for removing irrelevant
samples from real-world web images during training, while employing useful hard
examples to update the network. Thus, our approach can alleviate the harmful
effects of irrelevant noisy web images and hard examples to achieve better
performance. Extensive experiments on three commonly used fine-grained datasets
demonstrate that our approach is far superior to current state-of-the-art
web-supervised methods.
| [
{
"created": "Sat, 23 Jan 2021 03:58:10 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Liu",
"Huafeng",
""
],
[
"Zhang",
"Chuanyi",
""
],
[
"Yao",
"Yazhou",
""
],
[
"Wei",
"Xiushen",
""
],
[
"Shen",
"Fumin",
""
],
[
"Zhang",
"Jian",
""
],
[
"Tang",
"Zhenmin",
""
]
] |
2101.09459 | Chongming Gao | Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, Tat-Seng
Chua | Advances and Challenges in Conversational Recommender Systems: A Survey | 33 pages, 8 figures, 6 tables | AI Open. Vol. 2. (2021) 100-126 | 10.1016/j.aiopen.2021.06.002 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems exploit interaction history to estimate user preference,
having been heavily used in a wide range of industry applications. However,
static recommendation models are difficult to answer two important questions
well due to inherent shortcomings: (a) What exactly does a user like? (b) Why
does a user like an item? The shortcomings are due to the way that static
models learn user preference, i.e., without explicit instructions and active
feedback from users. The recent rise of conversational recommender systems
(CRSs) changes this situation fundamentally. In a CRS, users and the system can
dynamically communicate through natural language interactions, which provide
unprecedented opportunities to explicitly obtain the exact preference of users.
Considerable efforts, spread across disparate settings and applications, have
been put into developing CRSs. Existing models, technologies, and evaluation
methods for CRSs are far from mature. In this paper, we provide a systematic
review of the techniques used in current CRSs. We summarize the key challenges
of developing CRSs in five directions: (1) Question-based user preference
elicitation. (2) Multi-turn conversational recommendation strategies. (3)
Dialogue understanding and generation. (4) Exploitation-exploration trade-offs.
(5) Evaluation and user simulation. These research directions involve multiple
research fields like information retrieval (IR), natural language processing
(NLP), and human-computer interaction (HCI). Based on these research
directions, we discuss some future challenges and opportunities. We provide a
road map for researchers from multiple communities to get started in this area.
We hope this survey can help to identify and address challenges in CRSs and
inspire future research.
| [
{
"created": "Sat, 23 Jan 2021 08:53:15 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jan 2021 13:26:00 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jan 2021 09:10:08 GMT",
"version": "v3"
},
{
"created": "Thu, 4 Feb 2021 15:45:37 GMT",
"version": "v4"
},
{
"created": "Sun, 7 Feb 2021 03:58:16 GMT",
"version": "v5"
},
{
"created": "Thu, 27 May 2021 04:10:53 GMT",
"version": "v6"
},
{
"created": "Fri, 24 Sep 2021 02:20:45 GMT",
"version": "v7"
}
] | 2021-09-27 | [
[
"Gao",
"Chongming",
""
],
[
"Lei",
"Wenqiang",
""
],
[
"He",
"Xiangnan",
""
],
[
"de Rijke",
"Maarten",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
2101.09461 | Gennaro Vessio Dr. | Moises Diaz, Momina Moetesum, Imran Siddiqi, Gennaro Vessio | Sequence-based Dynamic Handwriting Analysis for Parkinson's Disease
Detection with One-dimensional Convolutions and BiGRUs | null | Expert Systems with Applications, Volume 168, 15 April 2021,
114405 | 10.1016/j.eswa.2020.114405 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Parkinson's disease (PD) is commonly characterized by several motor symptoms,
such as bradykinesia, akinesia, rigidity, and tremor. The analysis of patients'
fine motor control, particularly handwriting, is a powerful tool to support PD
assessment. Over the years, various dynamic attributes of handwriting, such as
pen pressure, stroke speed, in-air time, etc., which can be captured with the
help of online handwriting acquisition tools, have been evaluated for the
identification of PD. Motion events, and their associated spatio-temporal
properties captured in online handwriting, enable effective classification of
PD patients through the identification of unique sequential patterns. This
paper proposes a novel classification model based on one-dimensional
convolutions and Bidirectional Gated Recurrent Units (BiGRUs) to assess the
potential of sequential information of handwriting in identifying Parkinsonian
symptoms. One-dimensional convolutions are applied to raw sequences as well as
derived features; the resulting sequences are then fed to BiGRU layers to
achieve the final classification. The proposed method outperformed
state-of-the-art approaches on the PaHaW dataset and achieved competitive
results on the NewHandPD dataset.
| [
{
"created": "Sat, 23 Jan 2021 09:25:13 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Diaz",
"Moises",
""
],
[
"Moetesum",
"Momina",
""
],
[
"Siddiqi",
"Imran",
""
],
[
"Vessio",
"Gennaro",
""
]
] |
2101.09642 | Hoang Trinh Man | Trinh Man Hoang, Jinjia Zhou, Yibo Fan | Image Compression with Encoder-Decoder Matched Semantic Segmentation | null | 2020 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), Seattle, WA, USA, 2020, pp. 619-623 | 10.1109/CVPRW50498.2020.00088 | null | eess.IV cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, layered image compression is demonstrated to be a promising
direction, which encodes a compact representation of the input image and applies
an up-sampling network to reconstruct the image. To further improve the quality
of the reconstructed image, some works transmit the semantic segment together
with the compressed image data. Consequently, the compression ratio is also
decreased because extra bits are required for transmitting the semantic
segment. To solve this problem, we propose a new layered image compression
framework with encoder-decoder matched semantic segmentation (EDMS). Then,
following the semantic segmentation, a special convolutional neural network is
used to enhance the inaccurate semantic segment. As a result, the accurate
semantic segment can be obtained in the decoder without requiring extra bits.
The experimental results show that the proposed EDMS framework achieves up to a
35.31% BD-rate reduction over the HEVC-based (BPG) codec, and 5% bitrate and 24%
encoding time savings compared to the state-of-the-art semantic-based image
codec.
| [
{
"created": "Sun, 24 Jan 2021 04:11:05 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 05:50:57 GMT",
"version": "v2"
}
] | 2021-02-02 | [
[
"Hoang",
"Trinh Man",
""
],
[
"Zhou",
"Jinjia",
""
],
[
"Fan",
"Yibo",
""
]
] |
2101.09643 | Yu Fu | Yu Fu, Xiao-Jun Wu | A Dual-branch Network for Infrared and Visible Image Fusion | null | 25th International Conference on Pattern Recognition (ICPR2020) | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning is a rapidly developing approach in the field of infrared and
visible image fusion. In this context, the use of dense blocks in deep networks
significantly improves the utilization of shallow information, and the
combination of the Generative Adversarial Network (GAN) also improves the
fusion performance of two source images. We propose a new method based on dense
blocks and GANs, and we directly insert the input image (the visible-light image) into
each layer of the entire network. We use SSIM and gradient loss functions that
are more consistent with perception instead of mean square error loss. After
the adversarial training between the generator and the discriminator, we show
that a trained end-to-end fusion network -- the generator network -- is finally
obtained. Our experiments show that the fused images obtained by our approach
achieve good scores on multiple evaluation indicators. Further, our fused
images have better visual effects in multiple comparison sets, which are more
satisfying to human visual perception.
| [
{
"created": "Sun, 24 Jan 2021 04:18:32 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Fu",
"Yu",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] |
2101.09710 | Gerrit Ecke | Gerrit A. Ecke, Harald M. Papp, Hanspeter A. Mallot | Exploitation of Image Statistics with Sparse Coding in the Case of
Stereo Vision | Author's accepted manuscript | Neural Networks, Volume 135, 2021, Pages 158-176 | 10.1016/j.neunet.2020.12.016 | null | cs.CV q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The sparse coding algorithm has served as a model for early processing in
mammalian vision. It has been assumed that the brain uses sparse coding to
exploit statistical properties of the sensory stream. We hypothesize that
sparse coding discovers patterns from the data set, which can be used to
estimate a set of stimulus parameters by simple readout. In this study, we
chose a model of stereo vision to test our hypothesis. We used the Locally
Competitive Algorithm (LCA), followed by a na\"ive Bayes classifier, to infer
stereo disparity. From the results we report three observations. First,
disparity inference was successful with this naturalistic processing pipeline.
Second, an expanded, highly redundant representation is required to robustly
identify the input patterns. Third, the inference error can be predicted from
the number of active coefficients in the LCA representation. We conclude that
sparse coding can generate a suitable general representation for subsequent
inference tasks. Keywords: Sparse coding; Locally Competitive Algorithm (LCA);
Efficient coding; Compact code; Probabilistic inference; Stereo vision
| [
{
"created": "Sun, 24 Jan 2021 12:45:25 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jan 2021 22:24:16 GMT",
"version": "v2"
}
] | 2021-01-28 | [
[
"Ecke",
"Gerrit A.",
""
],
[
"Papp",
"Harald M.",
""
],
[
"Mallot",
"Hanspeter A.",
""
]
] |
2101.09721 | Fabio Ferreira | Fabio Ferreira, Thomas Nierhoff, Frank Hutter | Learning Synthetic Environments for Reinforcement Learning with
Evolution Strategies | null | AAAI 2021 Meta-Learning Workshop | null | null | cs.LG cs.AI cs.NE | http://creativecommons.org/licenses/by/4.0/ | This work explores learning agent-agnostic synthetic environments (SEs) for
Reinforcement Learning. SEs act as a proxy for target environments and allow
agents to be trained more efficiently than when directly trained on the target
environment. We formulate this as a bi-level optimization problem and represent
an SE as a neural network. By using Natural Evolution Strategies and a
population of SE parameter vectors, we train agents in the inner loop on
evolving SEs while in the outer loop we use the performance on the target task
as a score for meta-updating the SE population. We show empirically that our
method is capable of learning SEs for two discrete-action-space tasks
(CartPole-v0 and Acrobot-v1) that allow us to train agents more robustly and
with up to 60% fewer steps. Not only do we show in experiments with 4000
evaluations that the SEs are robust against hyperparameter changes such as the
learning rate, batch sizes, and network sizes, but we also show that SEs trained
with DDQN agents transfer in limited ways to a discrete-action-space version of
TD3 and very well to Dueling DDQN.
| [
{
"created": "Sun, 24 Jan 2021 14:16:13 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jan 2021 18:53:35 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Feb 2021 15:03:39 GMT",
"version": "v3"
}
] | 2021-02-09 | [
[
"Ferreira",
"Fabio",
""
],
[
"Nierhoff",
"Thomas",
""
],
[
"Hutter",
"Frank",
""
]
] |
2101.09745 | Julian Tanke | Julian Tanke, Juergen Gall | Iterative Greedy Matching for 3D Human Pose Tracking from Multiple Views | German Conference on Pattern Recognition 2019 | GCPR 2019, pages 537--550 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this work we propose an approach for estimating 3D human poses of multiple
people from a set of calibrated cameras. Estimating 3D human poses from
multiple views has several compelling properties: human poses are estimated
within a global coordinate space and multiple cameras provide an extended field
of view which helps in resolving ambiguities, occlusions and motion blur. Our
approach builds upon a real-time 2D multi-person pose estimation system and
greedily solves the association problem between multiple views. We utilize
bipartite matching to track multiple people over multiple frames. This proves
to be especially efficient, as problems associated with greedy matching such as
occlusion can be easily resolved in 3D. Our approach achieves state-of-the-art
results on popular benchmarks and may serve as a baseline for future work.
| [
{
"created": "Sun, 24 Jan 2021 16:28:10 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Tanke",
"Julian",
""
],
[
"Gall",
"Juergen",
""
]
] |
2101.09781 | Luca Guarnera | Oliver Giudice (1), Luca Guarnera (1 and 2), Sebastiano Battiato (1
and 2) ((1) University of Catania, (2) iCTLab s.r.l. - Spin-off of University
of Catania) | Fighting deepfakes by detecting GAN DCT anomalies | null | Journal Imaging 2021, 7(8), 128 | 10.3390/jimaging7080128 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To properly counter the Deepfake phenomenon, new Deepfake detection
algorithms need to be designed; the misuse of this formidable A.I. technology
brings serious consequences to the private life of every person involved.
The state of the art proliferates with solutions that use deep neural networks to
detect fake multimedia content, but unfortunately these algorithms appear to
be neither generalizable nor explainable. However, traces left by Generative
Adversarial Network (GAN) engines during the creation of the Deepfakes can be
detected by analyzing ad-hoc frequencies. For this reason, in this paper we
propose a new pipeline able to detect the so-called GAN Specific Frequencies
(GSF) representing a unique fingerprint of the different generative
architectures. By employing Discrete Cosine Transform (DCT), anomalous
frequencies were detected. The β statistics inferred from the AC coefficients
distribution have been the key to recognizing GAN-engine generated data.
Robustness tests were also carried out in order to demonstrate the
effectiveness of the technique using different attacks on images such as JPEG
Compression, mirroring, rotation, scaling, and addition of random-sized rectangles.
Experiments demonstrated that the method is innovative, exceeds the state of
the art, and also gives many insights in terms of explainability.
| [
{
"created": "Sun, 24 Jan 2021 19:45:11 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 13:24:33 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Feb 2021 10:07:55 GMT",
"version": "v3"
},
{
"created": "Wed, 11 Aug 2021 08:41:03 GMT",
"version": "v4"
}
] | 2021-08-12 | [
[
"Giudice",
"Oliver",
"",
"1 and 2"
],
[
"Guarnera",
"Luca",
"",
"1 and 2"
],
[
"Battiato",
"Sebastiano",
"",
"1\n and 2"
]
] |
2101.09788 | Stephan Meylan | Stephan C. Meylan, Sathvik Nair, Thomas L. Griffiths | Evaluating Models of Robust Word Recognition with Serial Reproduction | null | Cognition Volume 210, May 2021, 104553 | 10.1016/j.cognition.2020.104553 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spoken communication occurs in a "noisy channel" characterized by high levels
of environmental noise, variability within and between speakers, and lexical
and syntactic ambiguity. Given these properties of the received linguistic
input, robust spoken word recognition -- and language processing more generally
-- relies heavily on listeners' prior knowledge to evaluate whether candidate
interpretations of that input are more or less likely. Here we compare several
broad-coverage probabilistic generative language models in their ability to
capture human linguistic expectations. Serial reproduction, an experimental
paradigm where spoken utterances are reproduced by successive participants
similar to the children's game of "Telephone," is used to elicit a sample that
reflects the linguistic expectations of English-speaking adults. When we
evaluate a suite of probabilistic generative language models against the
yielded chains of utterances, we find that those models that make use of
abstract representations of preceding linguistic context (i.e., phrase
structure) best predict the changes made by people in the course of serial
reproduction. A logistic regression model predicting which words in an
utterance are most likely to be lost or changed in the course of spoken
transmission corroborates this result. We interpret these findings in light of
research highlighting the interaction of memory-based constraints and
representations in language processing.
| [
{
"created": "Sun, 24 Jan 2021 20:16:12 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Meylan",
"Stephan C.",
""
],
[
"Nair",
"Sathvik",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
2101.09799 | Anshul Jindal | Anshul Jindal, Paul Staab, Jorge Cardoso, Michael Gerndt and Vladimir
Podolskiy | Online Memory Leak Detection in the Cloud-based Infrastructures | 12 pages | International Workshop on Artificial Intelligence for IT
Operations (AIOPS) 2020 | 10.1007/978-3-030-76352-7_21 | null | cs.DC cs.AI | http://creativecommons.org/licenses/by/4.0/ | A memory leak in an application deployed on the cloud can affect the
availability and reliability of the application. Therefore, to identify and
ultimately resolve it quickly is highly important. However, in the production
environment running on the cloud, memory leak detection is a challenge without
the knowledge of the application or its internal object allocation details.
This paper addresses this challenge of online detection of memory leaks in
cloud-based infrastructure without having any internal application knowledge by
introducing a novel machine learning based algorithm, Precog. This algorithm
uses only one metric, i.e., the memory utilization of the system on which the
application is deployed, to detect a memory leak. The developed algorithm's
accuracy was tested on manually labeled memory utilization data from 60 virtual
machines provided by our industry partner Huawei Munich Research Center,
and it was found that the proposed algorithm achieves an accuracy score of
85\% with a prediction time of less than half a second per virtual machine.
| [
{
"created": "Sun, 24 Jan 2021 20:48:45 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Jindal",
"Anshul",
""
],
[
"Staab",
"Paul",
""
],
[
"Cardoso",
"Jorge",
""
],
[
"Gerndt",
"Michael",
""
],
[
"Podolskiy",
"Vladimir",
""
]
] |
2101.09864 | Wang Bo | Tao Li and Wang Bo and Chunyu Hu and Hong Kang and Hanruo Liu and Kai
Wang and Huazhu Fu | Applications of Deep Learning in Fundus Images: A Review | null | Medical Image Analysis 2021 | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of fundus images for the early screening of eye diseases is of great
clinical importance. Due to its powerful performance, deep learning is becoming
more and more popular in related applications, such as lesion segmentation,
biomarkers segmentation, disease diagnosis and image synthesis. Therefore, it
is necessary to summarize the recent developments in deep learning for
fundus images in a review paper. In this review, we introduce 143 application
papers with a carefully designed hierarchy. Moreover, 33 publicly available
datasets are presented. Summaries and analyses are provided for each task.
Finally, limitations common to all tasks are revealed and possible solutions
are given. We will also release and regularly update the state-of-the-art
results and newly-released datasets at https://github.com/nkicsl/Fundus Review
to adapt to the rapid development of this field.
| [
{
"created": "Mon, 25 Jan 2021 02:39:40 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Li",
"Tao",
""
],
[
"Bo",
"Wang",
""
],
[
"Hu",
"Chunyu",
""
],
[
"Kang",
"Hong",
""
],
[
"Liu",
"Hanruo",
""
],
[
"Wang",
"Kai",
""
],
[
"Fu",
"Huazhu",
""
]
] |
2101.09903 | Weixin Jiang | Weixin Jiang, Eric Schwenker, Trevor Spreadbury, Nicola Ferrier, Maria
K.Y. Chan, Oliver Cossairt | A Two-stage Framework for Compound Figure Separation | null | IEEE International Conference on Image Processing (ICIP), 2021,
pp. 1204-1208 | 10.1109/ICIP42928.2021.9506171 | null | cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific literature contains large volumes of complex, unstructured figures
that are compound in nature (i.e. composed of multiple images, graphs, and
drawings). Separation of these compound figures is critical for information
retrieval from these figures. In this paper, we propose a new strategy for
compound figure separation, which decomposes the compound figures into
constituent subfigures while preserving the association between the subfigures
and their respective caption components. We propose a two-stage framework to
address the proposed compound figure separation problem. In particular, the
subfigure label detection module detects all subfigure labels in the first
stage. Then, in the subfigure detection module, the detected subfigure labels
help to detect the subfigures by optimizing the feature selection process and
providing the global layout information as extra features. Extensive
experiments are conducted to validate the effectiveness and superiority of the
proposed framework, which improves the detection precision by 9%.
| [
{
"created": "Mon, 25 Jan 2021 05:43:36 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Oct 2021 04:50:35 GMT",
"version": "v2"
}
] | 2021-10-08 | [
[
"Jiang",
"Weixin",
""
],
[
"Schwenker",
"Eric",
""
],
[
"Spreadbury",
"Trevor",
""
],
[
"Ferrier",
"Nicola",
""
],
[
"Chan",
"Maria K. Y.",
""
],
[
"Cossairt",
"Oliver",
""
]
] |
2101.09983 | Stanislav Frolov | Stanislav Frolov, Tobias Hinz, Federico Raue, J\"orn Hees, Andreas
Dengel | Adversarial Text-to-Image Synthesis: A Review | Published at Neural Networks Journal, available at
https://www.sciencedirect.com/science/article/pii/S0893608021002823 | Neural Networks, 2021 | 10.1016/j.neunet.2021.07.019 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the advent of generative adversarial networks, synthesizing images from
textual descriptions has recently become an active research area. It is a
flexible and intuitive way for conditional image generation with significant
progress in the last years regarding visual realism, diversity, and semantic
alignment. However, the field still faces several challenges that require
further research efforts such as enabling the generation of high-resolution
images with multiple objects, and developing suitable and reliable evaluation
metrics that correlate with human judgement. In this review, we contextualize
the state of the art of adversarial text-to-image synthesis models, their
development since their inception five years ago, and propose a taxonomy based
on the level of supervision. We critically examine current strategies to
evaluate text-to-image synthesis models, highlight shortcomings, and identify
new areas of research, ranging from the development of better datasets and
evaluation metrics to possible improvements in architectural design and model
training. This review complements previous surveys on generative adversarial
networks with a focus on text-to-image synthesis which we believe will help
researchers to further advance the field.
| [
{
"created": "Mon, 25 Jan 2021 09:58:36 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 07:30:08 GMT",
"version": "v2"
}
] | 2021-10-07 | [
[
"Frolov",
"Stanislav",
""
],
[
"Hinz",
"Tobias",
""
],
[
"Raue",
"Federico",
""
],
[
"Hees",
"Jörn",
""
],
[
"Dengel",
"Andreas",
""
]
] |
2101.09995 | Vinodkumar Prabhakaran | Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi,
Vinodkumar Prabhakaran | Re-imagining Algorithmic Fairness in India and Beyond | null | Proceedings of the 2021 conference on Fairness, Accountability,
and Transparency | null | null | cs.CY cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional algorithmic fairness is West-centric, as seen in its sub-groups,
values, and methods. In this paper, we de-center algorithmic fairness and
analyse AI power in India. Based on 36 qualitative interviews and a discourse
analysis of algorithmic deployments in India, we find that several assumptions
of algorithmic fairness are challenged. We find that in India, data is not
always reliable due to socio-economic factors, ML makers appear to follow
double standards, and AI evokes unquestioning aspiration. We contend that
localising model fairness alone can be window dressing in India, where the
distance between models and oppressed communities is large. Instead, we
re-imagine algorithmic fairness in India and provide a roadmap to
re-contextualise data and models, empower oppressed communities, and enable
Fair-ML ecosystems.
| [
{
"created": "Mon, 25 Jan 2021 10:20:57 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 02:30:20 GMT",
"version": "v2"
}
] | 2021-01-28 | [
[
"Sambasivan",
"Nithya",
""
],
[
"Arnesen",
"Erin",
""
],
[
"Hutchinson",
"Ben",
""
],
[
"Doshi",
"Tulsee",
""
],
[
"Prabhakaran",
"Vinodkumar",
""
]
] |
2101.10115 | Iosu Rodr\'iguez-Mart\'inez | Martin Pap\v{c}o, Iosu Rodr\'iguez-Mart\'inez, Javier Fumanal-Idocin,
Abdulrahman H. Altalhi and Humberto Bustince | A fusion method for multi-valued data | null | Information Fusion, Volume 71, 2021, Pages 1-10 | 10.1016/j.inffus.2021.01.001 | null | cs.LG cs.AI cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper we propose an extension of the notion of deviation-based
aggregation function tailored to aggregate multidimensional data. Our objective
is both to improve the results obtained by other methods that try to select the
best aggregation function for a particular set of data, such as penalty
functions, and to reduce the temporal complexity required by such approaches.
We discuss how this notion can be defined and present three illustrative
examples of the applicability of our new proposal in areas where temporal
constraints can be strict, such as image processing, deep learning and decision
making, obtaining favourable results in the process.
| [
{
"created": "Mon, 25 Jan 2021 14:27:21 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Papčo",
"Martin",
""
],
[
"Rodríguez-Martínez",
"Iosu",
""
],
[
"Fumanal-Idocin",
"Javier",
""
],
[
"Altalhi",
"Abdulrahman H.",
""
],
[
"Bustince",
"Humberto",
""
]
] |
2101.10203 | Eli Schwartz | Eli Schwartz, Alex Bronstein, Raja Giryes | ISP Distillation | null | IEEE Open Journal of Signal Processing 2023 | 10.1109/OJSP.2023.3239819 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Nowadays, many of the images captured are `observed' by machines only and not
by humans, e.g., in autonomous systems. High-level machine vision models, such
as object recognition or semantic segmentation, assume images are transformed
into some canonical image space by the camera Image Signal Processor (ISP).
However, the camera ISP is optimized for producing visually pleasing
images for human observers and not for machines. Therefore, one may spare the
ISP compute time and apply vision models directly to RAW images. Yet, it has
been shown that training such models directly on RAW images results in a
performance drop. To mitigate this drop, we use a RAW and RGB image pairs
dataset, which can be easily acquired with no human labeling. We then train a
model that is applied directly to the RAW data by using knowledge distillation
such that the model predictions for RAW images will be aligned with the
predictions of an off-the-shelf pre-trained model for processed RGB images. Our
experiments show that our performance on RAW images for object classification
and semantic segmentation is significantly better than models trained on
labeled RAW images. It also reasonably matches the predictions of a pre-trained
model on processed RGB images, while saving the ISP compute overhead.
| [
{
"created": "Mon, 25 Jan 2021 16:12:24 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Sep 2022 09:02:28 GMT",
"version": "v2"
},
{
"created": "Thu, 4 May 2023 14:27:49 GMT",
"version": "v3"
}
] | 2023-05-05 | [
[
"Schwartz",
"Eli",
""
],
[
"Bronstein",
"Alex",
""
],
[
"Giryes",
"Raja",
""
]
] |
2101.10215 | Yakup Kutlu | Enver Kaan Alpturk, Yakup Kutlu | Analysis of Relation between Motor Activity and Imaginary EEG Records | 6 pages, 4 figures, Journal of Artificial Intelligence with
Application | Journal of Artificial Intelligence with Application, 2020 | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Electroencephalography (EEG) signals are often used to learn about
brain structure and about what a person is thinking. EEG signals can be easily
affected by external factors. For this reason, various pre-processing steps
should be applied during their analysis. In this study, EEG signals recorded
from 109 subjects while opening and closing their right or left fists,
performing hand and foot movements, and imagining the same movements were used.
The relationship between motor activities and the imagination of those motor
activities was investigated. Algorithms with high performance rates were used
for feature extraction and selection, and classification was performed using
the nearest neighbour algorithm.
| [
{
"created": "Thu, 21 Jan 2021 05:02:05 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Alpturk",
"Enver Kaan",
""
],
[
"Kutlu",
"Yakup",
""
]
] |
2101.10241 | Qian Chen | Qian Chen, Ze Liu, Yi Zhang, Keren Fu, Qijun Zhao, Hongwei Du | RGB-D Salient Object Detection via 3D Convolutional Neural Networks | null | Proceedings of the AAAI Conference on Artificial Intelligence,
2021, 35(2), 1063-1071 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RGB-D salient object detection (SOD) recently has attracted increasing
research interest and many deep learning methods based on encoder-decoder
architectures have emerged. However, most existing RGB-D SOD models conduct
feature fusion either in the single encoder or the decoder stage, which hardly
guarantees sufficient cross-modal fusion ability. In this paper, we make the
first attempt in addressing RGB-D SOD through 3D convolutional neural networks.
The proposed model, named RD3D, aims at pre-fusion in the encoder stage and
in-depth fusion in the decoder stage to effectively promote the full
integration of RGB and depth streams. Specifically, RD3D first conducts
pre-fusion across RGB and depth modalities through an inflated 3D encoder, and
later provides in-depth feature fusion by designing a 3D decoder equipped with
rich back-projection paths (RBPP) for leveraging the extensive aggregation
ability of 3D convolutions. With such a progressive fusion strategy involving
both the encoder and decoder, effective and thorough interaction between the
two modalities can be exploited and boost the detection accuracy. Extensive
experiments on six widely used benchmark datasets demonstrate that RD3D
performs favorably against 14 state-of-the-art RGB-D SOD approaches in terms of
four key evaluation metrics. Our code will be made publicly available:
https://github.com/PPOLYpubki/RD3D.
| [
{
"created": "Mon, 25 Jan 2021 17:03:02 GMT",
"version": "v1"
}
] | 2021-08-19 | [
[
"Chen",
"Qian",
""
],
[
"Liu",
"Ze",
""
],
[
"Zhang",
"Yi",
""
],
[
"Fu",
"Keren",
""
],
[
"Zhao",
"Qijun",
""
],
[
"Du",
"Hongwei",
""
]
] |
2101.10248 | Jian-Qing Zheng | Jian-Qing Zheng, Ngee Han Lim, Bartlomiej W. Papiez | D-Net: Siamese based Network with Mutual Attention for Volume Alignment | this uploaded manuscript is another version of which published in:
International Workshop on Shape in Medical Imaging, Springer, 2020, pp. 73-84 | in: International Workshop on Shape in Medical Imaging, Springer,
2020, pp. 73-84 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alignment of contrast and non-contrast-enhanced imaging is essential for the
quantification of changes in several biomedical applications. In particular,
the extraction of cartilage shape from contrast-enhanced Computed Tomography
(CT) of tibiae requires accurate alignment of the bone, currently performed
manually. Existing deep learning-based methods for alignment require a common
template or are limited in rotation range. Therefore, we present a novel
network, D-net, to estimate arbitrary rotation and translation between 3D CT
scans that additionally does not require a prior standard template. D-net is an
extension to the branched Siamese encoder-decoder structure connected by new
mutual non-local links, which efficiently capture long-range connections of
similar features between two branches. The 3D supervised network is trained and
validated using preclinical CT scans of mouse tibiae with and without contrast
enhancement in cartilage. The presented results show a significant improvement
in the estimation of CT alignment, outperforming the current comparable
methods.
| [
{
"created": "Mon, 25 Jan 2021 17:24:16 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Zheng",
"Jian-Qing",
""
],
[
"Lim",
"Ngee Han",
""
],
[
"Papiez",
"Bartlomiej W.",
""
]
] |
2101.10263 | Yakup Kutlu | Gokhan Altan, Yakup Kutlu | Generative Autoencoder Kernels on Deep Learning for Brain Activity
Analysis | 12 pages, 2 figures, Natural and Engineering Sciences | Natural and Engineering Sciences, 2018 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep Learning (DL) is a two-step classification model that consists of feature
learning, which generates feature representations in unsupervised ways, and a
supervised learning stage at the last step of the model, which uses at least
two hidden layers of fully connected neurons as in artificial neural networks.
The optimization of the predefined classification parameters for the supervised
models eases reaching global optimality with exactly zero training error. The
autoencoder (AE) models are highly generalized forms of the unsupervised stages
of DL that define the output weights of the hidden neurons with various
representations. As an alternative to the conventional Extreme Learning Machine
(ELM) AE, the Hessenberg decomposition-based ELM autoencoder (HessELM-AE) is a
novel kernel that generates different representations of the input data within
the intended sizes of the models. The aim of the study is to analyze the
performance of the novel Deep AE kernel for clinical applicability on
electroencephalogram (EEG) recordings of stroke patients. The slow cortical
potential (SCP) training of stroke patients during eight neurofeedback sessions
was analyzed using the Hilbert-Huang Transform. The statistical features of
different frequency modulations were fed into the Deep ELM model with
generative AE kernels. The novel Deep ELM-AE kernels discriminated the brain
activity with high classification performance for positivity and negativity
tasks in stroke patients.
| [
{
"created": "Thu, 21 Jan 2021 08:19:47 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Altan",
"Gokhan",
""
],
[
"Kutlu",
"Yakup",
""
]
] |
2101.10265 | Yakup Kutlu | Gokhan Altan, Yakup Kutlu | Superiorities of Deep Extreme Learning Machines against Convolutional
Neural Networks | 7 pages, 2 figures, Natural and Engineering Sciences | Natural and Engineering Sciences, 2018 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep Learning (DL) is a machine learning procedure for artificial
intelligence that analyzes the input data in detail by increasing neuron sizes
and the number of hidden layers. DL has gained popularity with the common
improvements in graphical processing unit capabilities. Increasing the neuron
sizes at each layer and the number of hidden layers is directly related to the
computation time and training speed of the classifier models. The
classification parameters, including neuron weights, output weights, and biases,
need to be optimized to obtain an optimum model. Most of the popular DL
algorithms require long training times to optimize the parameters through
feature learning progress and back-propagated training procedures. Reducing
the training time and providing a real-time decision system are the basic focus
points of the novel approaches. The Deep Extreme Learning Machine (Deep ELM)
classifier model is one of the fastest and most effective ways to meet fast
classification problems. In this study, the Deep ELM model, its superiorities
and weaknesses, and the problems for which this classifier is more suitable
than Convolutional neural network based DL algorithms are discussed.
| [
{
"created": "Thu, 21 Jan 2021 08:22:18 GMT",
"version": "v1"
}
] | 2021-01-26 | [
[
"Altan",
"Gokhan",
""
],
[
"Kutlu",
"Yakup",
""
]
] |
2101.10292 | Xiaoqian Wu | Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Xijie Huang, Liang Xu, Cewu Lu | Transferable Interactiveness Knowledge for Human-Object Interaction
Detection | TPAMI version of our CVPR2019 paper with a new benchmark
PaStaNet-HOI. Code:
https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network. arXiv
admin note: substantial text overlap with arXiv:1811.08264 | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2021 | 10.1109/TPAMI.2021.3054048 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-Object Interaction (HOI) detection is an important problem to
understand how humans interact with objects. In this paper, we explore
interactiveness knowledge which indicates whether a human and an object
interact with each other or not. We found that interactiveness knowledge can be
learned across HOI datasets and bridge the gap between diverse HOI category
settings. Our core idea is to exploit an interactiveness network to learn the
general interactiveness knowledge from multiple HOI datasets and perform
Non-Interaction Suppression (NIS) before HOI classification in inference. On
account of the generalization ability of interactiveness, the interactiveness
network is a transferable knowledge learner and can be combined with any HOI
detection model to achieve desirable results. We utilize the human instance
and body part features together to learn the interactiveness in hierarchical
paradigm, i.e., instance-level and body part-level interactivenesses.
Thereafter, a consistency task is proposed to guide the learning and extract
deeper interactive visual clues. We extensively evaluate the proposed method on
HICO-DET, V-COCO, and a newly constructed PaStaNet-HOI dataset. With the
learned interactiveness, our method outperforms state-of-the-art HOI detection
methods, verifying its efficacy and flexibility. Code is available at
https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network.
| [
{
"created": "Mon, 25 Jan 2021 18:21:07 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Feb 2021 04:21:24 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Mar 2021 10:04:29 GMT",
"version": "v3"
}
] | 2021-04-13 | [
[
"Li",
"Yong-Lu",
""
],
[
"Liu",
"Xinpeng",
""
],
[
"Wu",
"Xiaoqian",
""
],
[
"Huang",
"Xijie",
""
],
[
"Xu",
"Liang",
""
],
[
"Lu",
"Cewu",
""
]
] |
2101.10371 | Juan Pablo Rodr\'iguez G\'omez | J.P. Rodr\'iguez-G\'omez, R. Tapia, J. L. Paneque, P. Grau, A. G\'omez
Egu\'iluz, J.R. Mart\'inez-de Dios and A. Ollero | The GRIFFIN Perception Dataset: Bridging the Gap Between Flapping-Wing
Flight and Robotic Perception | 8 pages, 22 figures, Video: "this https URL
https://www.youtube.com/watch?v=ymCRnlWxX24&t=35s" | IEEE Robotics and Automation Letters (RA-L), 2021 | 10.1109/LRA.2021.3056348 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of automatic perception systems and techniques for
bio-inspired flapping-wing robots is severely hampered by the high technical
complexity of these platforms and the installation of onboard sensors and
electronics. Besides, flapping-wing robot perception suffers from high
vibration levels and abrupt movements during flight, which cause motion blur
and strong changes in lighting conditions. This paper presents a perception
dataset for bird-scale flapping-wing robots as a tool to help alleviate the
aforementioned problems. The presented data include measurements from onboard
sensors widely used in aerial robotics and suitable to deal with the perception
challenges of flapping-wing robots, such as an event camera, a conventional
camera, and two Inertial Measurement Units (IMUs), as well as ground truth
measurements from a laser tracker or a motion capture system. A total of 21
datasets of different types of flights were collected in three different
scenarios (one indoor and two outdoor). To the best of the authors' knowledge
this is the first dataset for flapping-wing robot perception.
| [
{
"created": "Mon, 25 Jan 2021 19:42:13 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2021 18:31:48 GMT",
"version": "v2"
}
] | 2021-02-19 | [
[
"Rodríguez-Gómez",
"J. P.",
""
],
[
"Tapia",
"R.",
""
],
[
"Paneque",
"J. L.",
""
],
[
"Grau",
"P.",
""
],
[
"Eguíluz",
"A. Gómez",
""
],
[
"Dios",
"J. R. Martínez-de",
""
],
[
"Ollero",
"A.",
""
]
] |
2101.10435 | Maria Leonor Pacheco | Manuel Widmoser, Maria Leonor Pacheco, Jean Honorio, Dan Goldwasser | Randomized Deep Structured Prediction for Discourse-Level Processing | Accepted to EACL 2021 | Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, 2021 | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expressive text encoders such as RNNs and Transformer Networks have been at
the center of NLP models in recent work. Most of the effort has focused on
sentence-level tasks, capturing the dependencies between words in a single
sentence, or pairs of sentences. However, certain tasks, such as argumentation
mining, require accounting for longer texts and complicated structural
dependencies between them. Deep structured prediction is a general framework to
combine the complementary strengths of expressive neural encoders and
structured inference for highly structured domains. Nevertheless, when the need
arises to go beyond sentences, most work relies on combining the output scores
of independently trained classifiers. One of the main reasons for this is that
constrained inference comes at a high computational cost. In this paper, we
explore the use of randomized inference to alleviate this concern and show that
we can efficiently leverage deep structured prediction and expressive neural
encoders for a set of tasks involving complicated argumentative structures.
| [
{
"created": "Mon, 25 Jan 2021 21:49:32 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Widmoser",
"Manuel",
""
],
[
"Pacheco",
"Maria Leonor",
""
],
[
"Honorio",
"Jean",
""
],
[
"Goldwasser",
"Dan",
""
]
] |
2101.10445 | Rateb Jabbar Mr. | Safa Ayadi, Ahmed ben said, Rateb Jabbar, Chafik Aloulou, Achraf
Chabbouh, and Ahmed Ben Achballah | Dairy Cow rumination detection: A deep learning approach | 17 pages, 6 figures, 4 tables | International Workshop on Distributed Computing for Emerging Smart
Networks. Springer, Cham, 2020 | 10.1007/978-3-030-65810-6_7 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Cattle activity is an essential index for monitoring health and welfare of
the ruminants. Thus, changes in the livestock behavior are a critical indicator
for early detection and prevention of several diseases. Rumination behavior is
a significant variable for tracking the development and yield of animal
husbandry. Therefore, various monitoring methods and measurement equipment have
been used to assess cattle behavior. However, these modern attached devices are
invasive, stressful and uncomfortable for the cattle and can negatively
influence the welfare and diurnal behavior of the animal. Multiple research
efforts have addressed the problem of rumination detection by adopting new
methods relying on visual features. However, they only use a few postures of
the dairy cow to recognize the rumination or feeding behavior. In this study,
we introduce an innovative monitoring method using Convolutional Neural Network
(CNN)-based deep learning models. The classification process is conducted under
two main labels: ruminating and other, using all cow postures captured by the
monitoring camera. Our proposed system is simple and easy to use, and is able
to capture long-term dynamics using a compact representation of a video in a
single 2D image. This method proved efficient in recognizing the rumination
behavior with 95%, 98% and 98% average accuracy, recall and precision,
respectively.
| [
{
"created": "Thu, 7 Jan 2021 07:33:32 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Ayadi",
"Safa",
""
],
[
"said",
"Ahmed ben",
""
],
[
"Jabbar",
"Rateb",
""
],
[
"Aloulou",
"Chafik",
""
],
[
"Chabbouh",
"Achraf",
""
],
[
"Achballah",
"Ahmed Ben",
""
]
] |
2101.10480 | EPTCS | Spencer Breiner (National Institute of Standards and Technology), John
S. Nolan (University of Maryland) | Symmetric Monoidal Categories with Attributes | In Proceedings ACT 2020, arXiv:2101.07888 | EPTCS 333, 2021, pp. 33-48 | 10.4204/EPTCS.333.3 | null | math.CT cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When designing plans in engineering, it is often necessary to consider
attributes associated to objects, e.g. the location of a robot. Our aim in this
paper is to incorporate attributes into existing categorical formalisms for
planning, namely those based on symmetric monoidal categories and string
diagrams. To accomplish this, we define a notion of a "symmetric monoidal
category with attributes." This is a symmetric monoidal category in which
objects are equipped with retrievable information and where the interactions
between objects and information are governed by an "attribute structure." We
discuss examples and semantics of such categories in the context of robotics to
illustrate our definition.
| [
{
"created": "Tue, 26 Jan 2021 00:01:45 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Breiner",
"Spencer",
"",
"National Institute of Standards and Technology"
],
[
"Nolan",
"John S.",
"",
"University of Maryland"
]
] |
2101.10524 | Abhinav Arora | Arash Einolghozati, Abhinav Arora, Lorena Sainz-Maza Lecanda, Anuj
Kumar, Sonal Gupta | El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic
Parsing | null | EACL 2021 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Being able to parse code-switched (CS) utterances, such as Spanish+English or
Hindi+English, is essential to democratize task-oriented semantic parsing
systems for certain locales. In this work, we focus on Spanglish
(Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances
alongside their semantic parses. We examine the CS generalizability of various
Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language
models when data for only one language is present. As such, we focus on
improving the pre-trained models for the case when only English corpus
alongside either zero or a few CS training instances are available. We propose
two data augmentation methods for the zero-shot and the few-shot settings:
fine-tune using translate-and-align and augment using a generation model
followed by match-and-filter. Combining the few-shot setting with the above
improvements decreases the initial 30-point accuracy gap between the zero-shot
and the full-data settings by two thirds.
| [
{
"created": "Tue, 26 Jan 2021 02:40:44 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 04:28:49 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Jan 2021 08:09:08 GMT",
"version": "v3"
}
] | 2021-01-29 | [
[
"Einolghozati",
"Arash",
""
],
[
"Arora",
"Abhinav",
""
],
[
"Lecanda",
"Lorena Sainz-Maza",
""
],
[
"Kumar",
"Anuj",
""
],
[
"Gupta",
"Sonal",
""
]
] |
2101.10532 | Muhammad Ahmad | Muhammad Ahmad, Sidrah Shabbir, Rana Aamir Raza, Manuel Mazzara,
Salvatore Distefano, Adil Mehmood Khan | Hyperspectral Image Classification: Artifacts of Dimension Reduction on
Hybrid CNN | 9 pages, 9 figures | 2021 | null | https://doi.org/10.1016/j.ijleo.2021.167757 | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Convolutional Neural Networks (CNN) have been extensively studied for
Hyperspectral Image Classification (HSIC); more specifically, 2D and 3D CNN
models have proved highly efficient in exploiting the spatial and spectral
information of Hyperspectral Images. However, 2D CNN only considers the spatial
information and ignores the spectral information whereas 3D CNN jointly
exploits spatial-spectral information at a high computational cost. Therefore,
this work proposed a lightweight CNN (3D followed by 2D-CNN) model which
significantly reduces the computational cost by distributing spatial-spectral
feature extraction across a lighter model alongside a preprocessing that has
been carried out to improve the classification results. Five benchmark
Hyperspectral datasets (i.e., SalinasA, Salinas, Indian Pines, Pavia
University, Pavia Center, and Botswana) are used for experimental evaluation.
The experimental results show that the proposed pipeline outperformed in terms
of generalization performance, statistical significance, and computational
complexity, as compared to the state-of-the-art 2D/3D CNN models except
commonly used computationally expensive design choices.
| [
{
"created": "Mon, 25 Jan 2021 18:43:57 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Ahmad",
"Muhammad",
""
],
[
"Shabbir",
"Sidrah",
""
],
[
"Raza",
"Rana Aamir",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Distefano",
"Salvatore",
""
],
[
"Khan",
"Adil Mehmood",
""
]
] |
2101.10539 | Mohammed Mustafa Abdelgwad | Mohammed M.Abdelgwad, Taysir Hassan A Soliman, Ahmed I.Taloba, Mohamed
Fawzy Farghaly | Arabic aspect based sentiment analysis using bidirectional GRU based
models | null | Journal of King Saud University - Computer and Information
Sciences (2021) | 10.1016/j.jksuci.2021.08.030 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aspect-based Sentiment analysis (ABSA) accomplishes a fine-grained analysis
that defines the aspects of a given document or sentence and the sentiments
conveyed regarding each aspect. This level of analysis is the most detailed
version that is capable of exploring the nuanced viewpoints of the reviews. The
bulk of study in ABSA focuses on English with very little work available in
Arabic. Most previous work in Arabic has been based on regular methods of
machine learning that mainly depend on a group of scarce resources and tools for
analyzing and processing Arabic content such as lexicons, but the lack of those
resources presents another challenge. In order to address these challenges,
Deep Learning (DL)-based methods are proposed using two models based on Gated
Recurrent Units (GRU) neural networks for ABSA. The first is a DL model that
takes advantage of word and character representations by combining
bidirectional GRU, Convolutional Neural Network (CNN), and Conditional Random
Field (CRF) making up the (BGRU-CNN-CRF) model to extract the main opinionated
aspects (OTE). The second is an interactive attention network based on
bidirectional GRU (IAN-BGRU) to identify sentiment polarity toward extracted
aspects. We evaluated our models using the benchmarked Arabic hotel reviews
dataset. The results indicate that the proposed methods are better than
baseline research on both tasks, with a 39.7% enhancement in F1-score for
opinion target extraction (T2) and 7.58% in accuracy for aspect-based sentiment
polarity classification (T3), achieving an F1 score of 70.67% for T2 and an
accuracy of 83.98% for T3.
| [
{
"created": "Sat, 23 Jan 2021 02:54:30 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2021 05:01:16 GMT",
"version": "v2"
},
{
"created": "Sun, 7 Mar 2021 10:32:15 GMT",
"version": "v3"
},
{
"created": "Wed, 6 Oct 2021 23:31:30 GMT",
"version": "v4"
}
] | 2021-10-08 | [
[
"Abdelgwad",
"Mohammed M.",
""
],
[
"Soliman",
"Taysir Hassan A",
""
],
[
"Taloba",
"Ahmed I.",
""
],
[
"Farghaly",
"Mohamed Fawzy",
""
]
] |
2101.10556 | Wenliang Qian | Wenliang Qian, Yang Xu, Wangmeng Zuo, Hui Li | Self Sparse Generative Adversarial Networks | null | CAAI Artificial Intelligence Research. 2022, 1 (1): 68-78 | 10.26599/AIR.2022.9150005 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generative Adversarial Networks (GANs) are an unsupervised generative model
that learns data distribution through adversarial training. However, recent
experiments indicated that GANs are difficult to train due to the requirement
of optimization in the high dimensional parameter space and the zero gradient
problem. In this work, we propose a Self Sparse Generative Adversarial Network
(Self-Sparse GAN) that reduces the parameter space and alleviates the zero
gradient problem. In the Self-Sparse GAN, we design a Self-Adaptive Sparse
Transform Module (SASTM) comprising the sparsity decomposition and feature-map
recombination, which can be applied on multi-channel feature maps to obtain
sparse feature maps. The key idea of Self-Sparse GAN is to add the SASTM
following every deconvolution layer in the generator, which can adaptively
reduce the parameter space by utilizing the sparsity in multi-channel feature
maps. We theoretically prove that the SASTM can not only reduce the search
space of the convolution kernel weight of the generator but also alleviate the
zero gradient problem by maintaining meaningful features in the Batch
Normalization layer and driving the weight of deconvolution layers away from
being negative. The experimental results show that our method achieves the best
FID scores for image generation compared with WGAN-GP on MNIST, Fashion-MNIST,
CIFAR-10, STL-10, mini-ImageNet, CELEBA-HQ, and LSUN bedrooms, and the relative
decrease of FID is 4.76% ~ 21.84%.
| [
{
"created": "Tue, 26 Jan 2021 04:49:12 GMT",
"version": "v1"
}
] | 2022-10-11 | [
[
"Qian",
"Wenliang",
""
],
[
"Xu",
"Yang",
""
],
[
"Zuo",
"Wangmeng",
""
],
[
"Li",
"Hui",
""
]
] |
2101.10589 | Mehul S. Raval | Snehal Rajput, Rupal Agravat, Mohendra Roy, Mehul S Raval | Glioblastoma Multiforme Patient Survival Prediction | 10 pages, 9 figures | 2021 International Conference on Medical Imaging and
Computer-Aided Diagnosis (MICAD 2021) | null | null | eess.IV cs.CV stat.AP | http://creativecommons.org/licenses/by-sa/4.0/ | Glioblastoma Multiforme is a very aggressive type of brain tumor. Due to
spatial and temporal intra-tissue inhomogeneity, location and the extent of the
cancer tissue, it is difficult to detect and dissect the tumor regions. In this
paper, we propose survival prognosis models using four regressors operating on
handcrafted image-based and radiomics features. We hypothesize that the
radiomics shape features have the highest correlation with survival prediction.
The proposed approaches were assessed on the Brain Tumor Segmentation
(BraTS-2020) challenge dataset. The highest accuracy of image features with
random forest regressor approach was 51.5\% for the training and 51.7\% for the
validation dataset. The gradient boosting regressor with shape features gave an
accuracy of 91.5\% and 62.1\% on training and validation datasets respectively.
It is better than the BraTS 2020 survival prediction challenge winners on the
training and validation datasets. Our work shows that handcrafted features
exhibit a strong correlation with survival prediction. The consensus based
regressor with gradient boosting and radiomics shape features is the best
combination for survival prediction.
| [
{
"created": "Tue, 26 Jan 2021 06:47:14 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Rajput",
"Snehal",
""
],
[
"Agravat",
"Rupal",
""
],
[
"Roy",
"Mohendra",
""
],
[
"Raval",
"Mehul S",
""
]
] |
2101.10599 | Mehul S. Raval | Rupal Agravat, Mehul S Raval | A Survey and Analysis on Automated Glioma Brain Tumor Segmentation and
Overall Patient Survival Prediction | 40 pages, 19 figures, 11 Tables | Archives of Computational Methods in Engineering, Springer, 2021 | 10.1007/s11831-021-09559-w | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Glioma is the most deadly brain tumor with high mortality. Treatment planning
by human experts depends on the proper diagnosis of physical symptoms along
with Magnetic Resonance (MR) image analysis. The high variability of a brain
tumor in terms of size, shape, and location, together with the high volume of
MR images, makes the analysis time-consuming. Automatic segmentation methods achieve a reduction in
time with excellent reproducible results. The article aims to survey the
advancement of automated methods for Glioma brain tumor segmentation. It is
also essential to make an objective evaluation of various models based on the
benchmark. Therefore, the 2012 - 2019 BraTS challenge databases are used to evaluate
state-of-the-art methods. The complexity of tasks under the challenge has grown
from segmentation (Task1) to overall survival prediction (Task 2) to
uncertainty prediction for classification (Task 3). The paper covers the
complete gamut of brain tumor segmentation using handcrafted features to deep
neural network models for Task 1. The aim is to showcase a complete change of
trends in automated brain tumor models. The paper also covers end-to-end joint
models involving brain tumor segmentation and overall survival prediction. All
the methods are probed, and parameters that affect performance are tabulated
and analyzed.
| [
{
"created": "Tue, 26 Jan 2021 07:22:52 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Mar 2021 15:34:56 GMT",
"version": "v2"
}
] | 2021-03-09 | [
[
"Agravat",
"Rupal",
""
],
[
"Raval",
"Mehul S",
""
]
] |
2101.10629 | Gennaro Vessio Dr. | Eufemia Lella, Gennaro Vessio | Ensembling complex network 'perspectives' for mild cognitive impairment
detection with artificial neural networks | null | Pattern Recognition Letters, Volume 136, August 2020, Pages
168-174 | 10.1016/j.patrec.2020.06.001 | null | cs.CV eess.IV q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we propose a novel method for mild cognitive impairment
detection based on jointly exploiting the complex network and the neural
network paradigm. In particular, the method is based on ensembling different
brain structural "perspectives" with artificial neural networks. On one hand,
these perspectives are obtained with complex network measures tailored to
describe the altered brain connectivity. In turn, the brain reconstruction is
obtained by combining diffusion-weighted imaging (DWI) data to tractography
algorithms. On the other hand, artificial neural networks provide a means to
learn a mapping from topological properties of the brain to the presence or
absence of cognitive decline. The effectiveness of the method is studied on a
well-known benchmark data set in order to evaluate if it can provide an
automatic tool to support the early disease diagnosis. Also, the effects of
balancing issues are investigated to further assess the reliability of the
complex network approach to DWI data.
| [
{
"created": "Tue, 26 Jan 2021 08:38:11 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Lella",
"Eufemia",
""
],
[
"Vessio",
"Gennaro",
""
]
] |
2101.10710 | Mohammad Naser Sabet Jahromi | Satya M. Muddamsetty, Mohammad N. S. Jahromi, Andreea E. Ciontos,
Laura M. Fenoy, Thomas B. Moeslund | Visual explanation of black-box model: Similarity Difference and
Uniqueness (SIDU) method | null | Pattern Recognition 127 (2022): 108604 | null | null | cs.CV cs.AI cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable Artificial Intelligence (XAI) has in recent years become a
well-suited framework to generate human understandable explanations of
"black-box" models. In this paper, a novel XAI visual explanation algorithm
known as the Similarity Difference and Uniqueness (SIDU) method that can
effectively localize entire object regions responsible for prediction is
presented in full detail. The SIDU algorithm robustness and effectiveness is
analyzed through various computational and human subject experiments. In
particular, the SIDU algorithm is assessed using three different types of
evaluations (Application, Human and Functionally-Grounded) to demonstrate its
superior performance. The robustness of SIDU is further studied in the presence
of adversarial attack on "black-box" models to better understand its
performance. Our code is available at:
https://github.com/satyamahesh84/SIDU_XAI_CODE.
| [
{
"created": "Tue, 26 Jan 2021 11:13:50 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Jul 2022 18:07:56 GMT",
"version": "v2"
}
] | 2022-07-12 | [
[
"Muddamsetty",
"Satya M.",
""
],
[
"Jahromi",
"Mohammad N. S.",
""
],
[
"Ciontos",
"Andreea E.",
""
],
[
"Fenoy",
"Laura M.",
""
],
[
"Moeslund",
"Thomas B.",
""
]
] |
2101.10747 | Mazen Abdelfattah Mr | Mazen Abdelfattah, Kaiwen Yuan, Z. Jane Wang, Rabab Ward | Towards Universal Physical Attacks On Cascaded Camera-Lidar 3D Object
Detection Models | null | 2021 IEEE International Conference on Image Processing (ICIP) | 10.1109/ICIP42928.2021.9506016 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | We propose a universal and physically realizable adversarial attack on a
cascaded multi-modal deep learning network (DNN), in the context of
self-driving cars. DNNs have achieved high performance in 3D object detection,
but they are known to be vulnerable to adversarial attacks. These attacks have
been heavily investigated in the RGB image domain and more recently in the
point cloud domain, but rarely in both domains simultaneously - a gap to be
filled in this paper. We use a single 3D mesh and differentiable rendering to
explore how perturbing the mesh's geometry and texture can reduce the
robustness of DNNs to adversarial attacks. We attack a prominent cascaded
multi-modal DNN, the Frustum-Pointnet model. Using the popular KITTI benchmark,
we showed that the proposed universal multi-modal attack was successful in
reducing the model's ability to detect a car by nearly 73%. This work can aid
in the understanding of what the cascaded RGB-point cloud DNN learns and its
vulnerability to adversarial attacks.
| [
{
"created": "Tue, 26 Jan 2021 12:40:34 GMT",
"version": "v1"
},
{
"created": "Sun, 31 Jan 2021 18:40:27 GMT",
"version": "v2"
}
] | 2021-09-30 | [
[
"Abdelfattah",
"Mazen",
""
],
[
"Yuan",
"Kaiwen",
""
],
[
"Wang",
"Z. Jane",
""
],
[
"Ward",
"Rabab",
""
]
] |
2101.10759 | Xutan Peng | Xutan Peng, Yi Zheng, Chenghua Lin, Advaith Siddharthan | Summarising Historical Text in Modern Languages | To appear at EACL 2021 | EACL 2021 | 10.18653/v1/2021.eacl-main.273 | null | cs.CL cs.AI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the task of historical text summarisation, where documents in
historical forms of a language are summarised in the corresponding modern
language. This is a fundamentally important routine to historians and digital
humanities researchers but has never been automated. We compile a high-quality
gold-standard text summarisation dataset, which consists of historical German
and Chinese news from hundreds of years ago summarised in modern German or
Chinese. Based on cross-lingual transfer learning techniques, we propose a
summarisation model that can be trained even with no cross-lingual (historical
to modern) parallel data, and further benchmark it against state-of-the-art
algorithms. We report automatic and human evaluations that distinguish the
historic to modern language summarisation task from standard cross-lingual
summarisation (i.e., modern to modern language), highlight the distinctness and
value of our dataset, and demonstrate that our transfer learning approach
outperforms standard cross-lingual benchmarks on this task.
| [
{
"created": "Tue, 26 Jan 2021 13:00:07 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 04:17:02 GMT",
"version": "v2"
}
] | 2022-01-25 | [
[
"Peng",
"Xutan",
""
],
[
"Zheng",
"Yi",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Siddharthan",
"Advaith",
""
]
] |
2101.10760 | Xiangyu Xu | Xiangyu Xu, Muchen Li, Wenxiu Sun, Ming-Hsuan Yang | Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and
Video Denoising | Project page: https://sites.google.com/view/xiangyuxu/denoise_stpan.
arXiv admin note: substantial text overlap with arXiv:1904.06903 | IEEE Transactions on Image Processing 29 (2020): 7153-7165 | 10.1109/TIP.2020.2999209 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Existing denoising methods typically restore clear results by aggregating
pixels from the noisy input. Instead of relying on hand-crafted aggregation
schemes, we propose to explicitly learn this process with deep neural networks.
We present a spatial pixel aggregation network and learn the pixel sampling and
averaging strategies for image denoising. The proposed model naturally adapts
to image structures and can effectively improve the denoised results.
Furthermore, we develop a spatio-temporal pixel aggregation network for video
denoising to efficiently sample pixels across the spatio-temporal space. Our
method is able to solve the misalignment issues caused by large motion in
dynamic scenes. In addition, we introduce a new regularization term for
effectively training the proposed video denoising model. We present extensive
analysis of the proposed method and demonstrate that our model performs
favorably against the state-of-the-art image and video denoising approaches on
both synthetic and real-world data.
| [
{
"created": "Tue, 26 Jan 2021 13:00:46 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Xu",
"Xiangyu",
""
],
[
"Li",
"Muchen",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] |
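As a rough illustration of the learned pixel aggregation idea in the abstract above, the PyTorch sketch below predicts per-pixel softmax weights over a fixed k x k neighbourhood and averages the noisy input with them. It omits the paper's learned sampling offsets and the spatio-temporal (video) variant, so treat it as a simplified sketch rather than the proposed model.

# Minimal sketch of spatial pixel aggregation for denoising: a small CNN
# predicts per-pixel weights over a k x k neighbourhood; the output is the
# weighted average of the noisy neighbours.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAggregation(nn.Module):
    def __init__(self, channels=3, k=5):
        super().__init__()
        self.k = k
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k * k, 3, padding=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # (B, k*k, H, W): softmax-normalised aggregation weights per output pixel
        weights = F.softmax(self.weight_net(x), dim=1)
        # Gather the k*k neighbourhood of every pixel: (B, C, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2).view(b, c, self.k * self.k, h, w)
        # Weighted average over the neighbourhood
        return (patches * weights.unsqueeze(1)).sum(dim=2)

noisy = torch.rand(1, 3, 64, 64)
denoised = SpatialAggregation()(noisy)
print(denoised.shape)  # torch.Size([1, 3, 64, 64])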
2101.10775 | Leonardo Parisi | Andrea Cavagna, Xiao Feng, Stefania Melillo, Leonardo Parisi, Lorena
Postiglione, Pablo Villegas | CoMo: A novel co-moving 3D camera system | null | IEEE Trans. Instrum. Meas. 70: 1-16 (2021) | 10.1109/TIM.2021.3074388 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the theoretical interest in reconstructing long 3D trajectories
of individual birds in large flocks, we developed CoMo, a co-moving camera
system of two synchronized high speed cameras coupled with rotational stages,
which allow us to dynamically follow the motion of a target flock. With the
rotation of the cameras we overcome the limitations of standard static systems
that restrict the duration of the collected data to the short interval of time
in which targets are in the cameras' common field of view, but at the same time
we change the external parameters of the system in time, which then have to be
calibrated frame-by-frame. We address the calibration of the external
parameters by measuring the position of the cameras and their three angles of yaw,
pitch and roll in the system "home" configuration (rotational stage at an angle
equal to 0deg) and combining this static information with the time-dependent
rotation due to the stages. We evaluate the robustness and accuracy of the
system by comparing reconstructed and measured 3D distances in what we call 3D
tests, which show a relative error of the order of 1%. The novelty of the work
presented in this paper is not only on the system itself, but also on the
approach we use in the tests, which we show to be a very powerful tool in
detecting and fixing calibration inaccuracies and that, for this reason, may be
relevant for a broad audience.
| [
{
"created": "Tue, 26 Jan 2021 13:29:13 GMT",
"version": "v1"
}
] | 2022-09-16 | [
[
"Cavagna",
"Andrea",
""
],
[
"Feng",
"Xiao",
""
],
[
"Melillo",
"Stefania",
""
],
[
"Parisi",
"Leonardo",
""
],
[
"Postiglione",
"Lorena",
""
],
[
"Villegas",
"Pablo",
""
]
] |
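A small numpy sketch of the calibration logic described above: the frame-dependent camera rotation is obtained by composing the static "home" orientation (yaw, pitch and roll measured at a stage angle of 0deg) with the time-dependent rotation read from the rotational stage. The axis conventions and the stage's rotation axis are assumptions made for illustration, not taken from the paper.

# Sketch: compose a camera's extrinsic rotation from its static "home" angles
# and the time-dependent rotational-stage angle (axis conventions are assumed).
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def camera_rotation(yaw, pitch, roll, stage_angle):
    """Extrinsic rotation for one frame.

    yaw/pitch/roll: static home-configuration angles (radians), calibrated once.
    stage_angle: rotational-stage reading for this frame (radians), assumed to
    rotate about the vertical (yaw) axis.
    """
    r_home = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)   # static part
    r_stage = rot_z(stage_angle)                       # dynamic part, read every frame
    return r_stage @ r_home

R = camera_rotation(np.deg2rad(10), np.deg2rad(-2), np.deg2rad(0.5), np.deg2rad(35))
print(np.allclose(R @ R.T, np.eye(3)))  # True: a valid rotation matrix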
2101.10813 | Elizabeth J Carter | Stephanie Rosenthal and Elizabeth J. Carter | Impact of Explanation on Trust of a Novel Mobile Robot | 9 pages, 3 figures | Proceedings of the AAAI Fall Symposium Series - Artificial
Intelligence for Human-Robot Interaction: Trust & Explainability in Artificial
Intelligence for Human-Robot Interaction AI-HRI (AI-HRI '20), November 13-14,
2020, Washington DC, USA | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One challenge with introducing robots into novel environments is misalignment
between supervisor expectations and reality, which can greatly affect a user's
trust and continued use of the robot. We performed an experiment to test
whether the presence of an explanation of expected robot behavior affected a
supervisor's trust in an autonomous robot. We measured trust both subjectively
through surveys and objectively through a dual-task experiment design to
capture supervisors' neglect tolerance (i.e., their willingness to perform
their own task while the robot is acting autonomously). Our objective results
show that explanations can help counteract the novelty effect of seeing a new
robot perform in an unknown environment. Participants who received an
explanation of the robot's behavior were more likely to focus on their own task
at the risk of neglecting their robot supervision task during the first trials
of the robot's behavior compared to those who did not receive an explanation.
However, this effect diminished after seeing multiple trials, and participants
who received explanations were equally trusting of the robot's behavior as
those who did not receive explanations. Interestingly, participants were not
able to identify their own changes in trust through their survey responses,
demonstrating that the dual-task design measured subtler changes in a
supervisor's trust.
| [
{
"created": "Tue, 26 Jan 2021 14:36:26 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Rosenthal",
"Stephanie",
""
],
[
"Carter",
"Elizabeth J.",
""
]
] |
2101.10831 | Subhasish Goswami | Mriganka Nath and Subhasish Goswami | Toxicity Detection in Drug Candidates using Simplified Molecular-Input
Line-Entry System | 4 Pages, 4 Figures, Published with International Journal of Computer
Applications (IJCA) | International Journal of Computer Applications 175(21):1-4,
September 2020 | 10.5120/ijca2020920695 | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The need for analysis of toxicity in new drug candidates and the requirement
of doing it fast have drawn the attention of scientists to the use of
artificial intelligence tools to examine toxicity levels and to develop models
to a degree where they can be used commercially to measure toxicity levels
efficiently in upcoming drugs. Artificial Intelligence based models can be used
to predict the toxic nature of a chemical using Quantitative Structure Activity
Relationship techniques. Convolutional Neural Network models have demonstrated
strong results in the qualitative analysis of chemicals for determining
toxicity. This paper studies the use of the Simplified Molecular
Input Line-Entry System (SMILES) as input for developing Long Short-Term
Memory (LSTM) based models that examine the toxicity of a molecule, and
the degree to which this need can be met in practice, along with its
future outlook for real-world applications.
| [
{
"created": "Thu, 21 Jan 2021 07:02:21 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Nath",
"Mriganka",
""
],
[
"Goswami",
"Subhasish",
""
]
] |
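To illustrate the kind of model the abstract above describes, here is a minimal character-level LSTM classifier over SMILES strings in PyTorch. The two example molecules, the labels and the hyper-parameters are placeholders; a real study would train on a toxicity dataset such as Tox21 and tune the architecture.

# Minimal sketch: character-level LSTM mapping a SMILES string to a toxicity
# probability. Vocabulary, data and hyper-parameters are illustrative only.
import torch
import torch.nn as nn

smiles = ["CCO", "c1ccccc1N(=O)=O"]            # placeholder molecules
labels = torch.tensor([0.0, 1.0])              # placeholder non-toxic / toxic labels

vocab = {ch: i + 1 for i, ch in enumerate(sorted(set("".join(smiles))))}  # 0 = padding

def encode(s, max_len=32):
    ids = [vocab[ch] for ch in s][:max_len]
    return ids + [0] * (max_len - len(ids))

x = torch.tensor([encode(s) for s in smiles])  # (batch, max_len) token ids

class SmilesLSTM(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.emb(tokens))   # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)       # toxicity logit per molecule

model = SmilesLSTM(vocab_size=len(vocab) + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(10):                               # tiny training loop on the toy data
    opt.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    opt.step()

print(torch.sigmoid(model(x)))                    # predicted toxicity probabilities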
2101.10857 | Vinayak Elangovan | Vinayak Elangovan | Indoor Group Activity Recognition using Multi-Layered HMMs | 8 pages, 7 figures, 3 tables | Proceedings of Academics World International Conference,
Philadelphia, USA, 28th - 29th December, 2019 | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Discovery and recognition of Group Activities (GA) based on imagery data
processing have significant applications in persistent surveillance systems,
which play an important role in some Internet services. The process involves
the analysis of sequential imagery data with spatiotemporal associations.
Interpretation of video imagery requires a proper inference system capable of
discriminating and differentiating cohesive observations and interlinking them
to known ontologies. We propose an ontology-based Group Activity Recognition (GAR) approach with an inference
model that is capable of identifying and classifying a sequence of events in
group activities. A multi-layered Hidden Markov Model (HMM) is proposed to
recognize different levels of abstract GA. The multi-layered HMM consists of N
layers of HMMs, where each layer comprises M HMMs running in
parallel. The number of layers depends on the order of information to be
extracted. At each layer, by matching and correlating attributes of detected
group events, the model attempts to associate sensory observations to known
ontology perceptions. This paper demonstrates and compares the performance of
three different implementations of HMM, namely concatenated N-HMM, cascaded
C-HMM and hybrid H-HMM, for building an effective multi-layered HMM.
| [
{
"created": "Sat, 23 Jan 2021 22:02:12 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Elangovan",
"Vinayak",
""
]
] |
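A minimal sketch of the layered idea in the abstract above, using the hmmlearn library: lower-layer HMMs decode state sequences from raw features, and the decoded states are fed as features to an upper-layer HMM that captures higher-level activity structure. The synthetic data, feature split and state counts are placeholders; the paper's concatenated, cascaded and hybrid variants differ in how the layers are wired, which is not reproduced here.

# Sketch of a two-layer HMM: layer-1 HMMs decode low-level states from raw
# features; their decoded state sequences become observations for a layer-2 HMM.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 4))                  # placeholder low-level feature sequence

# Layer 1: two HMMs running in parallel on different feature subsets
layer1 = [hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
          for _ in range(2)]
layer1[0].fit(raw[:, :2])
layer1[1].fit(raw[:, 2:])
states = np.column_stack([layer1[0].predict(raw[:, :2]),
                          layer1[1].predict(raw[:, 2:])]).astype(float)

# Layer 2: an HMM over the layer-1 state sequences, modelling higher-level activity
layer2 = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
layer2.fit(states)
activity_sequence = layer2.predict(states)       # abstract group-activity labels
print(activity_sequence[:20])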
2101.10861 | Lucas Prado Osco | Lucas Prado Osco, Jos\'e Marcato Junior, Ana Paula Marques Ramos,
L\'ucio Andr\'e de Castro Jorge, Sarah Narges Fatholahi, Jonathan de Andrade
Silva, Edson Takashi Matsubara, Hemerson Pistori, Wesley Nunes Gon\c{c}alves,
Jonathan Li | A Review on Deep Learning in UAV Remote Sensing | 27 pages, 10 figures | International Journal of Applied Earth Observation and
Geoinformation, 2022 | 10.1016/j.jag.2021.102456 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep Neural Networks (DNNs) learn representation from data with an impressive
capability, and brought important breakthroughs for processing images,
time-series, natural language, audio, video, and many others. In the remote
sensing field, surveys and literature revisions specifically involving DNNs
algorithms' applications have been conducted in an attempt to summarize the
amount of information produced in its subfields. Recently, Unmanned Aerial
Vehicles (UAV) based applications have dominated aerial sensing research.
However, a literature revision that combines both "deep learning" and "UAV
remote sensing" thematics has not yet been conducted. The motivation for our
work was to present a comprehensive review of the fundamentals of Deep Learning
(DL) applied in UAV-based imagery. We focused mainly on describing
classification and regression techniques used in recent applications with
UAV-acquired data. For that, a total of 232 papers published in international
scientific journal databases were examined. We gathered the published material
and evaluated their characteristics regarding application, sensor, and
technique used. We relate how DL presents promising results and has the
potential for processing tasks associated with UAV-based image data. Lastly, we
project future perspectives, commenting on prominent DL paths to be explored
in the UAV remote sensing field. Our revision offers a friendly approach
to introducing, commenting on, and summarizing the state of the art in UAV-based image
applications with DNN algorithms in diverse subfields of remote sensing,
grouping it in the environmental, urban, and agricultural contexts.
| [
{
"created": "Fri, 22 Jan 2021 16:08:38 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jan 2021 14:09:43 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Apr 2023 19:02:35 GMT",
"version": "v3"
},
{
"created": "Sun, 20 Aug 2023 19:43:18 GMT",
"version": "v4"
}
] | 2023-08-22 | [
[
"Osco",
"Lucas Prado",
""
],
[
"Junior",
"José Marcato",
""
],
[
"Ramos",
"Ana Paula Marques",
""
],
[
"Jorge",
"Lúcio André de Castro",
""
],
[
"Fatholahi",
"Sarah Narges",
""
],
[
"Silva",
"Jonathan de Andrade",
""
],
[
"Matsubara",
"Edson Takashi",
""
],
[
"Pistori",
"Hemerson",
""
],
[
"Gonçalves",
"Wesley Nunes",
""
],
[
"Li",
"Jonathan",
""
]
] |
2101.10913 | Min Yan | Min Yan, Guoshan Zhang, Tong Zhang, Yueming Zhang | Nondiscriminatory Treatment: a straightforward framework for multi-human
parsing | null | Neurocomputing, 2021, 460: 126-138 | 10.1016/j.neucom.2021.07.023 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-human parsing aims to segment every body part of every human instance.
Nearly all state-of-the-art methods follow the "detection first" or
"segmentation first" pipelines. Different from them, we present an end-to-end
and box-free pipeline from a new and more human-intuitive perspective. In
training time, we directly do instance segmentation on humans and parts. More
specifically, we introduce the notion of "indiscriminate objects with categories",
which treats humans and parts without distinction and regards them both as
instances with categories. In the mask prediction, each binary mask is obtained
by a combination of prototypes shared among all human and part categories. In
inference time, we design a brand-new grouping post-processing method that
relates each part instance with one single human instance and groups them
together to obtain the final human-level parsing result. We name our method
Nondiscriminatory Treatment between Humans and Parts for Human Parsing (NTHP).
Experiments show that our network outperforms state-of-the-art
methods by a large margin on the MHP v2.0 and PASCAL-Person-Part datasets.
| [
{
"created": "Tue, 26 Jan 2021 16:31:21 GMT",
"version": "v1"
}
] | 2022-01-05 | [
[
"Yan",
"Min",
""
],
[
"Zhang",
"Guoshan",
""
],
[
"Zhang",
"Tong",
""
],
[
"Zhang",
"Yueming",
""
]
] |
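A small PyTorch sketch of the shared-prototype mask prediction mentioned in the abstract above: each instance (human or part, treated alike) gets a coefficient vector, and its mask is a sigmoid of a linear combination of prototype maps shared across all categories. The shapes and numbers are illustrative and not taken from the paper.

# Sketch: assemble instance masks from shared prototypes and per-instance
# coefficients (humans and parts are treated identically).
import torch

num_prototypes, h, w = 8, 128, 128
num_instances = 5                                    # e.g. 2 humans + 3 part instances

prototypes = torch.randn(num_prototypes, h, w)       # shared prototype maps (from a mask head)
coefficients = torch.randn(num_instances, num_prototypes)  # one vector per predicted instance

# Linear combination of prototypes followed by a sigmoid gives one soft mask per instance
masks = torch.sigmoid(torch.einsum("np,phw->nhw", coefficients, prototypes))
binary_masks = masks > 0.5                           # (num_instances, H, W) binary masks
print(binary_masks.shape)

At inference time the paper's grouping post-processing would additionally assign each part instance to exactly one human instance; that step is not shown here.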
2101.10927 | Artur Kulmizev | Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders S{\o}gaard,
Joakim Nivre | Attention Can Reflect Syntactic Structure (If You Let It) | null | EACL 2021 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Since the popularization of the Transformer as a general-purpose feature
encoder for NLP, many studies have attempted to decode linguistic structure
from its novel multi-head attention mechanism. However, much of such work
focused almost exclusively on English -- a language with rigid word order and a
lack of inflectional morphology. In this study, we present decoding experiments
for multilingual BERT across 18 languages in order to test the generalizability
of the claim that dependency syntax is reflected in attention patterns. We show
that full trees can be decoded above baseline accuracy from single attention
heads, and that individual relations are often tracked by the same heads across
languages. Furthermore, in an attempt to address recent debates about the
status of attention as an explanatory mechanism, we experiment with fine-tuning
mBERT on a supervised parsing objective while freezing different series of
parameters. Interestingly, in steering the objective to learn explicit
linguistic structure, we find much of the same structure represented in the
resulting attention patterns, with interesting differences with respect to
which parameters are frozen.
| [
{
"created": "Tue, 26 Jan 2021 16:49:16 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Ravishankar",
"Vinit",
""
],
[
"Kulmizev",
"Artur",
""
],
[
"Abdou",
"Mostafa",
""
],
[
"Søgaard",
"Anders",
""
],
[
"Nivre",
"Joakim",
""
]
] |
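As a concrete illustration of decoding syntax from attention, the sketch below extracts mBERT attention maps with the transformers library and, for one arbitrarily chosen head, greedily picks each token's most-attended-to token as its "head". The paper decodes full trees with a maximum-spanning-tree algorithm and aggregates subword attention to the word level; both steps are omitted here for brevity.

# Sketch: read one attention head from multilingual BERT and greedily pick a
# "syntactic head" for every token (proper tree decoding would use an MST algorithm).
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

sentence = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

layer, head = 7, 3                              # arbitrary choice for the sketch
attn = outputs.attentions[layer][0, head]       # (seq_len, seq_len) attention matrix
attn.fill_diagonal_(0.0)                        # forbid self-attachment

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for i, tok in enumerate(tokens):
    if tok in ("[CLS]", "[SEP]"):
        continue
    print(f"{tok:>10} -> {tokens[attn[i].argmax().item()]}")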
2101.10946 | Yakup Kutlu | Gokhan Altan, Yakup Kutlu, Yusuf Garbi, Adnan Ozhan Pekmezci, Serkan
Nural | Multimedia Respiratory Database (RespiratoryDatabase@TR): Auscultation
Sounds and Chest X-rays | 14 pages, 7 figures, Natural and Engineering Sciences | Natural and Engineering Sciences, 2017 | null | null | physics.med-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | Auscultation is a method for diagnosis of especially internal medicine
diseases such as cardiac, pulmonary and cardio-pulmonary by listening the
internal sounds from the body parts. It is the simplest and the most common
physical examination in the assessment processes of the clinical skills. In
this study, the lung and heart sounds are recorded synchronously from left and
right sides of posterior and anterior chest wall and back using two digital
stethoscopes in Antakya State Hospital. The chest X-rays and the pulmonary
function test variables and spirometric curves, the St. George respiratory
questionnaire (SGRQ-C) are collected as multimedia and clinical functional
analysis variables of the patients. The 4 channels of heart sounds are focused
on aortic, pulmonary, tricuspid and mitral areas. The 12 channels of lung
sounds are focused on upper lung, middle lung, lower lung and costophrenic
angle areas of posterior and anterior sides of the chest. The recordings are
validated and labelled by two pulmonologists evaluating the collected chest
x-ray, PFT and auscultation sounds of the subjects. The database consists of 30
healthy subjects and 45 subjects with pulmonary diseases such as asthma,
chronic obstructive pulmonary disease, bronchitis. The novelties of the
database are the ability to combine auscultation sound results, chest
X-ray and PFT; the synchronous assessment capability of the lung sounds;
image-processing-based computerized analysis of the respiratory system using
chest X-ray; and the opportunity to improve the analysis of both lung and heart
sounds on pulmonary and cardiac diseases.
| [
{
"created": "Thu, 21 Jan 2021 08:08:11 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Altan",
"Gokhan",
""
],
[
"Kutlu",
"Yakup",
""
],
[
"Garbi",
"Yusuf",
""
],
[
"Pekmezci",
"Adnan Ozhan",
""
],
[
"Nural",
"Serkan",
""
]
] |