id (string, length 10) | submitter (string, 3-52 chars) | authors (string, 6-7.24k chars) | title (string, 12-217 chars) | comments (string, 1-446 chars, nullable) | journal-ref (string, 4-297 chars) | doi (string, 12-118 chars, nullable) | report-no (string, 237 classes) | categories (string, 5-71 chars) | license (string, 6 classes) | abstract (string, 90-3.26k chars) | versions (list, 1-17 entries) | update_date (string, 969 classes) | authors_parsed (sequence, 1-451 entries) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
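As a reading aid for the records below, here is a minimal, hypothetical sketch of how entries with this schema might be loaded and inspected. It assumes the records are stored one JSON object per line with the field names listed above; the file path and helper names are illustrative, not part of the dataset.

```python
import json

# Hypothetical path: records are assumed to be stored as JSON Lines,
# one arXiv metadata record per line, with the fields shown in the header.
PATH = "arxiv-metadata.jsonl"

def iter_records(path):
    """Yield one metadata record (dict) per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def author_names(record):
    """Turn the `authors_parsed` field ([last, first, suffix] triples) into display names."""
    return [" ".join(part for part in (first, last, suffix) if part)
            for last, first, suffix in record["authors_parsed"]]

if __name__ == "__main__":
    for record in iter_records(PATH):
        print(record["id"], record["title"])
        print("  authors:", ", ".join(author_names(record)))
        print("  latest version:", record["versions"][-1]["version"],
              "created", record["versions"][-1]["created"])
        break  # just show the first record
```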
2101.00146 | Leibo Liu | Leibo Liu, Oscar Perez-Concha, Anthony Nguyen, Vicki Bennett, Louisa
Jorm | De-identifying Australian Hospital Discharge Summaries: An End-to-End
Framework using Ensemble of Deep Learning Models | null | Journal of Biomedical Informatics 135 (2022) 104215 | 10.1016/j.jbi.2022.104215 | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic Medical Records (EMRs) contain clinical narrative text that is of
great potential value to medical researchers. However, this information is
mixed with Personally Identifiable Information (PII) that presents risks to
patient and clinician confidentiality. This paper presents an end-to-end
de-identification framework to automatically remove PII from Australian hospital
discharge summaries. Our corpus included 600 hospital discharge summaries which
were extracted from the EMRs of two principal referral hospitals in Sydney,
Australia. Our end-to-end de-identification framework consists of three
components: 1) Annotation: labelling of PII in the 600 hospital discharge
summaries using five pre-defined categories: person, address, date of birth,
individual identification number, phone/fax number; 2) Modelling: training six
named entity recognition (NER) deep learning base-models on balanced and
imbalanced datasets; and evaluating ensembles that combine all six base-models,
the three base-models with the best F1 scores and the three base-models with
the best recall scores respectively, using token-level majority voting and
stacking methods; and 3) De-identification: removing PII from the hospital
discharge summaries. Our results showed that the ensemble model combined using
the stacking Support Vector Machine (SVM) method on the three base-models with
the best F1 scores achieved excellent results with an F1 score of 99.16% on the
test set of our corpus. We also evaluated the robustness of our modelling
component on the 2014 i2b2 de-identification dataset. Our ensemble model, which
uses the token-level majority voting method on all six base-models, achieved the
highest F1 score of 96.24% at strict entity matching and the highest F1 score
of 98.64% at binary token-level matching compared to two state-of-the-art
methods.
| [
{
"created": "Fri, 1 Jan 2021 03:09:31 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Aug 2021 06:04:21 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Dec 2021 14:12:01 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Oct 2022 00:46:47 GMT",
"version": "v4"
}
] | 2022-10-05 | [
[
"Liu",
"Leibo",
""
],
[
"Perez-Concha",
"Oscar",
""
],
[
"Nguyen",
"Anthony",
""
],
[
"Bennett",
"Vicki",
""
],
[
"Jorm",
"Louisa",
""
]
] |
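The modelling step described in the abstract above ensembles NER base-models with token-level majority voting. The following is only a generic illustration of that idea, not the authors' implementation; the label scheme, example predictions, and tie-breaking rule are assumptions.

```python
from collections import Counter

def token_level_majority_vote(predictions):
    """predictions: list of label sequences, one per base-model, all aligned
    to the same tokens, e.g. [["O", "B-PERSON", ...], ["O", "O", ...], ...].
    Returns one label per token, chosen by majority vote (ties broken by
    the order in which labels first appear at that token)."""
    assert len({len(seq) for seq in predictions}) == 1, "sequences must be aligned"
    voted = []
    for labels_at_token in zip(*predictions):
        counts = Counter(labels_at_token)
        voted.append(counts.most_common(1)[0][0])
    return voted

# Example with three hypothetical base-models over four tokens:
models = [
    ["O", "B-PERSON", "I-PERSON", "O"],
    ["O", "B-PERSON", "O",        "O"],
    ["O", "O",        "I-PERSON", "O"],
]
print(token_level_majority_vote(models))  # ['O', 'B-PERSON', 'I-PERSON', 'O']
```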
2101.00151 | Hung Le | Hung Le and Chinnadhurai Sankar and Seungwhan Moon and Ahmad Beirami
and Alborz Geramifard and Satwik Kottur | DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded
Dialogue | 20 pages, 14 figures, 8 tables | Association for Computational Linguistics (2021) | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A video-grounded dialogue system is required to understand both dialogue,
which contains semantic dependencies from turn to turn, and video, which
contains visual cues of spatial and temporal scene variations. Building such
dialogue systems is a challenging problem, involving various reasoning types on
both visual and language inputs. Existing benchmarks do not have enough
annotations to thoroughly analyze dialogue systems and understand their
capabilities and limitations in isolation. These benchmarks are also not
explicitly designed to minimise biases that models can exploit without actual
reasoning. To address these limitations, in this paper, we present DVD, a
Diagnostic Dataset for Video-grounded Dialogues. The dataset is designed to
contain minimal biases and has detailed annotations for the different types of
reasoning over the spatio-temporal space of video. Dialogues are synthesized
over multiple question turns, each of which is injected with a set of
cross-turn semantic relationships. We use DVD to analyze existing approaches,
providing interesting insights into their abilities and limitations. In total,
DVD is built from $11k$ CATER synthetic videos and contains $10$ instances of
$10$-round dialogues for each video, resulting in more than $100k$ dialogues
and $1M$ question-answer pairs. Our code and dataset are publicly available at
https://github.com/facebookresearch/DVDialogues.
| [
{
"created": "Fri, 1 Jan 2021 03:20:22 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Jun 2021 15:55:57 GMT",
"version": "v2"
}
] | 2021-06-15 | [
[
"Le",
"Hung",
""
],
[
"Sankar",
"Chinnadhurai",
""
],
[
"Moon",
"Seungwhan",
""
],
[
"Beirami",
"Ahmad",
""
],
[
"Geramifard",
"Alborz",
""
],
[
"Kottur",
"Satwik",
""
]
] |
2101.00153 | Bin Liu | Liu Bin, Yin Guosheng | Graphmax for Text Generation | null | Journal of Artificial Intelligence Research, vol. 78, pp.823-848,
Nov. 2023 | 10.1613/jair.1.15280 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In text generation, a large language model (LM) chooses each new
word based only on the previously generated context, using the softmax
function. Nevertheless, the co-occurrence statistics of words in a
scene-specific corpus are valuable in choosing the next word, as they
can help ensure that the topic of the generated text stays aligned with the
current task. To fully explore this co-occurrence information, we propose a
graphmax function for task-specific text generation. Using the graph-based
regularization, graphmax enables the final word choice to be determined by both
the global knowledge from the LM and the local knowledge from the
scene-specific corpus. The traditional softmax function is regularized with a
graph total variation (GTV) term, which incorporates the local knowledge into
the LM and encourages the model to consider the statistical relationships
between words in a scene-specific corpus. The proposed graphmax is versatile
and can be readily plugged into any large pre-trained LM for text generation
and machine translation. Through extensive experiments, we demonstrate that the
new GTV-based regularization can improve performances in various natural
language processing tasks in comparison with existing methods. Moreover,
through human experiments, we observe that participants can easily distinguish
between text generated with graphmax and text generated with softmax.
| [
{
"created": "Fri, 1 Jan 2021 03:29:21 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Oct 2023 08:01:47 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Dec 2023 12:57:23 GMT",
"version": "v3"
}
] | 2023-12-20 | [
[
"Bin",
"Liu",
""
],
[
"Guosheng",
"Yin",
""
]
] |
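For orientation, the two standard ingredients named in the graphmax abstract above, written in their textbook form; this is not the paper's exact objective, and the co-occurrence graph E with edge weights w_{ij} is assumed notation.

```latex
% Softmax over logits z, and a graph total variation (GTV) penalty on the
% resulting distribution p, for a word co-occurrence graph with edges E and
% weights w_{ij} (assumed notation; the paper's regularized objective is not
% reproduced here).
\[
  p_i = \frac{\exp(z_i)}{\sum_{j} \exp(z_j)},
  \qquad
  \mathrm{GTV}(p) = \sum_{(i,j) \in E} w_{ij}\,\lvert p_i - p_j \rvert .
\]
```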
2101.00173 | Kai Yi | Mohamed Elhoseiny, Kai Yi, Mohamed Elfeki | CIZSL++: Creativity Inspired Generative Zero-Shot Learning | This paper is an extended version of a paper published at the
International Conference on Computer Vision (ICCV), held in Seoul, Republic
of Korea, October 27-November 2, 2019. CIZSL-v2 code is available at
https://github.com/Vision-CAIR/CIZSLv2. arXiv admin note: substantial text
overlap with arXiv:1904.01109 | https://openaccess.thecvf.com/content_ICCV_2019/papers/Elhoseiny_Creativity_Inspired_Zero-Shot_Learning_ICCV_2019_paper.pdf | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Zero-shot learning (ZSL) aims at understanding unseen categories with no
training examples from class-level descriptions. To improve the discriminative
power of ZSL, we model the visual learning process of unseen categories with
inspiration from the psychology of human creativity for producing novel art.
First, we propose CIZSL-v1 as a creativity inspired model for generative ZSL.
We relate ZSL to human creativity by observing that ZSL is about recognizing
the unseen, and creativity is about creating a likable unseen. We introduce a
learning signal inspired by creativity literature that explores the unseen
space with hallucinated class-descriptions and encourages careful deviation of
their visual feature generations from seen classes while allowing knowledge
transfer from seen to unseen classes. Second, CIZSL-v2 is proposed as an
improved version of CIZSL-v1 for generative zero-shot learning. CIZSL-v2
consists of an investigation of additional inductive losses for unseen classes
along with a semantic guided discriminator. Empirically, we show consistently
that CIZSL losses can improve generative ZSL models on the challenging task of
generalized ZSL from a noisy text on CUB and NABirds datasets. We also show the
advantage of our approach to Attribute-based ZSL on AwA2, aPY, and SUN
datasets. We also show that CIZSL-v2 has improved performance compared to
CIZSL-v1.
| [
{
"created": "Fri, 1 Jan 2021 05:47:57 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 09:08:51 GMT",
"version": "v2"
}
] | 2021-02-18 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Yi",
"Kai",
""
],
[
"Elfeki",
"Mohamed",
""
]
] |
2101.00336 | Hanxun Huang | Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey | Neural Architecture Search via Combinatorial Multi-Armed Bandit | 10 pages, 7 figures | International Joint Conference on Neural Networks (IJCNN) 2021 | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neural Architecture Search (NAS) has gained significant popularity as an
effective tool for designing high performance deep neural networks (DNNs). NAS
can be performed via policy gradient, evolutionary algorithms, differentiable
architecture search or tree-search methods. While significant progress has been
made for both policy gradient and differentiable architecture search,
tree-search methods have so far failed to achieve comparable accuracy or search
efficiency. In this paper, we formulate NAS as a Combinatorial Multi-Armed
Bandit (CMAB) problem (CMAB-NAS). This allows the decomposition of a large
search space into smaller blocks where tree-search methods can be applied more
effectively and efficiently. We further leverage a tree-based method called
Nested Monte-Carlo Search to tackle the CMAB-NAS problem. On CIFAR-10, our
approach discovers a cell structure that achieves a low error rate that is
comparable to the state-of-the-art, using only 0.58 GPU days, which is 20 times
faster than current tree-search methods. Moreover, the discovered structure
transfers well to large-scale datasets such as ImageNet.
| [
{
"created": "Fri, 1 Jan 2021 23:29:33 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Apr 2021 14:13:15 GMT",
"version": "v2"
}
] | 2021-04-27 | [
[
"Huang",
"Hanxun",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Erfani",
"Sarah M.",
""
],
[
"Bailey",
"James",
""
]
] |
2101.00360 | Pingyi Fan Prof. | Pingyi Fan | New-Type Hoeffding's Inequalities and Application in Tail Bounds | 8 pages, 1 figure | Open Journal of Mathematical Sciences Vol.5 No.1 pp. 248-261, 2021 | 10.30538/oms2021.0161 | ISSN: 2523-0212 (Online) 2616-4906 (Print) | math.ST cs.AI cs.IT math.IT math.PR stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well known that Hoeffding's inequality has many applications in the
signal and information processing fields. How to improve Hoeffding's inequality
and refine its applications has always attracted much attention. An improvement
of Hoeffding's inequality was recently given by Hertz \cite{r1}. Even though the
improvement is not large, it can still be used to update many known results
derived from the original Hoeffding inequality, especially the Hoeffding-Azuma
inequality for martingales. However, the original Hoeffding inequality and
Hertz's refinement only consider the first-order moment of the random variables.
In this paper, we present a new type of Hoeffding inequality in which
higher-order moments of the random variables are taken into account. This
yields considerable improvements in tail bound evaluation compared with the
known results. We expect that the new Hoeffding-type inequalities will find
further interesting applications in related fields that use Hoeffding's
results.
| [
{
"created": "Sat, 2 Jan 2021 03:19:11 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Fan",
"Pingyi",
""
]
] |
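For reference, the classical Hoeffding inequality that the abstract above takes as its starting point, in its standard form; the paper's higher-order-moment variants are not reproduced here.

```latex
% Classical Hoeffding inequality (standard statement, not from the paper).
% For independent X_1, ..., X_n with a_i <= X_i <= b_i and S_n = X_1 + ... + X_n:
\[
  \Pr\bigl( S_n - \mathbb{E}[S_n] \ge t \bigr)
  \le \exp\!\left( - \frac{2 t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}} \right),
  \qquad t > 0 .
\]
```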
2101.00388 | Houjin Yu | Houjin Yu, Xian-Ling Mao, Zewen Chi, Wei Wei and Heyan Huang | A Robust and Domain-Adaptive Approach for Low-Resource Named Entity
Recognition | Best Student Paper of 2020 IEEE International Conference on Knowledge
Graph (ICKG) | 2020 IEEE International Conference on Knowledge Graph (ICKG) (pp.
297-304) | 10.1109/ICBK50248.2020.00050 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building reliable named entity
recognition (NER) systems using limited annotated data has recently attracted
much attention. Nearly all existing works rely heavily on domain-specific
resources, such as external lexicons and knowledge bases. However, such
resources are often unavailable, and constructing them is difficult and
expensive, which has become a key obstacle to wider adoption. To tackle this
problem, we propose RDANER, a novel robust and domain-adaptive approach for
low-resource NER that uses only cheap and easily obtainable resources.
Extensive experiments on three benchmark datasets demonstrate that our approach
achieves the best performance among methods using only such resources, and
delivers competitive results against state-of-the-art methods that rely on
hard-to-obtain, domain-specific resources. All our code and corpora can be
found at https://github.com/houking-can/RDANER.
| [
{
"created": "Sat, 2 Jan 2021 06:47:01 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Yu",
"Houjin",
""
],
[
"Mao",
"Xian-Ling",
""
],
[
"Chi",
"Zewen",
""
],
[
"Wei",
"Wei",
""
],
[
"Huang",
"Heyan",
""
]
] |
2101.00395 | Masahiro Toyoura | Siqiang Chen, Masahiro Toyoura, Takamasa Terada, Xiaoyang Mao, Gang Xu | Image-based Textile Decoding | null | Integrated Computer-Aided Engineering, Pre-press, pp. 1-14, 2020 | 10.3233/ICA-200647 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | A textile fabric consists of countless parallel vertical yarns (warps) and
horizontal yarns (wefts). While common looms can weave repetitive patterns,
Jacquard looms can weave the patterns without repetition restrictions. A
pattern in which the warps and wefts cross on a grid is defined in a binary
matrix. The binary matrix can define which warp and weft is on top at each grid
point of the Jacquard fabric. The process can be regarded as encoding from
pattern to textile. In this work, we propose a decoding method that generates a
binary pattern from a textile fabric that has already been woven. We could not
use a deep neural network to learn the process based solely on the training set
of patterns and observed fabric images. The crossing points in the observed
image were not completely located on the grid points, so it was difficult to
establish a direct correspondence between the fabric images and the pattern
represented by the matrix in the framework of deep learning. Therefore, we
propose a method that can apply the framework of deep learning via the
intermediate representation of patterns and images. We show how to convert a
pattern into an intermediate representation and how to reconvert the output
into a pattern and confirm its effectiveness. In this experiment, we confirmed
that 93% of the correct pattern was obtained by decoding the pattern from the
actual fabric images and weaving them again.
| [
{
"created": "Sat, 2 Jan 2021 07:41:34 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Chen",
"Siqiang",
""
],
[
"Toyoura",
"Masahiro",
""
],
[
"Terada",
"Takamasa",
""
],
[
"Mao",
"Xiaoyang",
""
],
[
"Xu",
"Gang",
""
]
] |
2101.00407 | Liyuan Wang | Liyuan Wang, Kuo Yang, Chongxuan Li, Lanqing Hong, Zhenguo Li, Jun Zhu | ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for
Semi-supervised Continual Learning | null | CVPR 2021 | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning usually assumes the incoming data are fully labeled, which
might not be applicable in real applications. In this work, we consider
semi-supervised continual learning (SSCL) that incrementally learns from
partially labeled data. Observing that existing continual learning methods lack
the ability to continually exploit the unlabeled data, we propose deep Online
Replay with Discriminator Consistency (ORDisCo) to interdependently learn a
classifier with a conditional generative adversarial network (GAN), which
continually passes the learned data distribution to the classifier. In
particular, ORDisCo replays data sampled from the conditional generator to the
classifier in an online manner, exploiting unlabeled data in a time- and
storage-efficient way. Further, to explicitly overcome the catastrophic
forgetting of unlabeled data, we selectively stabilize parameters of the
discriminator that are important for discriminating the pairs of old unlabeled
data and their pseudo-labels predicted by the classifier. We extensively
evaluate ORDisCo on various semi-supervised learning benchmark datasets for
SSCL, and show that ORDisCo achieves significant performance improvement on
SVHN, CIFAR10 and Tiny-ImageNet, compared to strong baselines.
| [
{
"created": "Sat, 2 Jan 2021 09:04:14 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Apr 2021 01:57:03 GMT",
"version": "v2"
}
] | 2022-02-15 | [
[
"Wang",
"Liyuan",
""
],
[
"Yang",
"Kuo",
""
],
[
"Li",
"Chongxuan",
""
],
[
"Hong",
"Lanqing",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Zhu",
"Jun",
""
]
] |
2101.00433 | Michael Saxon | Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang
Wang | Modeling Disclosive Transparency in NLP Application Descriptions | To appear at EMNLP 2021. 15 pages, 10 figures, 7 tables | Proceedings of the 2021 Conference on Empirical Methods in Natural
Language Processing, pp 2023-2037 | 10.18653/v1/2021.emnlp-main.153 | null | cs.CL cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Broader disclosive transparency$-$truth and clarity in communication
regarding the function of AI systems$-$is widely considered desirable.
Unfortunately, it is a nebulous concept, difficult to both define and quantify.
This is problematic, as previous work has demonstrated possible trade-offs and
negative consequences to disclosive transparency, such as a confusion effect,
where "too much information" clouds a reader's understanding of what a system
description means. Disclosive transparency's subjective nature has rendered
deep study into these problems and their remedies difficult. To improve this
state of affairs, we introduce neural language model-based probabilistic
metrics to directly model disclosive transparency, and demonstrate that they
correlate with user and expert opinions of system transparency, making them a
valid objective proxy. Finally, we demonstrate the use of these metrics in a
pilot study quantifying the relationships between transparency, confusion, and
user perceptions in a corpus of real NLP system descriptions.
| [
{
"created": "Sat, 2 Jan 2021 11:46:17 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Apr 2021 03:42:18 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Aug 2021 03:30:20 GMT",
"version": "v3"
},
{
"created": "Fri, 10 Sep 2021 17:54:54 GMT",
"version": "v4"
}
] | 2022-05-26 | [
[
"Saxon",
"Michael",
""
],
[
"Levy",
"Sharon",
""
],
[
"Wang",
"Xinyi",
""
],
[
"Albalak",
"Alon",
""
],
[
"Wang",
"William Yang",
""
]
] |
2101.00441 | Jakub Marecek | Sam D. Allen and Edmund K.Burke and Jakub Marecek | A space-indexed formulation of packing boxes into a larger box | arXiv admin note: substantial text overlap with arXiv:1412.2526 | Operations Research Letters, Volume 40, Issue 1, January 2012,
Pages 20-24 | 10.1016/j.orl.2011.10.008 | null | math.OC cs.AI cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current integer programming solvers fail to decide whether 12 unit cubes can
be packed into a 1x1x11 box within an hour using the natural relaxation of
Chen/Padberg. We present an alternative relaxation of the problem of packing
boxes into a larger box, which makes it possible to solve much larger
instances.
| [
{
"created": "Sat, 2 Jan 2021 12:10:47 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Allen",
"Sam D.",
""
],
[
"Burke",
"Edmund K.",
""
],
[
"Marecek",
"Jakub",
""
]
] |
2101.00443 | Sourav Garg | Sourav Garg, Niko S\"underhauf, Feras Dayoub, Douglas Morrison,
Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen
Gould, Peter Corke, Michael Milford | Semantics for Robotic Mapping, Perception and Interaction: A Survey | 81 pages, 1 figure, published in Foundations and Trends in Robotics,
2020 | Foundations and Trends in Robotics: Vol. 8: No. 1-2, pp 1-224
(2020) | 10.1561/2300000059 | null | cs.RO cs.CV cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robots to navigate and interact more richly with the world around them,
they will likely require a deeper understanding of the world in which they
operate. In robotics and related research fields, the study of understanding is
often referred to as semantics, which dictates what the world "means" to a
robot, and is strongly tied to the question of how to represent that meaning.
With humans and robots increasingly operating in the same world, the prospects
of human-robot interaction also bring semantics and ontology of natural
language into the picture. Driven by need, as well as by enablers like
increasing availability of training data and computational resources, semantics
is a rapidly growing research area in robotics. The field has received
significant attention in the research literature to date, but most reviews and
surveys have focused on particular aspects of the topic: the technical research
issues regarding its use in specific robotic topics like mapping or
segmentation, or its relevance to one particular application domain like
autonomous driving. A new treatment is therefore required, and is also timely
because so much relevant research has occurred since many of the key surveys
were published. This survey therefore provides an overarching snapshot of where
semantics in robotics stands today. We establish a taxonomy for semantics
research in or relevant to robotics, split into four broad categories of
activity, in which semantics are extracted, used, or both. Within these broad
categories we survey dozens of major topics including fundamentals from the
computer vision field and key robotics research areas utilizing semantics,
including mapping, navigation and interaction with the world. The survey also
covers key practical considerations, including enablers like increased data
availability and improved computational hardware, and major application areas
where...
| [
{
"created": "Sat, 2 Jan 2021 12:34:39 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Garg",
"Sourav",
""
],
[
"Sünderhauf",
"Niko",
""
],
[
"Dayoub",
"Feras",
""
],
[
"Morrison",
"Douglas",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Wu",
"Qi",
""
],
[
"Chin",
"Tat-Jun",
""
],
[
"Reid",
"Ian",
""
],
[
"Gould",
"Stephen",
""
],
[
"Corke",
"Peter",
""
],
[
"Milford",
"Michael",
""
]
] |
2101.00529 | Pengchuan Zhang | Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang,
Lijuan Wang, Yejin Choi, Jianfeng Gao | VinVL: Revisiting Visual Representations in Vision-Language Models | null | CVPR 2021 | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a detailed study of improving visual representations for
vision language (VL) tasks and develops an improved object detection model to
provide object-centric representations of images. Compared to the most widely
used \emph{bottom-up and top-down} model \cite{anderson2018bottom}, the new
model is bigger, better-designed for VL tasks, and pre-trained on much larger
training corpora that combine multiple public annotated object detection
datasets. Therefore, it can generate representations of a richer collection of
visual objects and concepts. While previous VL research focuses mainly on
improving the vision-language fusion model and leaves the object detection
model improvement untouched, we show that visual features matter significantly
in VL models. In our experiments we feed the visual features generated by the
new object detection model into a Transformer-based VL fusion model OSCAR
\cite{li2020oscar}, and utilize an improved approach (OSCAR+) to pre-train the
VL model and fine-tune it on a wide range of downstream VL tasks. Our results
show that the new visual features significantly improve the performance across
all VL tasks, creating new state-of-the-art results on seven public benchmarks.
We will release the new object detection model to the public.
| [
{
"created": "Sat, 2 Jan 2021 23:35:27 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Mar 2021 01:27:16 GMT",
"version": "v2"
}
] | 2021-03-11 | [
[
"Zhang",
"Pengchuan",
""
],
[
"Li",
"Xiujun",
""
],
[
"Hu",
"Xiaowei",
""
],
[
"Yang",
"Jianwei",
""
],
[
"Zhang",
"Lei",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Choi",
"Yejin",
""
],
[
"Gao",
"Jianfeng",
""
]
] |
2101.00561 | Tianxiao Zhang | Tianxiao Zhang, Wenchi Ma, Guanghui Wang | Six-channel Image Representation for Cross-domain Object Detection | null | 2021 11th International Conference on Image and Graphics (ICIG) | 10.1007/978-3-030-87355-4_15 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most deep learning models are data-driven and the excellent performance is
highly dependent on the abundant and diverse datasets. However, it is very hard
to obtain and label the datasets of some specific scenes or applications. If we
train the detector using the data from one domain, it cannot perform well on
the data from another domain due to domain shift, which is one of the big
challenges of most object detection models. To address this issue, some
image-to-image translation techniques have been employed to generate some fake
data of some specific scenes to train the models. With the advent of Generative
Adversarial Networks (GANs), we could realize unsupervised image-to-image
translation in both directions from a source to a target domain and from the
target to the source domain. In this study, we report a new approach to making
use of the generated images. We propose to concatenate the original 3-channel
images and their corresponding GAN-generated fake images to form 6-channel
representations of the dataset, hoping to address the domain shift problem
while exploiting the success of available detection models. The idea of
augmented data representation may inspire further study on object detection and
other applications.
| [
{
"created": "Sun, 3 Jan 2021 04:50:03 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 21:03:25 GMT",
"version": "v2"
}
] | 2022-03-09 | [
[
"Zhang",
"Tianxiao",
""
],
[
"Ma",
"Wenchi",
""
],
[
"Wang",
"Guanghui",
""
]
] |
2101.00603 | Haotian Li | Zhuqing Jiang, Haotian Li, Liangjie Liu, Aidong Men, Haiying Wang | A Switched View of Retinex: Deep Self-Regularized Low-Light Image
Enhancement | null | Neurocomputing 454 (2021): 361-372 | 10.1016/j.neucom.2021.05.025 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-regularized low-light image enhancement does not require any
normal-light image in training, thereby freeing from the chains on paired or
unpaired low-/normal-light images. However, existing methods suffer from color deviation
and fail to generalize to various lighting conditions. This paper presents a
novel self-regularized method based on Retinex, which, inspired by HSV,
preserves all colors (Hue, Saturation) and only integrates Retinex theory into
brightness (Value). We build a reflectance estimation network by restricting
the consistency of reflectances embedded in both the original and a novel
random disturbed form of the brightness of the same scene. The generated
reflectance, which is assumed to be irrelevant of illumination by Retinex, is
treated as enhanced brightness. Our method is efficient as a low-light image is
decoupled into two subspaces, color and brightness, for better preservation and
enhancement. Extensive experiments demonstrate that our method outperforms
multiple state-of-the-art algorithms qualitatively and quantitatively and
adapts to more lighting conditions.
| [
{
"created": "Sun, 3 Jan 2021 10:40:31 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Jiang",
"Zhuqing",
""
],
[
"Li",
"Haotian",
""
],
[
"Liu",
"Liangjie",
""
],
[
"Men",
"Aidong",
""
],
[
"Wang",
"Haiying",
""
]
] |
2101.00667 | Idoia Ruiz | Idoia Ruiz, Lorenzo Porzi, Samuel Rota Bul\`o, Peter Kontschieder,
Joan Serrat | Weakly Supervised Multi-Object Tracking and Segmentation | Accepted at Autonomous Vehicle Vision WACV 2021 Workshop | Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV) 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the problem of weakly supervised Multi-Object Tracking and
Segmentation, i.e. joint weakly supervised instance segmentation and
multi-object tracking, in which we do not provide any kind of mask annotation.
To address it, we design a novel synergistic training strategy by taking
advantage of multi-task learning, i.e. classification and tracking tasks guide
the training of the unsupervised instance segmentation. For that purpose, we
extract weak foreground localization information, provided by Grad-CAM
heatmaps, to generate a partial ground truth to learn from. Additionally, RGB
image level information is employed to refine the mask prediction at the edges
of the objects. We evaluate our method on KITTI MOTS, the most representative
benchmark for this task, reducing the performance gap on the MOTSP metric
between the fully supervised and weakly supervised approach to just 12% and
12.7% for cars and pedestrians, respectively.
| [
{
"created": "Sun, 3 Jan 2021 17:06:43 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Ruiz",
"Idoia",
""
],
[
"Porzi",
"Lorenzo",
""
],
[
"Bulò",
"Samuel Rota",
""
],
[
"Kontschieder",
"Peter",
""
],
[
"Serrat",
"Joan",
""
]
] |
2101.00703 | Samit Chakraborty | Samit Chakraborty, Marguerite Moore, Lisa Parrillo-Chapman | Automatic Defect Detection of Print Fabric Using Convolutional Neural
Network | 8 pages, 4 figures, Conference | Digital Fashion Innovation e-Symposium, 2020 | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Automatic defect detection is a challenging task because of the variability
in texture and type of fabric defects. An effective defect detection system
enables manufacturers to improve the quality of processes and products.
Automation across the textile manufacturing systems would reduce fabric wastage
and increase profitability by saving cost and resources. There are different
contemporary research on automatic defect detection systems using image
processing and machine learning techniques. These techniques differ from each
other based on the manufacturing processes and defect types. Researchers have
also been able to establish real-time defect detection system during weaving.
Although there has been research on patterned fabric defect detection, it has
addressed weaving faults such as holes and warp and weft defects. However,
there has not been any research designed to detect defects that arise during
printing, such as spot and print mismatch. This research fills that gap by
developing a print fabric database and implementing a deep convolutional
neural network (CNN).
| [
{
"created": "Sun, 3 Jan 2021 20:56:56 GMT",
"version": "v1"
}
] | 2021-01-19 | [
[
"Chakraborty",
"Samit",
""
],
[
"Moore",
"Marguerite",
""
],
[
"Parrillo-Chapman",
"Lisa",
""
]
] |
2101.00784 | Zekun Wang | Zekun Wang, Pengwei Wang, Peter C. Louis, Lee E. Wheless, Yuankai Huo | WearMask: Fast In-browser Face Mask Detection with Serverless Edge
Computing for COVID-19 | null | Electronic Imaging, 2023, pp 229-1 - 229-6 | 10.2352/EI.2023.35.11.HPCI-229 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The COVID-19 epidemic has been a significant healthcare challenge in the
United States. According to the Centers for Disease Control and Prevention
(CDC), COVID-19 infection is transmitted predominately by respiratory droplets
generated when people breathe, talk, cough, or sneeze. Wearing a mask is the
primary, effective, and convenient method of blocking 80% of all respiratory
infections. Therefore, many face mask detection and monitoring systems have
been developed to provide effective supervision for hospitals, airports,
public transportation, sports venues, and retail locations. However, the
current commercial face mask detection systems are typically bundled with
specific software or hardware, impeding public accessibility. In this paper, we
propose an in-browser serverless edge-computing based face mask detection
solution, called Web-based efficient AI recognition of masks (WearMask), which
can be deployed on any common devices (e.g., cell phones, tablets, computers)
that have internet connections using web browsers, without installing any
software. The serverless edge-computing design minimizes the extra hardware
costs (e.g., specific devices or cloud computing servers). The contribution of
the proposed method is to provide a holistic edge-computing framework of
integrating (1) deep learning models (YOLO), (2) high-performance neural
network inference computing framework (NCNN), and (3) a stack-based virtual
machine (WebAssembly). For end-users, our web-based solution has advantages of
(1) serverless edge-computing design with minimal device limitation and privacy
risk, (2) installation-free deployment, (3) low computing requirements, and (4)
high detection speed. Our WearMask application has been launched with public
access at facemask-detection.com.
| [
{
"created": "Mon, 4 Jan 2021 05:50:48 GMT",
"version": "v1"
}
] | 2023-04-03 | [
[
"Wang",
"Zekun",
""
],
[
"Wang",
"Pengwei",
""
],
[
"Louis",
"Peter C.",
""
],
[
"Wheless",
"Lee E.",
""
],
[
"Huo",
"Yuankai",
""
]
] |
2101.00843 | Dennis Soemers | Cameron Browne and Dennis J. N. J. Soemers and Eric Piette | Strategic Features for General Games | Paper exactly as it appeared at KEG Workshop held at AAAI 2019 | Proceedings of the 2nd Workshop on Knowledge Extraction from Games
co-located with 33rd AAAI Conference on Artificial Intelligence (AAAI 2019) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short paper describes an ongoing research project that requires the
automated self-play learning and evaluation of a large number of board games in
digital form. We describe the approach we are taking to determine relevant
features, for biasing MCTS playouts for arbitrary games played on arbitrary
geometries. Benefits of our approach include efficient implementation, the
potential to transfer learnt knowledge to new contexts, and the potential to
explain strategic knowledge embedded in features in human-comprehensible terms.
| [
{
"created": "Mon, 4 Jan 2021 09:30:07 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Browne",
"Cameron",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Eric",
""
]
] |
2101.00910 | Shang-Hua Gao | Shang-Hua Gao, Qi Han, Zhong-Yu Li, Pai Peng, Liang Wang, Ming-Ming
Cheng | Global2Local: Efficient Structure Search for Video Action Segmentation | Accepted by CVPR 2021. Source code:
https://github.com/ShangHua-Gao/G2L-search | CVPR 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal receptive fields of models play an important role in action
segmentation. Large receptive fields facilitate the long-term relations among
video clips while small receptive fields help capture the local details.
Existing methods construct models with hand-designed receptive fields in
layers. Can we effectively search for receptive field combinations to replace
hand-designed patterns? To answer this question, we propose to find better
receptive field combinations through a global-to-local search scheme. Our
search scheme exploits both global search to find the coarse combinations and
local search to get the refined receptive field combination patterns further.
The global search finds possible coarse combinations other than human-designed
patterns. On top of the global search, we propose an expectation guided
iterative local search scheme to refine combinations effectively. Our
global-to-local search can be plugged into existing action segmentation methods
to achieve state-of-the-art performance.
| [
{
"created": "Mon, 4 Jan 2021 12:06:03 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Apr 2021 02:51:47 GMT",
"version": "v2"
}
] | 2021-05-03 | [
[
"Gao",
"Shang-Hua",
""
],
[
"Han",
"Qi",
""
],
[
"Li",
"Zhong-Yu",
""
],
[
"Peng",
"Pai",
""
],
[
"Wang",
"Liang",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] |
2101.01039 | Suzan Verberne | Ken Voskuil and Suzan Verberne | Improving reference mining in patents with BERT | 10 pages, 3 figures | Published in the 11th International Workshop on
Bibliometric-enhanced Information Retrieval (BIR 2021) | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper we address the challenge of extracting scientific references
from patents. We approach the problem as a sequence labelling task and
investigate the merits of BERT models to the extraction of these long
sequences. References in patents to scientific literature are relevant to study
the connection between science and industry. Most prior work only uses the
front-page citations for this analysis, which are provided in the metadata of
patent archives. In this paper we build on prior work using Conditional Random
Fields (CRF) and Flair for reference extraction. We improve the quality of the
training data and train three BERT-based models on the labelled data (BERT,
bioBERT, sciBERT). We find that the improved training data leads to a large
improvement in the quality of the trained models. In addition, the BERT models
beat CRF and Flair, with recall scores around 97% obtained with cross
validation. With the best model we label a large collection of 33 thousand
patents, extract the citations, and match them to publications in the Web of
Science database. We extract 50% more references than with the old training
data and methods: 735 thousand references in total. With these
patent-publication links, follow-up research will further analyze which types
of scientific work lead to inventions.
| [
{
"created": "Mon, 4 Jan 2021 15:56:21 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jan 2021 10:03:15 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Mar 2021 11:26:01 GMT",
"version": "v3"
}
] | 2021-03-11 | [
[
"Voskuil",
"Ken",
""
],
[
"Verberne",
"Suzan",
""
]
] |
2101.01213 | Ana Sofia Medeiros Oliveira | Sofia Oliveira and Daniel Loureiro and Al\'ipio Jorge | Improving Portuguese Semantic Role Labeling with Transformers and
Transfer Learning | 30 pages, 3 figures; Fixed broken links in References | 2021 IEEE 8th International Conference on Data Science and
Advanced Analytics (DSAA), 2021, pp. 1-9 | 10.1109/DSAA53316.2021.9564238 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Natural Language Processing task of determining "Who did what to whom" is
called Semantic Role Labeling. For English, recent methods based on Transformer
models have allowed for major improvements in this task over the previous state
of the art. However, for low resource languages, like Portuguese, currently
available semantic role labeling models are hindered by scarce training data.
In this paper, we explore a model architecture with only a pre-trained
Transformer-based model, a linear layer, softmax and Viterbi decoding. We
substantially improve the state-of-the-art performance in Portuguese by over 15
F1. Additionally, we improve semantic role labeling results in Portuguese
corpora by exploiting cross-lingual transfer learning using multilingual
pre-trained models, and transfer learning from dependency parsing in
Portuguese, evaluating the various proposed approaches empirically.
| [
{
"created": "Mon, 4 Jan 2021 19:56:01 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jan 2021 11:05:52 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Oct 2021 19:00:10 GMT",
"version": "v3"
}
] | 2021-11-02 | [
[
"Oliveira",
"Sofia",
""
],
[
"Loureiro",
"Daniel",
""
],
[
"Jorge",
"Alípio",
""
]
] |
2101.01214 | Eric Guzman | Eric Guzman and Joel Meyers | Reconstructing Patchy Reionization with Deep Learning | 14 pages, 9 figures. Updated to match published version. Code
available from https://github.com/EEmGuzman/resunet-cmb | Phys. Rev. D 104, 043529 (2021) | 10.1103/PhysRevD.104.043529 | null | astro-ph.CO cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The precision anticipated from next-generation cosmic microwave background
(CMB) surveys will create opportunities for characteristically new insights
into cosmology. Secondary anisotropies of the CMB will have an increased
importance in forthcoming surveys, due both to the cosmological information
they encode and the role they play in obscuring our view of the primary
fluctuations. Quadratic estimators have become the standard tools for
reconstructing the fields that distort the primary CMB and produce secondary
anisotropies. While successful for lensing reconstruction with current data,
quadratic estimators will be sub-optimal for the reconstruction of lensing and
other effects at the expected sensitivity of the upcoming CMB surveys. In this
paper we describe a convolutional neural network, ResUNet-CMB, that is capable
of the simultaneous reconstruction of two sources of secondary CMB
anisotropies, gravitational lensing and patchy reionization. We show that the
ResUNet-CMB network significantly outperforms the quadratic estimator at low
noise levels and is not subject to the lensing-induced bias on the patchy
reionization reconstruction that would be present with a straightforward
application of the quadratic estimator.
| [
{
"created": "Mon, 4 Jan 2021 19:58:28 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Aug 2021 15:40:26 GMT",
"version": "v2"
}
] | 2021-08-25 | [
[
"Guzman",
"Eric",
""
],
[
"Meyers",
"Joel",
""
]
] |
2101.01228 | Nicholas Botzer | Nicholas Botzer, Yifan Ding, Tim Weninger | Reddit Entity Linking Dataset | 20 pages and 4 figures | Information Processing and Management Volume 58, Issue 3 (May
2021) 1-20 | 10.1016/j.ipm.2020.102479 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce and make publicly available an entity linking dataset from
Reddit that contains 17,316 linked entities, each annotated by three human
annotators and then grouped into Gold, Silver, and Bronze to indicate
inter-annotator agreement. We analyze the different errors and disagreements
made by annotators and suggest three types of corrections to the raw data.
Finally, we tested existing entity linking models that are trained and tuned on
text from non-social media datasets. We find that, although these existing
entity linking models perform very well on their original datasets, they
perform poorly on this social media dataset. We also show that the majority of
these errors can be attributed to poor performance on the mention detection
subtask. These results indicate the need for better entity linking models that
can be applied to the enormous amount of social media text.
| [
{
"created": "Mon, 4 Jan 2021 20:34:04 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Feb 2021 17:54:48 GMT",
"version": "v2"
}
] | 2021-02-26 | [
[
"Botzer",
"Nicholas",
""
],
[
"Ding",
"Yifan",
""
],
[
"Weninger",
"Tim",
""
]
] |
2101.01321 | Sehoon Kim | Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer | I-BERT: Integer-only BERT Quantization | null | ICML 2021 (Oral) | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer based models, like BERT and RoBERTa, have achieved
state-of-the-art results in many Natural Language Processing tasks. However,
their memory footprint, inference latency, and power consumption are
prohibitive for efficient inference at the edge, and even at the data center. While
quantization can be a viable solution for this, previous work on quantizing
Transformer based models uses floating-point arithmetic during inference, which
cannot efficiently utilize integer-only logical units such as the recent Turing
Tensor Cores, or traditional integer-only ARM processors. In this work, we
propose I-BERT, a novel quantization scheme for Transformer based models that
quantizes the entire inference with integer-only arithmetic. Based on
lightweight integer-only approximation methods for nonlinear operations, e.g.,
GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end
integer-only BERT inference without any floating point calculation. We evaluate
our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that
for both cases, I-BERT achieves similar (and slightly higher) accuracy as
compared to the full-precision baseline. Furthermore, our preliminary
implementation of I-BERT shows a speedup of 2.4-4.0x for INT8 inference on a T4
GPU system as compared to FP32 inference. The framework has been developed in
PyTorch and has been open-sourced.
| [
{
"created": "Tue, 5 Jan 2021 02:42:58 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Feb 2021 09:11:11 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Jun 2021 07:53:22 GMT",
"version": "v3"
}
] | 2022-05-02 | [
[
"Kim",
"Sehoon",
""
],
[
"Gholami",
"Amir",
""
],
[
"Yao",
"Zhewei",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Keutzer",
"Kurt",
""
]
] |
2101.01597 | Nantheera Anantrasirichai | N. Anantrasirichai and David Bull | Contextual colorization and denoising for low-light ultra high
resolution sequences | 5 pages | 2021 IEEE International Conference on Image Processing (ICIP) | 10.1109/ICIP42928.2021.9506694 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Low-light image sequences generally suffer from spatio-temporal incoherent
noise, flicker and blurring of moving objects. These artefacts significantly
reduce visual quality and, in most cases, post-processing is needed in order to
generate acceptable quality. Most state-of-the-art enhancement methods based on
machine learning require ground truth data but this is not usually available
for naturally captured low light sequences. We tackle these problems with an
unpaired-learning method that offers simultaneous colorization and denoising.
Our approach is an adaptation of the CycleGAN structure. To overcome the
excessive memory limitations associated with ultra high resolution content, we
propose a multiscale patch-based framework, capturing both local and contextual
features. Additionally, an adaptive temporal smoothing technique is employed to
remove flickering artefacts. Experimental results show that our method
outperforms existing approaches in terms of subjective quality and that it is
robust to variations in brightness levels and noise.
| [
{
"created": "Tue, 5 Jan 2021 15:35:29 GMT",
"version": "v1"
}
] | 2022-03-04 | [
[
"Anantrasirichai",
"N.",
""
],
[
"Bull",
"David",
""
]
] |
2101.01665 | Rana Mostafa AbdElMohsen AbdElMolla | Reem Abdel-Salam, Rana Mostafa and Mayada Hadhood | Human Activity Recognition using Wearable Sensors: Review, Challenges,
Evaluation Benchmark | Accepted at the 2nd International Workshop on Deep Learning for Human
Activity Recognition, held in conjunction with IJCAI-PRICAI 2020, January
2021, Japan and published at Springer Communications in Computer and
Information Science (CCIS) proceedings | CCIS. 1370(2021) 1-15 | 10.1007/978-981-16-0575-8_1 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recognizing human activity plays a significant role in the advancements of
human-interaction applications in healthcare, personal fitness, and smart
devices. Many papers presented various techniques for human activity
representation that resulted in distinguishable progress. In this study, we
conduct an extensive literature review on recent, top-performing techniques in
human activity recognition based on wearable sensors. Due to the lack of
standardized evaluation, and to ensure a fair comparison between the
state-of-the-art techniques, we applied a standardized evaluation benchmark on
the state-of-the-art techniques using six publicly available data-sets:
MHealth, USCHAD, UTD-MHAD, WISDM, WHARF, and OPPORTUNITY. Also, we propose an
experimental, improved approach, a hybrid of enhanced handcrafted
features and a neural network architecture, which outperformed the
top-performing techniques under the same standardized evaluation benchmark on
the MHealth, USCHAD, and UTD-MHAD data-sets.
| [
{
"created": "Tue, 5 Jan 2021 17:33:04 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jan 2021 09:19:21 GMT",
"version": "v2"
}
] | 2023-11-22 | [
[
"Abdel-Salam",
"Reem",
""
],
[
"Mostafa",
"Rana",
""
],
[
"Hadhood",
"Mayada",
""
]
] |
2101.01710 | Prune Truong | Prune Truong and Martin Danelljan and Luc Van Gool and Radu Timofte | Learning Accurate Dense Correspondences and When to Trust Them | CVPR 2021 ORAL Code: https://github.com/PruneTruong/PDCNet
Website:https://prunetruong.com/research/pdcnet | IEEE/CVF Conference on Computer Vision and Pattern Recognition
2021, CVPR 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Establishing dense correspondences between a pair of images is an important
and general problem. However, dense flow estimation is often inaccurate in the
case of large displacements or homogeneous regions. For most applications and
down-stream tasks, such as pose estimation, image manipulation, or 3D
reconstruction, it is crucial to know when and where to trust the estimated
matches.
In this work, we aim to estimate a dense flow field relating two images,
coupled with a robust pixel-wise confidence map indicating the reliability and
accuracy of the prediction. We develop a flexible probabilistic approach that
jointly learns the flow prediction and its uncertainty. In particular, we
parametrize the predictive distribution as a constrained mixture model,
ensuring better modelling of both accurate flow predictions and outliers.
Moreover, we develop an architecture and training strategy tailored for robust
and generalizable uncertainty prediction in the context of self-supervised
training. Our approach obtains state-of-the-art results on multiple challenging
geometric matching and optical flow datasets. We further validate the
usefulness of our probabilistic confidence estimation for the task of pose
estimation. Code and models are available at
https://github.com/PruneTruong/PDCNet.
| [
{
"created": "Tue, 5 Jan 2021 18:54:11 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Apr 2021 16:57:01 GMT",
"version": "v2"
}
] | 2021-04-02 | [
[
"Truong",
"Prune",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Timofte",
"Radu",
""
]
] |
2101.01844 | Qiaojun Feng | Qiaojun Feng, Nikolay Atanasov | Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using
Joint 2D-3D Learning | 7 pages, 7 figures. Accepted at ICRA 2021 | 2021 IEEE International Conference on Robotics and Automation
(ICRA), Xi'an, China, pp. 5208-5214 | 10.1109/ICRA48506.2021.9561337 | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses outdoor terrain mapping using overhead images obtained
from an unmanned aerial vehicle. Dense depth estimation from aerial images
during flight is challenging. While feature-based localization and mapping
techniques can deliver real-time odometry and sparse points reconstruction, a
dense environment model is generally recovered offline with significant
computation and storage. This paper develops a joint 2D-3D learning approach to
reconstruct local meshes at each camera keyframe, which can be assembled into a
global environment model. Each local mesh is initialized from sparse depth
measurements. We associate image features with the mesh vertices through camera
projection and apply graph convolution to refine the mesh vertices based on
joint 2-D reprojected depth and 3-D mesh supervision. Quantitative and
qualitative evaluations using real aerial images show the potential of our
method to support environmental monitoring and surveillance applications.
| [
{
"created": "Wed, 6 Jan 2021 02:09:03 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 20:45:33 GMT",
"version": "v2"
}
] | 2022-04-27 | [
[
"Feng",
"Qiaojun",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
2101.02032 | Lu Cheng | Lu Cheng, Kush R. Varshney, Huan Liu | Socially Responsible AI Algorithms: Issues, Purposes, and Challenges | 45 pages, 8 figures | Journal of Artificial Intelligence Research 71 (2021) 1137-1181 | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the current era, people and society have grown increasingly reliant on
artificial intelligence (AI) technologies. AI has the potential to drive us
towards a future in which all of humanity flourishes. It also comes with
substantial risks for oppression and calamity. Discussions about whether we
should (re)trust AI have repeatedly emerged in recent years and in many
quarters, including industry, academia, healthcare, services, and so on.
Technologists and AI researchers have a responsibility to develop trustworthy
AI systems. They have responded with great effort to design more responsible AI
algorithms. However, existing technical solutions are narrow in scope and have
been primarily directed towards algorithms for scoring or classification tasks,
with an emphasis on fairness and unwanted bias. To build long-lasting trust
between AI and human beings, we argue that the key is to think beyond
algorithmic fairness and connect major aspects of AI that potentially cause
AI's indifferent behavior. In this survey, we provide a systematic framework of
Socially Responsible AI Algorithms that aims to examine the subjects of AI
indifference and the need for socially responsible AI algorithms, define the
objectives, and introduce the means by which we may achieve these objectives.
We further discuss how to leverage this framework to improve societal
well-being through protection, information, and prevention/mitigation.
| [
{
"created": "Fri, 1 Jan 2021 17:34:42 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jan 2021 21:18:17 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Mar 2021 20:12:58 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Jun 2021 21:21:36 GMT",
"version": "v4"
},
{
"created": "Sat, 21 Aug 2021 14:59:32 GMT",
"version": "v5"
}
] | 2021-08-24 | [
[
"Cheng",
"Lu",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Liu",
"Huan",
""
]
] |
2101.02115 | Ruben Ohana | Alessandro Cappelli, Ruben Ohana, Julien Launay, Laurent Meunier,
Iacopo Poli, Florent Krzakala | Adversarial Robustness by Design through Analog Computing and Synthetic
Gradients | null | ICASSP 2022 - IEEE International Conference on Acoustics, Speech
and Signal Processing, | 10.1109/ICASSP43922.2022.9746671 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new defense mechanism against adversarial attacks inspired by an
optical co-processor, providing robustness without compromising natural
accuracy in both white-box and black-box settings. This hardware co-processor
performs a nonlinear fixed random transformation, where the parameters are
unknown and impossible to retrieve with sufficient precision for large enough
dimensions. In the white-box setting, our defense works by obfuscating the
parameters of the random projection. Unlike other defenses relying on
obfuscated gradients, we find we are unable to build a reliable backward
differentiable approximation for obfuscated parameters. Moreover, while our
model reaches a good natural accuracy with a hybrid backpropagation - synthetic
gradient method, the same approach is suboptimal if employed to generate
adversarial examples. We find the combination of a random projection and
binarization in the optical system also improves robustness against various
types of black-box attacks. Finally, our hybrid training method builds robust
features against transfer attacks. We demonstrate our approach on a VGG-like
architecture, placing the defense on top of the convolutional features, on
CIFAR-10 and CIFAR-100. Code is available at
https://github.com/lightonai/adversarial-robustness-by-design.
| [
{
"created": "Wed, 6 Jan 2021 16:15:29 GMT",
"version": "v1"
}
] | 2022-10-03 | [
[
"Cappelli",
"Alessandro",
""
],
[
"Ohana",
"Ruben",
""
],
[
"Launay",
"Julien",
""
],
[
"Meunier",
"Laurent",
""
],
[
"Poli",
"Iacopo",
""
],
[
"Krzakala",
"Florent",
""
]
] |
2101.02136 | Vicky Kalogeiton | Manuel J. Marin-Jimenez, Vicky Kalogeiton, Pablo Medina-Suarez, and
Andrew Zisserman | LAEO-Net++: revisiting people Looking At Each Other in videos | 16 pages, 16 Figures. arXiv admin note: substantial text overlap with
arXiv:1906.05261 | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI), 2020 | 10.1109/TPAMI.2020.3048482 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Capturing the 'mutual gaze' of people is essential for understanding and
interpreting the social interactions between them. To this end, this paper
addresses the problem of detecting people Looking At Each Other (LAEO) in video
sequences. For this purpose, we propose LAEO-Net++, a new deep CNN for
determining LAEO in videos. In contrast to previous works, LAEO-Net++ takes
spatio-temporal tracks as input and reasons about the whole track. It consists
of three branches, one for each character's tracked head and one for their
relative position. Moreover, we introduce two new LAEO datasets: UCO-LAEO and
AVA-LAEO. A thorough experimental evaluation demonstrates the ability of
LAEO-Net++ to successfully determine if two people are LAEO and the temporal
window where it happens. Our model achieves state-of-the-art results on the
existing TVHID-LAEO video dataset, significantly outperforming previous
approaches. Finally, we apply LAEO-Net++ to a social network, where we
automatically infer the social relationship between pairs of people based on
the frequency and duration that they LAEO, and show that LAEO can be a useful
tool for guided search of human interactions in videos. The code is available
at https://github.com/AVAuco/laeonetplus.
| [
{
"created": "Wed, 6 Jan 2021 17:06:23 GMT",
"version": "v1"
}
] | 2021-01-07 | [
[
"Marin-Jimenez",
"Manuel J.",
""
],
[
"Kalogeiton",
"Vicky",
""
],
[
"Medina-Suarez",
"Pablo",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
2101.02185 | Seyed Sajjadi | Volkan Ustun, Rajay Kumar, Adam Reilly, Seyed Sajjadi, Andrew Miller | Adaptive Synthetic Characters for Military Training | null | 2020 Interservice/Industry Training, Simulation, and Education
Conference (I/ITSEC) | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Behaviors of the synthetic characters in current military simulations are
limited since they are generally generated by rule-based and reactive
computational models with minimal intelligence. Such computational models
cannot adapt to reflect the experience of the characters, resulting in brittle
intelligence for even the most effective behavior models devised via costly and
labor-intensive processes. Observation-based behavior model adaptation that
leverages machine learning and the experience of synthetic entities in
combination with appropriate prior knowledge can address the issues in the
existing computational behavior models to create a better training experience
in military training simulations. In this paper, we introduce a framework that
aims to create autonomous synthetic characters that can perform coherent
sequences of believable behavior while being aware of human trainees and their
needs within a training simulation. This framework brings together three
mutually complementary components. The first component is a Unity-based
simulation environment - Rapid Integration and Development Environment (RIDE) -
supporting One World Terrain (OWT) models and capable of running and supporting
machine learning experiments. The second is Shiva, a novel multi-agent
reinforcement and imitation learning framework that can interface with a
variety of simulation environments, and that can additionally utilize a variety
of learning algorithms. The final component is the Sigma Cognitive Architecture
that will augment the behavior models with symbolic and probabilistic reasoning
capabilities. We have successfully created proof-of-concept behavior models
leveraging this framework on realistic terrain as an essential step towards
bringing machine learning into military simulations.
| [
{
"created": "Wed, 6 Jan 2021 18:45:48 GMT",
"version": "v1"
}
] | 2021-01-07 | [
[
"Ustun",
"Volkan",
""
],
[
"Kumar",
"Rajay",
""
],
[
"Reilly",
"Adam",
""
],
[
"Sajjadi",
"Seyed",
""
],
[
"Miller",
"Andrew",
""
]
] |
2101.02231 | Seyed Sajjadi | Volkan Ustun, Paul S. Rosenbloom, Seyed Sajjadi, Jeremy Nuttal | Controlling Synthetic Characters in Simulations: A Case for Cognitive
Architectures and Sigma | null | Interservice/Industry Training, Simulation, and Education
Conference (I/ITSEC) 2018 | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulations, along with other similar applications like virtual worlds and
video games, require computational models of intelligence that generate
realistic and credible behavior for the participating synthetic characters.
Cognitive architectures, which are models of the fixed structure underlying
intelligent behavior in both natural and artificial systems, provide a
conceptually valid common basis, as evidenced by the current efforts towards a
standard model of the mind, to generate human-like intelligent behavior for
these synthetic characters. Sigma is a cognitive architecture and system that
strives to combine what has been learned from four decades of independent work
on symbolic cognitive architectures, probabilistic graphical models, and more
recently neural models, under its graphical architecture hypothesis. Sigma
leverages an extended form of factor graphs towards a uniform grand unification
of not only traditional cognitive capabilities but also key non-cognitive
aspects, creating unique opportunities for the construction of new kinds of
cognitive models that possess a Theory-of-Mind and that are perceptual,
autonomous, interactive, affective, and adaptive. In this paper, we will
introduce Sigma along with its diverse capabilities and then use three distinct
proof-of-concept Sigma models to highlight combinations of these capabilities:
(1) Distributional reinforcement learning models in; (2) A pair of adaptive and
interactive agent models that demonstrate rule-based, probabilistic, and social
reasoning; and (3) A knowledge-free exploration model in which an agent
leverages only architectural appraisal variables, namely attention and
curiosity, to locate an item while building up a map in a Unity environment.
| [
{
"created": "Wed, 6 Jan 2021 19:07:36 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Ustun",
"Volkan",
""
],
[
"Rosenbloom",
"Paul S.",
""
],
[
"Sajjadi",
"Seyed",
""
],
[
"Nuttal",
"Jeremy",
""
]
] |
2101.02323 | Vishwesh Nath | Vishwesh Nath, Dong Yang, Bennett A. Landman, Daguang Xu, Holger R.
Roth | Diminishing Uncertainty within the Training Pool: Active Learning for
Medical Image Segmentation | 19 pages, 13 figures, Transactions of Medical Imaging | IEEE Transactions on Medical Imaging, 2020 | 10.1109/TMI.2020.3048055 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active learning is a unique abstraction of machine learning techniques where
the model/algorithm could guide users for annotation of a set of data points
that would be beneficial to the model, unlike passive machine learning. The
primary advantage is that active learning frameworks select data points that
can accelerate the learning process of a model and can reduce the amount of
data needed to achieve full accuracy as compared to a model trained on a
randomly acquired data set. Multiple frameworks for active learning combined
with deep learning have been proposed, and the majority of them are dedicated
to classification tasks. Herein, we explore active learning for the task of
segmentation of medical imaging data sets. We investigate our proposed
framework using two datasets: 1.) MRI scans of the hippocampus, 2.) CT scans of
pancreas and tumors. This work presents a query-by-committee approach for
active learning where a joint optimizer is used for the committee. At the same
time, we propose three new strategies for active learning: 1.) increasing
frequency of uncertain data to bias the training data set; 2.) Using mutual
information among the input images as a regularizer for acquisition to ensure
diversity in the training dataset; 3.) adaptation of Dice log-likelihood for
Stein variational gradient descent (SVGD). The results indicate an improvement
in terms of data reduction by achieving full accuracy while only using 22.69 %
and 48.85 % of the available data for each dataset, respectively.
| [
{
"created": "Thu, 7 Jan 2021 01:55:48 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Nath",
"Vishwesh",
""
],
[
"Yang",
"Dong",
""
],
[
"Landman",
"Bennett A.",
""
],
[
"Xu",
"Daguang",
""
],
[
"Roth",
"Holger R.",
""
]
] |
2101.02359 | Xiangyang Li | Xiangyang Li, Yu Xia, Xiang Long, Zheng Li, Sujian Li | Exploring Text-transformers in AAAI 2021 Shared Task: COVID-19 Fake News
Detection in English | 3rd solution of 'Constraint@AAAI2021 - COVID19 Fake News Detection in
English' | First International Workshop, CONSTRAINT 2021 co-located with AAAI
2021 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we describe our system for the AAAI 2021 shared task of
COVID-19 Fake News Detection in English, where we achieved the 3rd position
with a weighted F1 score of 0.9859 on the test set. Specifically, we propose
an ensemble method of different pre-trained language models such as BERT,
RoBERTa, ERNIE, etc., with various training strategies including
warm-up, learning rate scheduling, and k-fold cross-validation. We also conduct an
extensive analysis of the samples that are not correctly classified. The code
is available at:
https://github.com/archersama/3rd-solution-COVID19-Fake-News-Detection-in-English.
| [
{
"created": "Thu, 7 Jan 2021 04:01:13 GMT",
"version": "v1"
}
] | 2021-09-24 | [
[
"Li",
"Xiangyang",
""
],
[
"Xia",
"Yu",
""
],
[
"Long",
"Xiang",
""
],
[
"Li",
"Zheng",
""
],
[
"Li",
"Sujian",
""
]
] |
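A minimal sketch of the soft-voting step behind such an ensemble, assuming the base transformer models are already fine-tuned and only their predicted class probabilities are combined; the arrays below are random stand-ins for real model outputs, and the metric mirrors the weighted F1 reported above.

```python
# Soft voting over per-model class probabilities, scored with weighted F1.
# The probability arrays are random stand-ins for fine-tuned model outputs.
import numpy as np
from sklearn.metrics import f1_score

def soft_vote(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=1)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                       # fake/real labels
probs = [rng.dirichlet([1.0, 1.0], size=200) for _ in range(3)]
y_pred = soft_vote(probs)
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```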
2101.02442 | Clement Leroy | Cl\'ement Leroy (INTUIDOC), Eric Anquetil (INTUIDOC), Nathalie Girard
(INTUIDOC) | Drift anticipation with forgetting to improve evolving fuzzy system | null | 25th International Conference on Pattern Recognition (ICPR2020),
Jan 2021, Milan, Italy | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Working with a non-stationary stream of data requires for the analysis system
to evolve its model (the parameters as well as the structure) over time. In
particular, concept drifts can occur, which makes it necessary to forget
knowledge that has become obsolete. However, forgetting is subject to the
stability-plasticity dilemma, that is, increasing forgetting improves reactivity
in adapting to new data while reducing the robustness of the system. Based
on a set of inference rules, Evolving Fuzzy Systems-EFS-have proven to be
effective in solving the data stream learning problem. However tackling the
stability-plasticity dilemma is still an open question. This paper proposes a
coherent method to integrate forgetting in Evolving Fuzzy System, based on the
recently introduced notion of concept drift anticipation. The forgetting is
applied with two methods: an exponential forgetting of the premise part and a
deferred directional forgetting of the conclusion part of EFS to preserve the
coherence between both parts. The originality of the approach consists in
applying the forgetting only in the anticipation module and in keeping the EFS
(called principal system) learned without any forgetting. Then, when a drift is
detected in the stream, a selection mechanism is proposed to replace the
obsolete parameters of the principal system with more suitable parameters of
the anticipation module. An evaluation of the proposed methods is carried out
on benchmark online datasets, with a comparison with state-of-the-art online
classifiers (Learn++.NSE, PENsemble, pclass) as well as with the original
system using different forgetting strategies.
| [
{
"created": "Thu, 7 Jan 2021 09:21:27 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Leroy",
"Clément",
"",
"INTUIDOC"
],
[
"Anquetil",
"Eric",
"",
"INTUIDOC"
],
[
"Girard",
"Nathalie",
"",
"INTUIDOC"
]
] |
2101.02480 | Tugdual Ceillier | Alex Goupilleau, Tugdual Ceillier, Marie-Caroline Corbineau | Active learning for object detection in high-resolution satellite images | null | Conference on Artificial Intelligence for Defense, Dec 2020,
Rennes, France | null | null | cs.CV cs.LG cs.NE eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning, the term active learning refers to techniques that aim at
selecting the most useful data to label from a large pool of unlabelled
examples. While supervised deep learning techniques have shown to be
increasingly efficient on many applications, they require a huge number of
labelled examples to reach operational performances. Therefore, the labelling
effort linked to the creation of the datasets required is also increasing. When
working on defense-related remote sensing applications, labelling can be
challenging due to the large areas covered and often requires military experts
who are rare and whose time is primarily dedicated to operational needs.
Limiting the labelling effort is thus of utmost importance. This study aims at
reviewing the most relevant active learning techniques to be used for object
detection on very high resolution imagery and shows an example of the value of
such techniques on a relevant operational use case: aircraft detection.
| [
{
"created": "Thu, 7 Jan 2021 10:57:38 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Goupilleau",
"Alex",
""
],
[
"Ceillier",
"Tugdual",
""
],
[
"Corbineau",
"Marie-Caroline",
""
]
] |
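As a rough illustration of the family of techniques reviewed above (not the specific method evaluated on the aircraft-detection use case), a generic entropy-based uncertainty-sampling step might look as follows, assuming per-sample class probabilities from the current model:

```python
# Generic uncertainty sampling: query the unlabelled samples whose predicted
# class distribution has the highest entropy.
import numpy as np

def most_uncertain(probs, budget):
    """probs: (n_unlabelled, n_classes) predicted probabilities."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

rng = np.random.default_rng(1)
pool_probs = rng.dirichlet([1.0, 1.0, 1.0], size=500)   # stand-in model scores
query_idx = most_uncertain(pool_probs, budget=20)
print("label these next:", query_idx[:5])
```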
2101.02486 | Leonardo Maria Millefiori | Samuele Capobianco, Leonardo M. Millefiori, Nicola Forti, Paolo Braca,
and Peter Willett | Deep Learning Methods for Vessel Trajectory Prediction based on
Recurrent Neural Networks | Accepted for publications in IEEE Transactions on Aerospace and
Electronic Systems, 17 pages, 9 figures | IEEE Transactions on Aerospace and Electronic Systems, vol. 57,
no. 6, pp. 4329-4346, 2021 | 10.1109/TAES.2021.3096873 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven methods open up unprecedented possibilities for maritime
surveillance using Automatic Identification System (AIS) data. In this work, we
explore deep learning strategies using historical AIS observations to address
the problem of predicting future vessel trajectories with a prediction horizon
of several hours. We propose novel sequence-to-sequence vessel trajectory
prediction models based on encoder-decoder recurrent neural networks (RNNs)
that are trained on historical trajectory data to predict future trajectory
samples given previous observations. The proposed architecture combines Long
Short-Term Memory (LSTM) RNNs for sequence modeling to encode the observed data
and generate future predictions with different intermediate aggregation layers
to capture space-time dependencies in sequential data. Experimental results on
vessel trajectories from an AIS dataset made freely available by the Danish
Maritime Authority show the effectiveness of deep-learning methods for
trajectory prediction based on sequence-to-sequence neural networks, which
achieve better performance than baseline approaches based on linear regression
or on the Multi-Layer Perceptron (MLP) architecture. The comparative evaluation
of results shows: i) the superiority of attention pooling over static pooling
for the specific application, and ii) the remarkable performance improvement
that can be obtained with labeled trajectories, i.e., when predictions are
conditioned on a low-level context representation encoded from the sequence of
past observations, as well as on additional inputs (e.g., port of departure or
arrival) about the vessel's high-level intention, which may be available from
AIS.
| [
{
"created": "Thu, 7 Jan 2021 11:05:47 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Jun 2021 11:49:02 GMT",
"version": "v2"
}
] | 2023-01-18 | [
[
"Capobianco",
"Samuele",
""
],
[
"Millefiori",
"Leonardo M.",
""
],
[
"Forti",
"Nicola",
""
],
[
"Braca",
"Paolo",
""
],
[
"Willett",
"Peter",
""
]
] |
2101.02496 | Manuel Lagunas | Manuel Lagunas, Ana Serrano, Diego Gutierrez, Belen Masia | The joint role of geometry and illumination on material recognition | 15 pages, 16 figures, Accepted to the Journal of Vision, 2021 | Journal of Vision February 2021, Vol.21, 2 | 10.1167/jov.21.2.2 | null | cs.CV cs.AI cs.GR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Observing and recognizing materials is a fundamental part of our daily life.
Under typical viewing conditions, we are capable of effortlessly identifying
the objects that surround us and recognizing the materials they are made of.
Nevertheless, understanding the underlying perceptual processes that take place
to accurately discern the visual properties of an object is a long-standing
problem. In this work, we perform a comprehensive and systematic analysis of
how the interplay of geometry, illumination, and their spatial frequencies
affects human performance on material recognition tasks. We carry out
large-scale behavioral experiments where participants are asked to recognize
different reference materials among a pool of candidate samples. In the
different experiments, we carefully sample the information in the frequency
domain of the stimuli. From our analysis, we find significant first-order
interactions between the geometry and the illumination, of both the reference
and the candidates. In addition, we observe that simple image statistics and
higher-order image histograms do not correlate with human performance.
Therefore, we perform a high-level comparison of highly non-linear statistics
by training a deep neural network on material recognition tasks. Our results
show that such models can accurately classify materials, which suggests that
they are capable of defining a meaningful representation of material appearance
from labeled proximal image data. Last, we find preliminary evidence that these
highly non-linear models and humans may use similar high-level factors for
material recognition tasks.
| [
{
"created": "Thu, 7 Jan 2021 11:29:52 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Feb 2021 12:35:25 GMT",
"version": "v2"
}
] | 2021-02-05 | [
[
"Lagunas",
"Manuel",
""
],
[
"Serrano",
"Ana",
""
],
[
"Gutierrez",
"Diego",
""
],
[
"Masia",
"Belen",
""
]
] |
2101.02522 | Vincent Aranega | Ronie Salgado, Marcus Denker (RMOD), St\'ephane Ducasse (RMOD), Anne
Etien (RMOD), Vincent Aranega (RMOD) | Towards a Smart Data Processing and Storage Model | null | IWST20: International Workshop on Smalltalk Technologies, Sep
2020, Novi Sad, Serbia | null | null | cs.CL cs.PL cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several domains it is crucial to store and manipulate data whose origin
needs to be completely traceable to guarantee the consistency, trustworthiness
and reliability of the data itself, typically for ethical and legal reasons. It
is also important to guarantee that such properties are also carried further
when such data is composed and processed into new data. In this article we
present the main requirements and theoretical problems that arise in the
design of a system supporting data with such capabilities. We present an
architecture for implementing a system as well as a prototype developed in
Pharo.
| [
{
"created": "Thu, 7 Jan 2021 12:52:11 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Salgado",
"Ronie",
"",
"RMOD"
],
[
"Denker",
"Marcus",
"",
"RMOD"
],
[
"Ducasse",
"Stéphane",
"",
"RMOD"
],
[
"Etien",
"Anne",
"",
"RMOD"
],
[
"Aranega",
"Vincent",
"",
"RMOD"
]
] |
2101.02559 | Lois Orosa | Muhammad Shafique, Mahum Naseer, Theocharis Theocharides, Christos
Kyrkou, Onur Mutlu, Lois Orosa, Jungwook Choi | Robust Machine Learning Systems: Challenges, Current Trends,
Perspectives, and the Road Ahead | Final version appears in https://ieeexplore.ieee.org/document/8979377 | IEEE Design and Test (Volume: 37, Issue: 2, April 2020): 30-57 | 10.1109/MDAT.2020.2971217 | null | cs.CR cs.AI cs.AR cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning (ML) techniques have been rapidly adopted by smart
Cyber-Physical Systems (CPS) and Internet-of-Things (IoT) due to their powerful
decision-making capabilities. However, they are vulnerable to various security
and reliability threats, at both hardware and software levels, that compromise
their accuracy. These threats get aggravated in emerging edge ML devices that
have stringent constraints in terms of resources (e.g., compute, memory,
power/energy), and that therefore cannot employ costly security and reliability
measures. Security, reliability, and vulnerability mitigation techniques span
from network security measures to hardware protection, with an increased
interest towards formal verification of trained ML models.
This paper summarizes the prominent vulnerabilities of modern ML systems,
highlights successful defenses and mitigation techniques against these
vulnerabilities, both at the cloud (i.e., during the ML training phase) and
edge (i.e., during the ML inference stage), discusses the implications of a
resource-constrained design on the reliability and security of the system,
identifies verification methodologies to ensure correct system behavior, and
describes open research challenges for building secure and reliable ML systems
at both the edge and the cloud.
| [
{
"created": "Mon, 4 Jan 2021 20:06:56 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Shafique",
"Muhammad",
""
],
[
"Naseer",
"Mahum",
""
],
[
"Theocharides",
"Theocharis",
""
],
[
"Kyrkou",
"Christos",
""
],
[
"Mutlu",
"Onur",
""
],
[
"Orosa",
"Lois",
""
],
[
"Choi",
"Jungwook",
""
]
] |
2101.02647 | Juana Valeria Hurtado | Juana Valeria Hurtado, Laura Londo\~no, and Abhinav Valada | From Learning to Relearning: A Framework for Diminishing Bias in Social
Robot Navigation | null | Frontiers in Robotics and AI, 2021 | 10.3389/frobt.2021.650325 | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The exponentially increasing advances in robotics and machine learning are
facilitating the transition of robots from being confined to controlled
industrial spaces to performing novel everyday tasks in domestic and urban
environments. In order to make the presence of robots safe as well as
comfortable for humans, and to facilitate their acceptance in public
environments, they are often equipped with social abilities for navigation and
interaction. Socially compliant robot navigation is increasingly being learned
from human observations or demonstrations. We argue that these techniques that
typically aim to mimic human behavior do not guarantee fair behavior. As a
consequence, social navigation models can replicate, promote, and amplify
societal unfairness such as discrimination and segregation. In this work, we
investigate a framework for diminishing bias in social robot navigation models
so that robots are equipped with the capability to plan as well as adapt their
paths based on both physical and social demands. Our proposed framework
consists of two components: \textit{learning} which incorporates social context
into the learning process to account for safety and comfort, and
\textit{relearning} to detect and correct potentially harmful outcomes before
the onset. We provide both technological and societal analysis using three
diverse case studies in different social scenarios of interaction. Moreover, we
present ethical implications of deploying robots in social environments and
propose potential solutions. Through this study, we highlight the importance
and advocate for fairness in human-robot interactions in order to promote more
equitable social relationships, roles, and dynamics and consequently positively
influence our society.
| [
{
"created": "Thu, 7 Jan 2021 17:42:35 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 18:42:23 GMT",
"version": "v2"
}
] | 2021-03-04 | [
[
"Hurtado",
"Juana Valeria",
""
],
[
"Londoño",
"Laura",
""
],
[
"Valada",
"Abhinav",
""
]
] |
2101.02767 | Joris Gu\'erin | Joris Guerin, Stephane Thiery, Eric Nyiri, Olivier Gibaru, Byron Boots | Combining pretrained CNN feature extractors to enhance clustering of
complex natural images | 21 pages, 16 figures, 10 tables, preprint of our paper published in
Neurocomputing | Guerin, J., Thiery, S., Nyiri, E., Gibaru, O., & Boots, B. (2021).
Combining pretrained CNN feature extractors to enhance clustering of complex
natural images. Neurocomputing, 423, 551-571 | 10.1016/j.neucom.2020.10.068 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, a common starting point for solving complex unsupervised image
classification tasks is to use generic features, extracted with deep
Convolutional Neural Networks (CNN) pretrained on a large and versatile dataset
(ImageNet). However, in most research, the CNN architecture for feature
extraction is chosen arbitrarily, without justification. This paper aims at
providing insight on the use of pretrained CNN features for image clustering
(IC). First, extensive experiments are conducted and show that, for a given
dataset, the choice of the CNN architecture for feature extraction has a huge
impact on the final clustering. These experiments also demonstrate that proper
extractor selection for a given IC task is difficult. To solve this issue, we
propose to rephrase the IC problem as a multi-view clustering (MVC) problem
that considers features extracted from different architectures as different
"views" of the same data. This approach is based on the assumption that
information contained in the different CNN may be complementary, even when
pretrained on the same data. We then propose a multi-input neural network
architecture that is trained end-to-end to solve the MVC problem effectively.
This approach is tested on nine natural image datasets, and produces
state-of-the-art results for IC.
| [
{
"created": "Thu, 7 Jan 2021 21:23:04 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Guerin",
"Joris",
""
],
[
"Thiery",
"Stephane",
""
],
[
"Nyiri",
"Eric",
""
],
[
"Gibaru",
"Olivier",
""
],
[
"Boots",
"Byron",
""
]
] |
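A minimal sketch of the naive multi-view baseline implied above, assuming precomputed feature matrices from two different pretrained CNNs (random stand-ins here) that are simply concatenated and clustered; the paper's actual contribution is an end-to-end multi-input network rather than this concatenation.

```python
# Naive multi-view baseline: concatenate feature "views" from different
# pretrained CNNs (random stand-ins below) and cluster the joint features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(2)
view_a = rng.normal(size=(300, 512))     # e.g. features from one architecture
view_b = rng.normal(size=(300, 2048))    # e.g. features from another
joint = np.hstack([normalize(view_a), normalize(view_b)])
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(joint)
print("cluster sizes:", np.bincount(labels))
```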
2101.02780 | Tanujay Saha | Tanujay Saha, Najwa Aaraj, Neel Ajjarapu, Niraj K. Jha | SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things
and Cyber-Physical Systems based on Machine Learning | This article has been accepted in IEEE Transactions on Emerging
Topics in Computing. 17 pages, 12 figures, IEEE copyright | IEEE Transactions on Emerging Topics in Computing, 2021 | 10.1109/TETC.2021.3050733 | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are
increasingly being deployed across multiple functionalities, ranging from
healthcare devices and wearables to critical infrastructures, e.g., nuclear
power plants, autonomous vehicles, smart cities, and smart homes. These devices
are inherently not secure across their comprehensive software, hardware, and
network stacks, thus presenting a large attack surface that can be exploited by
hackers. In this article, we present an innovative technique for detecting
unknown system vulnerabilities, managing these vulnerabilities, and improving
incident response when such vulnerabilities are exploited. The novelty of this
approach lies in extracting intelligence from known real-world CPS/IoT attacks,
representing them in the form of regular expressions, and employing machine
learning (ML) techniques on this ensemble of regular expressions to generate
new attack vectors and security vulnerabilities. Our results show that 10 new
attack vectors and 122 new vulnerability exploits can be successfully generated
that have the potential to exploit a CPS or an IoT ecosystem. The ML
methodology achieves an accuracy of 97.4% and enables us to predict these
attacks efficiently with an 87.2% reduction in the search space. We demonstrate
the application of our method to the hacking of the in-vehicle network of a
connected car. To defend against the known attacks and possible novel exploits,
we discuss a defense-in-depth mechanism for various classes of attacks and the
classification of data targeted by such attacks. This defense mechanism
optimizes the cost of security measures based on the sensitivity of the
protected resource, thus incentivizing its adoption in real-world CPS/IoT by
cybersecurity practitioners.
| [
{
"created": "Thu, 7 Jan 2021 22:01:30 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Oct 2022 22:02:25 GMT",
"version": "v2"
}
] | 2022-10-21 | [
[
"Saha",
"Tanujay",
""
],
[
"Aaraj",
"Najwa",
""
],
[
"Ajjarapu",
"Neel",
""
],
[
"Jha",
"Niraj K.",
""
]
] |
2101.02797 | Nisreen Ali | Nisreen AbdAllah and Serestina Viriri | Off-Line Arabic Handwritten Words Segmentation using Morphological
Operators | 16 pages,27 figures | Signal & Image Processing: An International Journal (SIPIJ)
Vol.11, No.6, December 2020 | 10.5121/sipij.2020.11602 | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The main aim of this study is the assessment and discussion of a model for
hand-written Arabic through segmentation. The framework is proposed based on
three steps: pre-processing, segmentation, and evaluation. In the
pre-processing step, morphological operators are applied for Connecting Gaps
(CGs) in written words. Gaps happen when the pen lifts off during writing, when
documents are scanned, or when images are converted to binary form. In the
segmentation step, the small diacritics are first removed, then a connected
component is bounded to segment offline words. A large dataset was utilized in
the proposed model to cover a variety of handwriting styles, so as to be more
compatible with real-life applications. Consequently, in the automatic
evaluation stage, 1,131 images were randomly selected from the IESK-ArDB
database and then segmented into sub-words. After small gaps were connected, the
model performance evaluation reached 88% against the standard ground truth
of the database. The proposed model achieved the highest accuracy when compared
with the related works.
| [
{
"created": "Thu, 7 Jan 2021 23:38:53 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"AbdAllah",
"Nisreen",
""
],
[
"Viriri",
"Serestina",
""
]
] |
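The gap-connecting idea can be sketched with a morphological closing on a toy binary stroke image; the structuring element and the scipy implementation here are assumptions, since the abstract does not specify the exact operators used.

```python
# Morphological closing on a toy binary stroke image: a small gap between
# two segments of the same stroke is bridged by dilation followed by erosion.
import numpy as np
from scipy.ndimage import binary_closing

stroke = np.zeros((10, 20), dtype=bool)
stroke[5, 2:8] = True
stroke[5, 10:16] = True                  # same stroke, gap at columns 8-9
closed = binary_closing(stroke, structure=np.ones((1, 5)))
print("gap filled:", bool(closed[5, 8] and closed[5, 9]))
```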
2101.02991 | Pathan Faisal Khan | Faisal Khan and Debdeep Bose | Artificial Intelligence enabled Smart Learning | 4 | ETH Learning and Teaching Journal: ICED 2020 Proceedings (2020)
153-156 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) is a discipline of computer science that deals
with machine intelligence. It is essential to bring AI into the context of
learning because it helps in analysing the enormous amounts of data that is
collected from individual students, teachers and academic staff. The major
priorities of implementing AI in education are making innovative use of
existing digital technologies for learning, and teaching practices that
significantly improve traditional educational methods. The main problem with
traditional learning is that it cannot be suited to every student in class.
Some students may grasp the concepts well, while some may have difficulties in
understanding them and some may be more auditory or visual learners. The World
Bank report on education has indicated that the learning gap created by this
problem causes many students to drop out (World Development Report, 2018).
Personalised learning has been able to solve this grave problem.
| [
{
"created": "Fri, 8 Jan 2021 12:49:33 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Khan",
"Faisal",
""
],
[
"Bose",
"Debdeep",
""
]
] |
2101.03013 | Iknoor Singh | Iknoor Singh, Carolina Scarton, Kalina Bontcheva | Multistage BiCross encoder for multilingual access to COVID-19 health
information | null | PLOS ONE 2021 | 10.1371/journal.pone.0256874 | null | cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Coronavirus (COVID-19) pandemic has led to a rapidly growing 'infodemic'
of health information online. This has motivated the need for accurate semantic
search and retrieval of reliable COVID-19 information across millions of
documents, in multiple languages. To address this challenge, this paper
proposes a novel high precision and high recall neural Multistage BiCross
encoder approach. It is a sequential three-stage ranking pipeline which uses
the Okapi BM25 retrieval algorithm and transformer-based bi-encoder and
cross-encoder to effectively rank the documents with respect to the given
query. We present experimental results from our participation in the
Multilingual Information Access (MLIA) shared task on COVID-19 multilingual
semantic search. The independently evaluated MLIA results validate our approach
and demonstrate that it outperforms other state-of-the-art approaches according
to nearly all evaluation metrics in cases of both monolingual and bilingual
runs.
| [
{
"created": "Fri, 8 Jan 2021 13:59:26 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jan 2021 20:38:23 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Aug 2021 15:49:10 GMT",
"version": "v3"
}
] | 2022-05-31 | [
[
"Singh",
"Iknoor",
""
],
[
"Scarton",
"Carolina",
""
],
[
"Bontcheva",
"Kalina",
""
]
] |
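A sketch of only the first stage of such a ranking pipeline, Okapi BM25 retrieval over a toy corpus using the rank_bm25 package; the bi-encoder and cross-encoder re-ranking stages are omitted, and the documents below are invented.

```python
# First-stage Okapi BM25 retrieval over a toy corpus (rank_bm25 package);
# the bi-encoder and cross-encoder re-ranking stages are omitted.
from rank_bm25 import BM25Okapi

docs = ["masks reduce transmission of covid-19",
        "vaccines are effective against severe illness",
        "hand washing limits the spread of infection"]
bm25 = BM25Okapi([d.split() for d in docs])
query = "are masks effective against covid-19".split()
scores = bm25.get_scores(query)
print(max(zip(scores, docs)))            # best-scoring document
```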
2101.03037 | Bobak Kiani | Bobak Toussi Kiani, Giacomo De Palma, Milad Marvian, Zi-Wen Liu, Seth
Lloyd | Learning quantum data with the quantum Earth Mover's distance | null | Quantum Science and Technology 7(4), 045002 (2022) | 10.1088/2058-9565/ac79c9 | null | quant-ph cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantifying how far the output of a learning algorithm is from its target is
an essential task in machine learning. However, in quantum settings, the loss
landscapes of commonly used distance metrics often produce undesirable outcomes
such as poor local minima and exponentially decaying gradients. To overcome
these obstacles, we consider here the recently proposed quantum earth mover's
(EM) or Wasserstein-1 distance as a quantum analog to the classical EM
distance. We show that the quantum EM distance possesses unique properties, not
found in other commonly used quantum distance metrics, that make quantum
learning more stable and efficient. We propose a quantum Wasserstein generative
adversarial network (qWGAN) which takes advantage of the quantum EM distance
and provides an efficient means of performing learning on quantum data. We
provide examples where our qWGAN is capable of learning a diverse set of
quantum data with only resources polynomial in the number of qubits.
| [
{
"created": "Fri, 8 Jan 2021 14:33:19 GMT",
"version": "v1"
},
{
"created": "Mon, 16 May 2022 13:14:46 GMT",
"version": "v2"
}
] | 2022-07-07 | [
[
"Kiani",
"Bobak Toussi",
""
],
[
"De Palma",
"Giacomo",
""
],
[
"Marvian",
"Milad",
""
],
[
"Liu",
"Zi-Wen",
""
],
[
"Lloyd",
"Seth",
""
]
] |
2101.03154 | Fanjie Kong | Fanjie Kong, Xiao-yang Liu, Ricardo Henao | Quantum Tensor Network in Machine Learning: An Application to Tiny
Object Classification | 8 pages, 7 figures | https://tensorworkshop.github.io/NeurIPS2020/CFP.html | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tiny object classification problem exists in many machine learning
applications like medical imaging or remote sensing, where the object of
interest usually occupies a small region of the whole image. It is challenging
to design an efficient machine learning model for tiny objects of
interest. Current neural network structures are unable to deal with tiny objects
efficiently because they are mainly developed for images featuring large-scale
objects. However, in quantum physics, there is a strong theoretical
foundation guiding us to analyze the target function for image classification
with regard to a specific object size ratio. In our work, we apply Tensor Networks
to solve this challenging machine learning problem. First, we summarize the
previous work that connects quantum spin models to image classification and
bring the theory into the scenario of tiny object classification. Second, we
propose using a 2D multi-scale entanglement renormalization ansatz (MERA) to
classify tiny objects in images. In the end, our experimental results indicate
that tensor network models are effective for the tiny object classification
problem and can potentially beat the state of the art. Our code will be available
online at https://github.com/timqqt/MERA_Image_Classification.
| [
{
"created": "Fri, 8 Jan 2021 18:33:52 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Kong",
"Fanjie",
""
],
[
"Liu",
"Xiao-yang",
""
],
[
"Henao",
"Ricardo",
""
]
] |
2101.03169 | Wen Liu | Maohan Liang, Ryan Wen Liu, Shichen Li, Zhe Xiao, Xin Liu, Feng Lu | An Unsupervised Learning Method with Convolutional Auto-Encoder for
Vessel Trajectory Similarity Computation | 22 pages, 16 figures | Ocean Engineering, 2021 | 10.1016/j.oceaneng.2021.108803 | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | To achieve reliable mining results for massive vessel trajectories, one of
the most important challenges is how to efficiently compute the similarities
between different vessel trajectories. The computation of vessel trajectory
similarity has recently attracted increasing attention in the maritime data
mining research community. However, traditional shape- and warping-based
methods often suffer from several drawbacks such as high computational cost and
sensitivity to unwanted artifacts and non-uniform sampling rates, etc. To
eliminate these drawbacks, we propose an unsupervised learning method which
automatically extracts low-dimensional features through a convolutional
auto-encoder (CAE). In particular, we first generate the informative trajectory
images by remapping the raw vessel trajectories into two-dimensional matrices
while maintaining the spatio-temporal properties. Based on the massive vessel
trajectories collected, the CAE can learn the low-dimensional representations
of informative trajectory images in an unsupervised manner. The trajectory
similarity is finally equivalent to efficiently computing the similarities
between the learned low-dimensional features, which strongly correlate with the
raw vessel trajectories. Comprehensive experiments on realistic data sets have
demonstrated that the proposed method largely outperforms traditional
trajectory similarity computation methods in terms of efficiency and
effectiveness. The high-quality trajectory clustering performance could also be
guaranteed according to the CAE-based trajectory similarity computation
results.
| [
{
"created": "Sun, 10 Jan 2021 04:42:11 GMT",
"version": "v1"
}
] | 2021-06-11 | [
[
"Liang",
"Maohan",
""
],
[
"Liu",
"Ryan Wen",
""
],
[
"Li",
"Shichen",
""
],
[
"Xiao",
"Zhe",
""
],
[
"Liu",
"Xin",
""
],
[
"Lu",
"Feng",
""
]
] |
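The remapping of raw vessel trajectories into two-dimensional matrices could be sketched roughly as below, with an assumed grid size and bounding box; the paper's remapping additionally preserves spatio-temporal properties before the convolutional auto-encoder is applied.

```python
# Remap a raw (lon, lat) track into a fixed-size 2D "trajectory image";
# grid size and bounding box are illustrative choices, not paper values.
import numpy as np

def trajectory_to_image(points, bbox, size=64):
    """points: (n, 2) array of (lon, lat); bbox: (lon_min, lat_min, lon_max, lat_max)."""
    lon_min, lat_min, lon_max, lat_max = bbox
    cols = (points[:, 0] - lon_min) / (lon_max - lon_min) * (size - 1)
    rows = (points[:, 1] - lat_min) / (lat_max - lat_min) * (size - 1)
    img = np.zeros((size, size), dtype=np.float32)
    img[np.clip(rows.astype(int), 0, size - 1),
        np.clip(cols.astype(int), 0, size - 1)] = 1.0
    return img

rng = np.random.default_rng(3)
track = np.cumsum(rng.normal(scale=0.005, size=(200, 2)), axis=0) + [103.8, 1.3]
image = trajectory_to_image(track, bbox=(103.5, 1.0, 104.1, 1.6))
print("occupied cells:", int(image.sum()))
```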
2101.03198 | Badri Narayanan | Badri Narayanan, Mohamed Saadeldin, Paul Albert, Kevin McGuinness, and
Brian Mac Namee | Extracting Pasture Phenotype and Biomass Percentages using Weakly
Supervised Multi-target Deep Learning on a Small Dataset | null | Irish Machine Vision and Image Processing Conference (2020) 21-28 | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The dairy industry uses clover and grass as fodder for cows. Accurate
estimation of grass and clover biomass yield enables smart decisions in
optimizing fertilization and seeding density, resulting in increased
productivity and positive environmental impact. Grass and clover are usually
planted together, since clover is a nitrogen-fixing plant that brings nutrients
to the soil. Adjusting the right percentages of clover and grass in a field
reduces the need for external fertilization. Existing approaches for estimating
the grass-clover composition of a field are expensive and time consuming -
random samples of the pasture are clipped and then the components are
physically separated to weigh and calculate percentages of dry grass, clover
and weeds in each sample. There is growing interest in developing novel deep
learning based approaches to non-destructively extract pasture phenotype
indicators and biomass yield predictions of different plant species from
agricultural imagery collected from the field. Providing these indicators and
predictions from images alone remains a significant challenge. Heavy occlusions
in the dense mixture of grass, clover and weeds make it difficult to estimate
each component accurately. Moreover, although supervised deep learning models
perform well with large datasets, it is tedious to acquire large and diverse
collections of field images with precise ground truth for different biomass
yields. In this paper, we demonstrate that applying data augmentation and
transfer learning is effective in predicting multi-target biomass percentages
of different plant species, even with a small training dataset. The scheme
proposed in this paper used a training set of only 261 images and provided
predictions of biomass percentages of grass, clover, white clover, red clover,
and weeds with mean absolute error of 6.77%, 6.92%, 6.21%, 6.89%, and 4.80%
respectively.
| [
{
"created": "Fri, 8 Jan 2021 19:41:46 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Narayanan",
"Badri",
""
],
[
"Saadeldin",
"Mohamed",
""
],
[
"Albert",
"Paul",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"Mac Namee",
"Brian",
""
]
] |
2101.03221 | Stefano Martina | Stefano Martina, Stefano Gherardini, Filippo Caruso | Machine learning classification of non-Markovian noise disturbing
quantum dynamics | 19 pages, 3 figures, 3 tables; v3: Changed title and improved
presentation of the results | Physica Scripta 98 (3), 035104 (2023) | 10.1088/1402-4896/acb39b | null | quant-ph cond-mat.dis-nn cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | In this paper machine learning and artificial neural network models are
proposed for the classification of external noise sources affecting a given
quantum dynamics. For this purpose, we train and then validate support vector
machine, multi-layer perceptron and recurrent neural network models with
different complexity and accuracy, to solve supervised binary classification
problems. As a result, we demonstrate the high efficacy of such tools in
classifying noisy quantum dynamics using simulated data sets from different
realizations of the quantum system dynamics. In addition, we show that for a
successful classification one just needs to measure, in a sequence of discrete
time instants, the probabilities that the analysed quantum system is in one of
the allowed positions or energy configurations. Albeit the training of machine
learning models is here performed on synthetic data, our approach is expected
to find application in experimental schemes, as e.g. for the noise benchmarking
of noisy intermediate-scale quantum devices.
| [
{
"created": "Fri, 8 Jan 2021 20:56:56 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2022 12:49:06 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Feb 2023 11:23:13 GMT",
"version": "v3"
}
] | 2023-02-17 | [
[
"Martina",
"Stefano",
""
],
[
"Gherardini",
"Stefano",
""
],
[
"Caruso",
"Filippo",
""
]
] |
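A sketch of the supervised-classification setup described above, with sequences of measured probabilities used as feature vectors for an SVM; the two synthetic signal classes below are invented surrogates, not the paper's simulated quantum dynamics.

```python
# Binary classification of synthetic probability time-series with an SVM;
# the two signal classes are invented surrogates for different noise sources.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 30)
class_a = 0.5 + 0.4 * np.exp(-3 * t) * np.cos(12 * t) + rng.normal(0, 0.02, (200, t.size))
class_b = 0.5 + 0.4 * np.exp(-6 * t) * np.cos(12 * t) + rng.normal(0, 0.02, (200, t.size))
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
print("test accuracy:", SVC().fit(X_tr, y_tr).score(X_te, y_te))
```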
2101.03553 | Sayar Ghosh Roy | Sayar Ghosh Roy, Nikhil Pinnaparaju, Risubh Jain, Manish Gupta,
Vasudeva Varma | Summaformers @ LaySumm 20, LongSumm 20 | Proceedings of the First Workshop on Scholarly Document Processing
(SDP) at EMNLP 2020 | In Proceedings of the First Workshop on Scholarly Document
Processing, pages 336 - 343, 2020, Online. Association for Computational
Linguistics | 10.18653/v1/2020.sdp-1.39 | IIIT/TR/2020/75 | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automatic text summarization has been widely studied as an important task in
natural language processing. Traditionally, various feature engineering and
machine learning based systems have been proposed for extractive as well as
abstractive text summarization. Recently, deep learning based, specifically
Transformer-based systems have been immensely popular. Summarization is a
cognitively challenging task - extracting summary worthy sentences is
laborious, and expressing semantics in brief when doing abstractive
summarization is complicated. In this paper, we specifically look at the
problem of summarizing scientific research papers from multiple domains. We
differentiate between two types of summaries, namely, (a) LaySumm: A very short
summary that captures the essence of the research paper in layman terms
restricting overtly specific technical jargon and (b) LongSumm: A much longer
detailed summary aimed at providing specific insights into various ideas
touched upon in the paper. While leveraging latest Transformer-based models,
our systems are simple, intuitive and based on how specific paper sections
contribute to human summaries of the two types described above. Evaluations
against gold standard summaries using ROUGE metrics prove the effectiveness of
our approach. On blind test corpora, our system ranks first and third for the
LongSumm and LaySumm tasks respectively.
| [
{
"created": "Sun, 10 Jan 2021 13:48:12 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Roy",
"Sayar Ghosh",
""
],
[
"Pinnaparaju",
"Nikhil",
""
],
[
"Jain",
"Risubh",
""
],
[
"Gupta",
"Manish",
""
],
[
"Varma",
"Vasudeva",
""
]
] |
2101.03678 | Yan Qin | Xuewen Zhang, Yan Qin, Chau Yuen (Fellow IEEE), Lahiru Jayasinghe, and
Xiang Liu | Time-Series Regeneration with Convolutional Recurrent Generative
Adversarial Network for Remaining Useful Life Estimation | null | This paper has been accepted by IEEE Transactions on Industrial
Informatics in Dec. 2020 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | For health prognostic task, ever-increasing efforts have been focused on
machine learning-based methods, which are capable of yielding accurate
remaining useful life (RUL) estimation for industrial equipment or components
without exploring the degradation mechanism. A prerequisite for the
success of these methods is a wealth of run-to-failure data; however, such data
may be insufficient in practice. That is, conducting a substantial number of
destructive experiments is not only costly but may also cause catastrophic
consequences. Out of this consideration, an enhanced RUL
framework focusing on data self-generation is put forward for both non-cyclic
and cyclic degradation patterns for the first time. It is designed to enrich
data in a data-driven way, generating realistic time-series to enhance
current RUL methods. First, high-quality data generation is ensured through the
proposed convolutional recurrent generative adversarial network (CR-GAN), which
adopts a two-channel fusion convolutional recurrent neural network. Next, a
hierarchical framework is proposed to combine generated data into current RUL
estimation methods. Finally, the efficacy of the proposed method is verified
through both non-cyclic and cyclic degradation systems. With the enhanced RUL
framework, an aero-engine system following non-cyclic degradation has been
tested using three typical RUL models. State-of-the-art RUL estimation results are
achieved by enhancing capsule network with generated time-series. Specifically,
estimation errors evaluated by the index score function have been reduced by
21.77%, and 32.67% for the two employed operating conditions, respectively.
Besides, the estimation error is reduced to zero for the Lithium-ion battery
system, which presents cyclic degradation.
| [
{
"created": "Mon, 11 Jan 2021 02:44:34 GMT",
"version": "v1"
}
] | 2021-01-13 | [
[
"Zhang",
"Xuewen",
"",
"Fellow IEEE"
],
[
"Qin",
"Yan",
"",
"Fellow IEEE"
],
[
"Yuen",
"Chau",
"",
"Fellow IEEE"
],
[
"Jayasinghe",
"Lahiru",
""
],
[
"Liu",
"Xiang",
""
]
] |
2101.03916 | Sourav Ghosh | Sourav Ghosh, Sourabh Vasant Gothe, Chandramouli Sanchi, Barath Raj
Kandur Raja | edATLAS: An Efficient Disambiguation Algorithm for Texting in Languages
with Abugida Scripts | Published in 2021 IEEE 15th International Conference on Semantic
Computing (ICSC) | 2021 IEEE 15th International Conference on Semantic Computing
(ICSC), Laguna Hills, CA, USA, 2021, pp. 325-332 | 10.1109/ICSC50631.2021.00061 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Abugida refers to a phonogram writing system where each syllable is
represented using a single consonant or typographic ligature, along with a
default vowel or optional diacritic(s) to denote other vowels. However, texting
in these languages has some unique challenges in spite of the advent of devices
with soft keyboard supporting custom key layouts. The number of characters in
these languages is large enough to require characters to be spread over
multiple views in the layout. Having to switch between views many times to type
a single word hinders the natural thought process. This prevents popular usage
of native keyboard layouts. On the other hand, supporting romanized scripts
(native words transcribed using Latin characters) with language model based
suggestions is also set back by the lack of uniform romanization rules.
To this end, we propose a disambiguation algorithm and showcase its
usefulness in two novel mutually non-exclusive input methods for languages
natively using the abugida writing system: (a) disambiguation of ambiguous
input for abugida scripts, and (b) disambiguation of word variants in romanized
scripts. We benchmark these approaches using public datasets, and show an
improvement in typing speed by 19.49%, 25.13%, and 14.89%, in Hindi, Bengali,
and Thai, respectively, using Ambiguous Input, owing to the human ease of
locating keys combined with the efficiency of our inference method. Our Word
Variant Disambiguation (WDA) maps valid variants of romanized words, previously
treated as Out-of-Vocab, to a vocabulary of 100k words with high accuracy,
leading to an increase in Error Correction F1 score by 10.03% and Next Word
Prediction (NWP) by 62.50% on average.
| [
{
"created": "Tue, 5 Jan 2021 03:16:34 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 19:07:01 GMT",
"version": "v2"
}
] | 2021-03-31 | [
[
"Ghosh",
"Sourav",
""
],
[
"Gothe",
"Sourabh Vasant",
""
],
[
"Sanchi",
"Chandramouli",
""
],
[
"Raja",
"Barath Raj Kandur",
""
]
] |
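A toy sketch of disambiguating ambiguous key input against a vocabulary, in the spirit of the first input method above; the key groups and word list are invented (Latin letters rather than an abugida script).

```python
# Expand ambiguous keypresses into candidate strings and keep only known
# words. Key groups and vocabulary are invented (Latin letters, not abugida).
from itertools import product

key_to_chars = {"1": "abc", "2": "def", "3": "ghi"}     # hypothetical key groups
vocabulary = {"bad", "bag", "dig", "had"}

def disambiguate(key_seq):
    expansions = product(*(key_to_chars[k] for k in key_seq))
    return [w for w in ("".join(chars) for chars in expansions) if w in vocabulary]

print(disambiguate("112"))    # the keys 1-1-2 can spell "bad", among others
```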
2101.03929 | Shaofei Huang | Shaofei Huang, Si Liu, Tianrui Hui, Jizhong Han, Bo Li, Jiashi Feng
and Shuicheng Yan | ORDNet: Capturing Omni-Range Dependencies for Scene Parsing | Published at TIP | IEEE Transactions on Image Processing, 2020, 29: 8251-8263 | 10.1109/TIP.2020.3013142 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning to capture dependencies between spatial positions is essential to
many visual tasks, especially the dense labeling problems like scene parsing.
Existing methods can effectively capture long-range dependencies with the
self-attention mechanism, while short-range ones are captured by local
convolution. However, there is still a large gap between long-range and
short-range dependencies, which
largely reduces the models' flexibility in application to diverse spatial
scales and relationships in complicated natural scene images. To fill such a
gap, we develop a Middle-Range (MR) branch to capture middle-range dependencies
by restricting self-attention into local patches. Also, we observe that the
spatial regions which have large correlations with others can be emphasized to
exploit long-range dependencies more accurately, and thus propose a Reweighed
Long-Range (RLR) branch. Based on the proposed MR and RLR branches, we build an
Omni-Range Dependencies Network (ORDNet) which can effectively capture short-,
middle- and long-range dependencies. Our ORDNet is able to extract more
comprehensive context information and well adapt to complex spatial variance in
scene images. Extensive experiments show that our proposed ORDNet outperforms
previous state-of-the-art methods on three scene parsing benchmarks including
PASCAL Context, COCO Stuff and ADE20K, demonstrating the superiority of
capturing omni-range dependencies in deep models for scene parsing task.
| [
{
"created": "Mon, 11 Jan 2021 14:51:11 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Huang",
"Shaofei",
""
],
[
"Liu",
"Si",
""
],
[
"Hui",
"Tianrui",
""
],
[
"Han",
"Jizhong",
""
],
[
"Li",
"Bo",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
2101.03963 | Sourav Ghosh | Sourabh Vasant Gothe, Sourav Ghosh, Sharmila Mani, Guggilla Bhanodai,
Ankur Agarwal, Chandramouli Sanchi | Language Detection Engine for Multilingual Texting on Mobile Devices | 2020 IEEE 14th International Conference on Semantic Computing (ICSC).
Accessible at https://ieeexplore.ieee.org/document/9031474 | 2020 IEEE 14th International Conference on Semantic Computing
(ICSC), San Diego, CA, USA, 2020, pp. 279-286 | 10.1109/ICSC.2020.00057 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | More than 2 billion mobile users worldwide type in multiple languages in the
soft keyboard. On a monolingual keyboard, 38% of falsely auto-corrected words
are valid in another language. This can be easily avoided by detecting the
language of typed words and then validating it in its respective language.
Language detection is a well-known problem in natural language processing. In
this paper, we present a fast, light-weight and accurate Language Detection
Engine (LDE) for multilingual typing that dynamically adapts to user intended
language in real-time. We propose a novel approach where the fusion of
character N-gram model and logistic regression based selector model is used to
identify the language. Additionally, we present a unique method of reducing the
inference time significantly by parameter reduction technique. We also discuss
various optimizations fabricated across LDE to resolve ambiguity in input text
among the languages with the same character pattern. Our method demonstrates an
average accuracy of 94.5% for Indian languages in Latin script and that of 98%
for European languages on the code-switched data. This model outperforms
fastText by 60.39% and ML-Kit by 23.67% in F1 score for European languages. LDE
is faster on mobile devices, with an average inference time of 25.91
microseconds.
| [
{
"created": "Thu, 7 Jan 2021 16:49:47 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Gothe",
"Sourabh Vasant",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"Mani",
"Sharmila",
""
],
[
"Bhanodai",
"Guggilla",
""
],
[
"Agarwal",
"Ankur",
""
],
[
"Sanchi",
"Chandramouli",
""
]
] |
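A toy sketch of the character-N-gram-plus-logistic-regression idea using scikit-learn on an assumed two-language corpus; the real engine fuses a character N-gram model with a selector model and also handles code-switched and romanized Indic text.

```python
# Character n-gram features + logistic regression on a toy two-language corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["bonjour tout le monde", "merci beaucoup", "la vie est belle",
               "hello everyone", "thank you very much", "life is beautiful"]
train_langs = ["fr", "fr", "fr", "en", "en", "en"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),   # character n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_langs)
print(clf.predict(["merci mon ami", "hello my friend"]))
```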
2101.03966 | Anis Rahman | Maryam Qamar Butt and Anis Ur Rahman | Audiovisual Saliency Prediction in Uncategorized Video Sequences based
on Audio-Video Correlation | 9 pages, 2 figures, 4 tables | IEEE Access 11 (2023) 15460-15470 | 10.1109/ACCESS.2023.3244191 | null | eess.IV cs.CV eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Substantial research has been done in saliency modeling to develop
intelligent machines that can perceive and interpret their surroundings. But
existing models treat videos as merely image sequences, excluding any audio
information, and are unable to cope with inherently varying content. Based on the
hypothesis that an audiovisual saliency model will be an improvement over
traditional saliency models for natural uncategorized videos, this work aims to
provide a generic audio/video saliency model augmenting a visual saliency map
with an audio saliency map computed by synchronizing low-level audio and visual
features. The proposed model was evaluated using different criteria against eye
fixations data for a publicly available DIEM video dataset. The results show
that the model outperformed two state-of-the-art visual saliency models.
| [
{
"created": "Thu, 7 Jan 2021 14:22:29 GMT",
"version": "v1"
}
] | 2023-02-27 | [
[
"Butt",
"Maryam Qamar",
""
],
[
"Rahman",
"Anis Ur",
""
]
] |
2101.03967 | Sourav Ghosh | Sharmila Mani, Sourabh Vasant Gothe, Sourav Ghosh, Ajay Kumar Mishra,
Prakhar Kulshreshtha, Bhargavi M, Muthu Kumaran | Real-Time Optimized N-gram For Mobile Devices | 2019 IEEE 13th International Conference on Semantic Computing (ICSC).
Accessible at https://ieeexplore.ieee.org/document/8665639 | 2019 IEEE 13th International Conference on Semantic Computing
(ICSC), Newport Beach, CA, USA, 2019, pp. 87-92 | 10.1109/ICOSC.2019.8665639 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the increasing number of mobile devices, there has been continuous
research on generating optimized Language Models (LMs) for soft keyboard. In
spite of advances in this domain, building a single LM for low-end feature
phones as well as high-end smartphones is still a pressing need. Hence, we
propose a novel technique, Optimized N-gram (Op-Ngram), an end-to-end N-gram
pipeline that utilises mobile resources efficiently for faster Word Completion
(WC) and Next Word Prediction (NWP). Op-Ngram applies Stupid Backoff and
pruning strategies to generate a light-weight model. The LM loading time on
mobile is linear with respect to model size. We observed that Op-Ngram gives
37% improvement in Language Model (LM)-ROM size, 76% in LM-RAM size, 88% in
loading time and 89% in average suggestion time as compared to SORTED array
variant of BerkeleyLM. Moreover, our method shows significant performance
improvement over KenLM as well.
| [
{
"created": "Thu, 7 Jan 2021 14:51:26 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Mani",
"Sharmila",
""
],
[
"Gothe",
"Sourabh Vasant",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"Mishra",
"Ajay Kumar",
""
],
[
"Kulshreshtha",
"Prakhar",
""
],
[
"M",
"Bhargavi",
""
],
[
"Kumaran",
"Muthu",
""
]
] |
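The Op-Ngram record above relies on Stupid Backoff scoring, which can be illustrated in a few lines of Python. The toy corpus, counts, and the 0.4 backoff factor follow the commonly cited formulation and are not taken from the paper.

```python
# Illustrative Stupid Backoff scorer over toy n-gram counts (not Op-Ngram itself).
from collections import defaultdict

counts = defaultdict(int)
corpus = "the cat sat on the mat the cat ran".split()
for n in (1, 2, 3):
    for i in range(len(corpus) - n + 1):
        counts[tuple(corpus[i:i + n])] += 1
total_unigrams = sum(v for k, v in counts.items() if len(k) == 1)

def stupid_backoff(word, context, alpha=0.4):
    """Score a word given a context tuple, backing off with penalty alpha."""
    for start in range(len(context) + 1):
        ctx = tuple(context[start:])
        ngram = ctx + (word,)
        denom = counts[ctx] if ctx else total_unigrams
        if counts[ngram] > 0 and denom > 0:
            return (alpha ** start) * counts[ngram] / denom
    return 0.0

print(stupid_backoff("sat", ("the", "cat")))   # trigram hit: 0.5
print(stupid_backoff("ran", ("on", "the")))    # backs off to the unigram estimate
```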
2101.04017 | Antonio Lieto | Antonio Lieto, Gian Luca Pozzato, Stefano Zoia, Viviana Patti, Rossana
Damiano | A Commonsense Reasoning Framework for Explanatory Emotion Attribution,
Generation and Re-classification | 50 pages. This work has been partially funded by the European
Research Council (ERC) under the European Union's Horizon 2020 research and
innovation programme, grant agreement n°870811 | Knowledge-Based Systems, 2021 | 10.1016/j.knosys.2021.107166 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present DEGARI (Dynamic Emotion Generator And ReclassIfier), an
explainable system for emotion attribution and recommendation. This system
relies on a recently introduced commonsense reasoning framework, the TCL logic,
which is based on a human-like procedure for the automatic generation of novel
concepts in a Description Logics knowledge base. Starting from an ontological
formalization of emotions based on the Plutchik model, known as ArsEmotica, the
system exploits the logic TCL to automatically generate novel commonsense
semantic representations of compound emotions (e.g. Love as derived from the
combination of Joy and Trust according to Plutchik). The generated emotions
correspond to prototypes, i.e. commonsense representations of given concepts,
and have been used to reclassify emotion-related contents in a variety of
artistic domains, ranging from art datasets to the editorial contents available
in RaiPlay, the online platform of RAI Radiotelevisione Italiana (the Italian
public broadcasting company). We show how the reported results (evaluated in
the light of the obtained reclassifications, the user ratings assigned to such
reclassifications, and their explainability) are encouraging and pave the way
for many further research directions.
| [
{
"created": "Mon, 11 Jan 2021 16:44:38 GMT",
"version": "v1"
},
{
"created": "Fri, 14 May 2021 13:58:59 GMT",
"version": "v2"
},
{
"created": "Wed, 26 May 2021 13:48:08 GMT",
"version": "v3"
},
{
"created": "Mon, 31 May 2021 20:53:30 GMT",
"version": "v4"
},
{
"created": "Wed, 2 Jun 2021 11:10:56 GMT",
"version": "v5"
}
] | 2021-06-03 | [
[
"Lieto",
"Antonio",
""
],
[
"Pozzato",
"Gian Luca",
""
],
[
"Zoia",
"Stefano",
""
],
[
"Patti",
"Viviana",
""
],
[
"Damiano",
"Rossana",
""
]
] |
2101.04086 | An Nguyen | An Nguyen, Stefan Foerstel, Thomas Kittler, Andrey Kurzyukov, Leo
Schwinn, Dario Zanca, Tobias Hipp, Da Jun Sun, Michael Schrapp, Eva Rothgang,
Bjoern Eskofier | System Design for a Data-driven and Explainable Customer Sentiment
Monitor | null | IEEE Access 9 (2021): 117140-117152 | 10.1109/ACCESS.2021.3106791 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most important goal of customer services is to keep the customer
satisfied. However, service resources are always limited and must be
prioritized. Therefore, it is important to identify customers who potentially
become unsatisfied and might lead to escalations. Today this prioritization of
customers is often done manually. Data science on IoT data (esp. log data) for
machine health monitoring, as well as analytics on enterprise data for customer
relationship management (CRM) have mainly been researched and applied
independently. In this paper, we present a framework for a data-driven decision
support system which combines IoT and enterprise data to model customer
sentiment. Such decision support systems can help to prioritize customers and
service resources to effectively troubleshoot problems or even avoid them. The
framework is applied in a real-world case study with a major medical device
manufacturer. This includes a fully automated and interpretable machine
learning pipeline designed to meet the requirements defined with domain experts
and end users. The overall framework is currently deployed, learns and
evaluates predictive models from terabytes of IoT and enterprise data to
actively monitor the customer sentiment for a fleet of thousands of high-end
medical devices. Furthermore, we provide an anonymized industrial benchmark
dataset for the research community.
| [
{
"created": "Mon, 11 Jan 2021 18:29:50 GMT",
"version": "v1"
}
] | 2022-01-11 | [
[
"Nguyen",
"An",
""
],
[
"Foerstel",
"Stefan",
""
],
[
"Kittler",
"Thomas",
""
],
[
"Kurzyukov",
"Andrey",
""
],
[
"Schwinn",
"Leo",
""
],
[
"Zanca",
"Dario",
""
],
[
"Hipp",
"Tobias",
""
],
[
"Sun",
"Da Jun",
""
],
[
"Schrapp",
"Michael",
""
],
[
"Rothgang",
"Eva",
""
],
[
"Eskofier",
"Bjoern",
""
]
] |
2101.04255 | Dominic Widdows | Dominic Widdows and Kirsty Kitto and Trevor Cohen | Quantum Mathematics in Artificial Intelligence | Adding journal reference, recommended by JAIR editors upon
publication | Journal of Artificial Intelligence Research 72 (2021) 1307-1341 | 10.1613/jair.1.12702 | null | cs.AI cs.CL cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | In the decade since 2010, successes in artificial intelligence have been at
the forefront of computer science and technology, and vector space models have
solidified a position at the forefront of artificial intelligence. At the same
time, quantum computers have become much more powerful, and announcements of
major advances are frequently in the news.
The mathematical techniques underlying both these areas have more in common
than is sometimes realized. Vector spaces took a position at the axiomatic
heart of quantum mechanics in the 1930s, and this adoption was a key motivation
for the derivation of logic and probability from the linear geometry of vector
spaces. Quantum interactions between particles are modelled using the tensor
product, which is also used to express objects and operations in artificial
neural networks.
This paper describes some of these common mathematical areas, including
examples of how they are used in artificial intelligence (AI), particularly in
automated reasoning and natural language processing (NLP). Techniques discussed
include vector spaces, scalar products, subspaces and implication, orthogonal
projection and negation, dual vectors, density matrices, positive operators,
and tensor products. Application areas include information retrieval,
categorization and implication, modelling word-senses and disambiguation,
inference in knowledge bases, and semantic composition.
Some of these approaches can potentially be implemented on quantum hardware.
Many of the practical steps in this implementation are in early stages, and
some are already realized. Explaining some of the common mathematical tools can
help researchers in both AI and quantum computing further exploit these
overlaps, recognizing and exploring new directions along the way.
| [
{
"created": "Tue, 12 Jan 2021 01:35:56 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jan 2021 20:58:51 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Feb 2021 17:36:32 GMT",
"version": "v3"
},
{
"created": "Tue, 14 Sep 2021 16:14:04 GMT",
"version": "v4"
},
{
"created": "Fri, 19 Nov 2021 19:33:01 GMT",
"version": "v5"
},
{
"created": "Thu, 16 Dec 2021 18:16:17 GMT",
"version": "v6"
}
] | 2021-12-17 | [
[
"Widdows",
"Dominic",
""
],
[
"Kitto",
"Kirsty",
""
],
[
"Cohen",
"Trevor",
""
]
] |
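One of the operations surveyed in the quantum-mathematics record above, negation modelled as orthogonal projection in a vector space model, reduces to a one-line formula. The NumPy sketch below uses made-up word vectors; the embeddings and the word-sense interpretation are assumptions for illustration only.

```python
# Toy illustration: "a NOT b" modelled as the component of a orthogonal to b.
import numpy as np

def negate(a, b):
    """Project a onto the subspace orthogonal to b: a - (a.b / b.b) * b."""
    return a - (np.dot(a, b) / np.dot(b, b)) * b

suit_word = np.array([0.9, 0.4, 0.1])      # pretend embedding of "suit"
lawsuit_sense = np.array([0.0, 1.0, 0.0])  # pretend "legal" sense direction
clothing_suit = negate(suit_word, lawsuit_sense)
print(clothing_suit)                         # [0.9 0.  0.1]
print(np.dot(clothing_suit, lawsuit_sense))  # 0.0: the unwanted sense is removed
```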
2101.04262 | Praveen Abbaraju | Upinder Kaur, Praveen Abbaraju, Harrison McCarty, and Richard M.
Voyles | Clutter Slices Approach for Identification-on-the-fly of Indoor Spaces | First two authors share equal contribution. Presented at ICPR2020 The
25th International Conference on Pattern Recognition, PRAConBE Workshop | 2020 Springer Lecture Notes in Computer Science | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Construction spaces are constantly evolving, dynamic environments in need of
continuous surveying, inspection, and assessment. Traditional manual inspection
of such spaces proves to be an arduous and time-consuming activity. Automation
using robotic agents can be an effective solution. Robots with perception
capabilities can autonomously classify and survey indoor construction spaces.
In this paper, we present a novel identification-on-the-fly approach for coarse
classification of indoor spaces using the unique signature of clutter. Using
the context granted by clutter, we recognize common indoor spaces such as
corridors, staircases, shared spaces, and restrooms. The proposed clutter
slices pipeline achieves a maximum accuracy of 93.6% on the presented clutter
slices dataset. This sensor independent approach can be generalized to various
domains to equip intelligent autonomous agents in better perceiving their
environment.
| [
{
"created": "Tue, 12 Jan 2021 02:05:33 GMT",
"version": "v1"
}
] | 2021-01-13 | [
[
"Kaur",
"Upinder",
""
],
[
"Abbaraju",
"Praveen",
""
],
[
"McCarty",
"Harrison",
""
],
[
"Voyles",
"Richard M.",
""
]
] |
2101.04355 | Ilias Chalkidis | Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Ion
Androutsopoulos | Neural Contract Element Extraction Revisited: Letters from Sesame Street | 6 pages | updated version of the paper presented at Document Intelligence
Workshop (NeurIPS 2019 Workshop) | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | We investigate contract element extraction. We show that LSTM-based encoders
perform better than dilated CNNs, Transformers, and BERT in this task. We also
find that domain-specific WORD2VEC embeddings outperform generic pre-trained
GLOVE embeddings. Morpho-syntactic features in the form of POS tag and token
shape embeddings, as well as context-aware ELMO embeddings do not improve
performance. Several of these observations contradict choices or findings of
previous work on contract element extraction and generic sequence labeling
tasks, indicating that contract element extraction requires careful
task-specific choices. Analyzing the results of (i) plain TRANSFORMER-based and
(ii) BERT-based models, we find that in the examined task, where the entities
are highly context-sensitive, the lack of recurrency in TRANSFORMERs greatly
affects their performance.
| [
{
"created": "Tue, 12 Jan 2021 09:02:22 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Feb 2021 13:55:41 GMT",
"version": "v2"
}
] | 2021-02-23 | [
[
"Chalkidis",
"Ilias",
""
],
[
"Fergadiotis",
"Manos",
""
],
[
"Malakasiotis",
"Prodromos",
""
],
[
"Androutsopoulos",
"Ion",
""
]
] |
2101.04377 | Jiajia Guo | Jiajia Guo, Chao-Kai Wen, Shi Jin | CAnet: Uplink-aided Downlink Channel Acquisition in FDD Massive MIMO
using Deep Learning | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | IEEE Transactions on Communications 2021 | 10.1109/TCOMM.2021.3120294 | null | cs.IT cs.AI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In frequency-division duplexing systems, the downlink channel state
information (CSI) acquisition scheme leads to high training and feedback
overheads. In this paper, we propose an uplink-aided downlink channel
acquisition framework using deep learning to reduce these overheads. Unlike
most existing works that focus only on channel estimation or feedback modules,
to the best of our knowledge, this is the first study that considers the entire
downlink CSI acquisition process, including downlink pilot design, channel
estimation, and feedback. First, we propose an adaptive pilot design module by
exploiting the correlation in magnitude among bidirectional channels in the
angular domain to improve channel estimation. Next, to avoid the bit allocation
problem during the feedback module, we concatenate the complex channel and
embed the uplink channel magnitude to the channel reconstruction at the base
station. Lastly, we combine the above two modules and compare two popular
downlink channel acquisition frameworks. The former framework first estimates
the channel at the user equipment and subsequently feeds it back. The user equipment
in the latter one directly feeds back the received pilot signals to the base
station. Our results reveal that, with the help of uplink, directly feeding
back the pilot signals can save approximately 20% of feedback bits, which
provides a guideline for future research.
| [
{
"created": "Tue, 12 Jan 2021 10:12:28 GMT",
"version": "v1"
}
] | 2021-11-23 | [
[
"Guo",
"Jiajia",
""
],
[
"Wen",
"Chao-Kai",
""
],
[
"Jin",
"Shi",
""
]
] |
2101.04378 | Laurent Najman | Jord{\~a}o Bragantini (IC), Alexandre X Falc{\~a}o (IC), Laurent
Najman (LIGM) | Rethinking Interactive Image Segmentation: Feature Space Annotation | null | Pattern Recognition, Elsevier, In press | 10.1016/j.patcog.2022.108882 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the progress of interactive image segmentation methods, high-quality
pixel-level annotation is still time-consuming and laborious - a bottleneck for
several deep learning applications. We take a step back to propose interactive
and simultaneous segment annotation from multiple images guided by feature
space projection. This strategy is in stark contrast to existing interactive
segmentation methodologies, which perform annotation in the image domain. We
show that feature space annotation achieves competitive results with
state-of-the-art methods in foreground segmentation datasets: iCoSeg, DAVIS,
and Rooftop. Moreover, in the semantic segmentation context, it achieves 91.5%
accuracy in the Cityscapes dataset, being 74.75 times faster than the original
annotation procedure. Further, our contribution sheds light on a novel
direction for interactive image annotation that can be integrated with existing
methodologies. The supplementary material presents video demonstrations. Code
available at
https://github.com/LIDS-UNICAMP/rethinking-interactive-image-segmentation.
| [
{
"created": "Tue, 12 Jan 2021 10:13:35 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Dec 2021 10:18:03 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Jul 2022 09:34:07 GMT",
"version": "v3"
}
] | 2022-07-12 | [
[
"Bragantini",
"Jord{ã}o",
"",
"IC"
],
[
"Falc{ã}o",
"Alexandre X",
"",
"IC"
],
[
"Najman",
"Laurent",
"",
"LIGM"
]
] |
2101.04431 | Jorge Beltr\'an | Jorge Beltr\'an, Carlos Guindel, Arturo de la Escalera, Fernando
Garc\'ia | Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups | Published on IEEE Transactions on Intelligent Transportation Systems | IEEE Transactions on Intelligent Transportation Systems, 2022 | 10.1109/TITS.2022.3155228 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most sensor setups for onboard autonomous perception are composed of LiDARs
and vision systems, as they provide complementary information that improves the
reliability of the different algorithms necessary to obtain a robust scene
understanding. However, the effective use of information from different sources
requires an accurate calibration between the sensors involved, which usually
implies a tedious and burdensome process. We present a method to calibrate the
extrinsic parameters of any pair of sensors involving LiDARs, monocular or
stereo cameras, of the same or different modalities. The procedure is composed
of two stages: first, reference points belonging to a custom calibration target
are extracted from the data provided by the sensors to be calibrated, and
second, the optimal rigid transformation is found through the registration of
both point sets. The proposed approach can handle devices with very different
resolutions and poses, as usually found in vehicle setups. In order to assess
the performance of the proposed method, a novel evaluation suite built on top
of a popular simulation framework is introduced. Experiments on the synthetic
environment show that our calibration algorithm significantly outperforms
existing methods, whereas real data tests corroborate the results obtained in
the evaluation suite. Open-source code is available at
https://github.com/beltransen/velo2cam_calibration
| [
{
"created": "Tue, 12 Jan 2021 12:02:26 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Mar 2022 16:10:22 GMT",
"version": "v2"
}
] | 2022-03-16 | [
[
"Beltrán",
"Jorge",
""
],
[
"Guindel",
"Carlos",
""
],
[
"de la Escalera",
"Arturo",
""
],
[
"García",
"Fernando",
""
]
] |
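The second stage described in the calibration record above, finding the optimal rigid transformation that registers two sets of matched reference points, is commonly solved in closed form with an SVD (the Kabsch/Umeyama construction). The sketch below is that generic solution on synthetic points, not the authors' code.

```python
# Generic closed-form rigid registration of matched 3D reference points.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3) and t (3,) minimising sum ||R @ src_i + t - dst_i||^2."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))  # True True
```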
2101.04493 | Kseniya Cherenkova | Kseniya Cherenkova, Djamila Aouada, Gleb Gusev | PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in
3D | 2020 IEEE International Conference on Image Processing (ICIP) | 2020 IEEE International Conference on Image Processing (ICIP),
2020, pp. 2741-2745 | 10.1109/ICIP40778.2020.9191095 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data
autoencoder. To demonstrate its efficiency we learn to synthesize
high-resolution point clouds of 10k points that densely describe the underlying
geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as
protrusions, missing parts, smoothed edges and holes, inevitably appear in real
3D scans of fabricated CAD objects. Learning the original CAD model
construction from a 3D scan requires a ground truth to be available together
with the corresponding 3D scan of an object. To solve the gap, we introduce a
new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their
corresponding 3D meshes. This dataset is used to learn a convolutional
autoencoder for point clouds sampled from the pairs of 3D scans - CAD models.
The challenges of this new dataset are demonstrated in comparison with other
generative point cloud sampling models trained on ShapeNet. The CC3D
autoencoder is efficient with respect to memory consumption and training time
as compared to state-of-the-art models for 3D data generation.
| [
{
"created": "Tue, 12 Jan 2021 14:14:13 GMT",
"version": "v1"
}
] | 2021-01-13 | [
[
"Cherenkova",
"Kseniya",
""
],
[
"Aouada",
"Djamila",
""
],
[
"Gusev",
"Gleb",
""
]
] |
2101.04520 | Morteza Haghir Chehreghani | Victor Eberstein, Jonas Sj\"oblom, Nikolce Murgovski, Morteza Haghir
Chehreghani | A Unified Framework for Online Trip Destination Prediction | This work is published by Springer, Machine Learning | Machine Learning, 111, 3839-3865, 2022 | 10.1007/s10994-022-06175-y | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trip destination prediction is an area of increasing importance in many
applications such as trip planning, autonomous driving and electric vehicles.
Even though this problem could be naturally addressed in an online learning
paradigm where data is arriving in a sequential fashion, the majority of
research has rather considered the offline setting. In this paper, we present a
unified framework for trip destination prediction in an online setting, which
is suitable for both online training and online prediction. For this purpose,
we develop two clustering algorithms and integrate them within two online
prediction models for this problem.
We investigate the different configurations of clustering algorithms and
prediction models on a real-world dataset. We demonstrate that both the
clustering and the entire framework yield consistent results compared to the
offline setting. Finally, we propose a novel regret metric for evaluating the
entire online framework in comparison to its offline counterpart. This metric
makes it possible to relate the source of erroneous predictions to either the
clustering or the prediction model. Using this metric, we show that the
proposed methods converge to a probability distribution resembling the true
underlying distribution with a lower regret than all of the baselines.
| [
{
"created": "Tue, 12 Jan 2021 14:45:27 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Dec 2022 11:56:18 GMT",
"version": "v2"
}
] | 2023-01-02 | [
[
"Eberstein",
"Victor",
""
],
[
"Sjöblom",
"Jonas",
""
],
[
"Murgovski",
"Nikolce",
""
],
[
"Chehreghani",
"Morteza Haghir",
""
]
] |
2101.04640 | Filip Ilievski | Filip Ilievski, Alessandro Oltramari, Kaixin Ma, Bin Zhang, Deborah L.
McGuinness, Pedro Szekely | Dimensions of Commonsense Knowledge | null | Knowledge-Based Systems 2021 | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Commonsense knowledge is essential for many AI applications, including those
in natural language processing, visual processing, and planning. Consequently,
many sources that include commonsense knowledge have been designed and
constructed over the past decades. Recently, the focus has been on large
text-based sources, which facilitate easier integration with neural (language)
models and application to textual tasks, typically at the expense of the
semantics of the sources and their harmonization. Efforts to consolidate
commonsense knowledge have yielded partial success, with no clear path towards
a comprehensive solution. We aim to organize these sources around a common set
of dimensions of commonsense knowledge. We survey a wide range of popular
commonsense sources with a special focus on their relations. We consolidate
these relations into 13 knowledge dimensions. This consolidation allows us to
unify the separate sources and to compute indications of their coverage,
overlap, and gaps with respect to the knowledge dimensions. Moreover, we
analyze the impact of each dimension on downstream reasoning tasks that require
commonsense knowledge, observing that the temporal and desire/goal dimensions
are very beneficial for reasoning on current downstream tasks, while
distinctness and lexical knowledge have little impact. These results reveal
preferences for some dimensions in current evaluation, and potential neglect of
others.
| [
{
"created": "Tue, 12 Jan 2021 17:52:39 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jul 2021 06:28:37 GMT",
"version": "v2"
}
] | 2021-07-30 | [
[
"Ilievski",
"Filip",
""
],
[
"Oltramari",
"Alessandro",
""
],
[
"Ma",
"Kaixin",
""
],
[
"Zhang",
"Bin",
""
],
[
"McGuinness",
"Deborah L.",
""
],
[
"Szekely",
"Pedro",
""
]
] |
2101.04727 | Hossein Rajaby Faghihi | Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal, and
Parisa Kordjamshidi | Latent Alignment of Procedural Concepts in Multimodal Recipes | Published in ALVR 2020, a workshop in ACL 2020 | Proceedings of the First Workshop on Advances in Language and
Vision Research 2020 (26-31) | 10.18653/v1/2020.alvr-1.5 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel alignment mechanism to deal with procedural reasoning on a
newly released multimodal QA dataset, named RecipeQA. Our model solves the
textual cloze task, which is a reading comprehension task on a recipe containing
images and instructions. We exploit the power of attention networks,
cross-modal representations, and a latent alignment space between instructions
and candidate answers to solve the problem. We introduce constrained
max-pooling which refines the max-pooling operation on the alignment matrix to
impose disjoint constraints among the outputs of the model. Our evaluation
result indicates a 19% improvement over the baselines.
| [
{
"created": "Tue, 12 Jan 2021 19:55:53 GMT",
"version": "v1"
}
] | 2021-01-14 | [
[
"Faghihi",
"Hossein Rajaby",
""
],
[
"Mirzaee",
"Roshanak",
""
],
[
"Paliwal",
"Sudarshan",
""
],
[
"Kordjamshidi",
"Parisa",
""
]
] |
2101.04792 | Nikolay Mikhaylovskiy | Roman Vygon, Nikolay Mikhaylovskiy | Learning Efficient Representations for Keyword Spotting with Triplet
Loss | Submitted to SPECOM 2021 | In: Karpov A., Potapova R. (eds) Speech and Computer. SPECOM 2021.
Lecture Notes in Computer Science, vol 12997. Springer, Cham | 10.1007/978-3-030-87802-3_69 | null | eess.AS cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past few years, triplet loss-based metric embeddings have become a
de-facto standard for several important computer vision problems, most
notably, person re-identification. On the other hand, in the area of speech
recognition the metric embeddings generated by the triplet loss are rarely used
even for classification problems. We fill this gap showing that a combination
of two representation learning techniques: a triplet loss-based embedding and a
variant of kNN for classification instead of cross-entropy loss significantly
(by 26% to 38%) improves the classification accuracy for convolutional networks
on LibriSpeech-derived LibriWords datasets. To do so, we propose a novel
phonetic similarity based triplet mining approach. We also improve the current
best published SOTA for Google Speech Commands dataset V1 10+2 -class
classification by about 34%, achieving 98.55% accuracy, V2 10+2-class
classification by about 20%, achieving 98.37% accuracy, and V2 35-class
classification by over 50%, achieving 97.0% accuracy.
| [
{
"created": "Tue, 12 Jan 2021 22:55:17 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 16:48:16 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Apr 2021 21:11:36 GMT",
"version": "v3"
},
{
"created": "Fri, 4 Jun 2021 22:20:46 GMT",
"version": "v4"
}
] | 2022-02-08 | [
[
"Vygon",
"Roman",
""
],
[
"Mikhaylovskiy",
"Nikolay",
""
]
] |
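The triplet loss at the core of the keyword-spotting record above has a compact definition. The NumPy sketch below uses invented two-dimensional embeddings and an arbitrary margin; the paper's phonetic triplet mining and kNN classifier are not reproduced.

```python
# Plain-NumPy illustration of the triplet loss on made-up embeddings.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.1, 0.9])   # embedding of a "yes" utterance (assumed)
positive = np.array([0.2, 0.8])   # another "yes"
negative = np.array([0.9, 0.1])   # a "no"
# 0.0 here: the negative is already farther from the anchor than the positive
# by more than the margin, so this triplet contributes no gradient.
print(triplet_loss(anchor, positive, negative))
```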
2101.04804 | Beatriz Asfora | Beatriz Arruda Asfora | Embedded Computer Vision System Applied to a Four-Legged Line Follower
Robot | null | 23rd ABCM International Congress of Mechanical
Engineering,December 6-11, 2015, Rio de Janeiro, RJ, Brazil | 10.20906/CPS/COB-2015-1649 | null | cs.RO cs.CV cs.SY eess.IV eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Robotics can be defined as the connection of perception to action. Taking
this further, this project aims to drive a robot using an automated computer
vision embedded system, connecting the robot's vision to its behavior. In order
to implement a color recognition system on the robot, open source tools are
chosen, such as Processing language, Android system, Arduino platform and Pixy
camera. The constraints are clear: simplicity, replicability and financial
viability. In order to integrate Robotics, Computer Vision and Image
Processing, the robot is applied on a typical mobile robot's issue: line
following. The problem of distinguishing the path from the background is
analyzed through different approaches: the popular Otsu's Method, thresholding
based on color combinations through experimentation and color tracking via hue
and saturation. Decision making of where to move next is based on the line
center of the path and is fully automated. Using a four-legged robot as
platform and a camera as its only sensor, the robot is capable of successfully
following a line. From capturing the image to moving the robot, it's evident how
integrative Robotics can be. The issue of this paper alone involves knowledge
of Mechanical Engineering, Electronics, Control Systems and Programming.
Everything related to this work was documented and made available on an open
source online page, so it can be useful in learning and experimenting with
robotics.
| [
{
"created": "Tue, 12 Jan 2021 23:52:53 GMT",
"version": "v1"
}
] | 2021-01-14 | [
[
"Asfora",
"Beatriz Arruda",
""
]
] |
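Of the thresholding approaches compared in the line-follower record above, Otsu's method is the most standard. The sketch below implements the usual between-class-variance search on a toy grayscale image; it is illustrative only and not the robot's actual pipeline.

```python
# Minimal Otsu threshold search on an 8-bit image (toy data, not the robot code).
import numpy as np

def otsu_threshold(img):
    """Return the gray level that maximises between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Dark line (~30) on a bright floor (~200): the threshold lands in between.
img = np.full((60, 60), 200, dtype=np.uint8)
img[:, 28:32] = 30
t = otsu_threshold(img)
print(t, (img < t).sum())  # separates exactly the 60 * 4 = 240 line pixels
```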
2101.04869 | Shengli Jiang | Shengli Jiang and Victor M. Zavala | Convolutional Neural Nets in Chemical Engineering: Foundations,
Computations, and Applications | null | AIChE J. 2021; e17282 | 10.1002/aic.17282 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we review the mathematical foundations of convolutional neural
nets (CNNs) with the goals of: i) highlighting connections with techniques from
statistics, signal processing, linear algebra, differential equations, and
optimization, ii) demystifying underlying computations, and iii) identifying
new types of applications. CNNs are powerful machine learning models that
highlight features from grid data to make predictions (regression and
classification). The grid data object can be represented as vectors (in 1D),
matrices (in 2D), or tensors (in 3D or higher dimensions) and can incorporate
multiple channels (thus providing high flexibility in the input data
representation). CNNs highlight features from the grid data by performing
convolution operations with different types of operators. The operators
highlight different types of features (e.g., patterns, gradients, geometrical
features) and are learned by using optimization techniques. In other words,
CNNs seek to identify optimal operators that best map the input data to the
output data. A common misconception is that CNNs are only capable of processing
image or video data but their application scope is much wider; specifically,
datasets encountered in diverse applications can be expressed as grid data.
Here, we show how to apply CNNs to new types of applications such as optimal
control, flow cytometry, multivariate process monitoring, and molecular
simulations.
| [
{
"created": "Wed, 13 Jan 2021 04:20:42 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2021 14:06:33 GMT",
"version": "v2"
}
] | 2021-07-08 | [
[
"Jiang",
"Shengli",
""
],
[
"Zavala",
"Victor M.",
""
]
] |
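The convolution operation on grid data that the review above builds on can be demonstrated in a few lines. Here a standard Sobel-style vertical-edge operator is applied to a small 2D grid with SciPy; the grid and operator are illustrative choices, not taken from the paper.

```python
# Illustrative 2D "convolution" (cross-correlation, as in CNNs) on grid data.
import numpy as np
from scipy.signal import correlate2d

grid = np.zeros((6, 6))
grid[:, 3:] = 1.0                               # a step: left half 0, right half 1

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # operator highlighting vertical edges

response = correlate2d(grid, sobel_x, mode="valid")
print(response)  # nonzero only where the 0 -> 1 boundary falls inside the window
```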
2101.04904 | Ali Ayub | Ali Ayub, Alan R. Wagner | EEC: Learning to Encode and Regenerate Images for Continual Learning | Added link to the code in the paper. A preliminary version of this
work was presented at ICML 2020 Workshop on Lifelong Machine Learning:
arXiv:2007.06637 | International Conference on Learning Representations (ICLR) 2021 | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The two main impediments to continual learning are catastrophic forgetting
and memory limitations on the storage of data. To cope with these challenges,
we propose a novel, cognitively-inspired approach which trains autoencoders
with Neural Style Transfer to encode and store images. During training on a new
task, reconstructed images from encoded episodes are replayed in order to avoid
catastrophic forgetting. The loss function for the reconstructed images is
weighted to reduce its effect during classifier training to cope with image
degradation. When the system runs out of memory the encoded episodes are
converted into centroids and covariance matrices, which are used to generate
pseudo-images during classifier training, keeping classifier performance stable
while using less memory. Our approach increases classification accuracy by
13-17% over state-of-the-art methods on benchmark datasets, while requiring 78%
less storage space.
| [
{
"created": "Wed, 13 Jan 2021 06:43:10 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jan 2021 09:16:24 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Apr 2021 05:05:05 GMT",
"version": "v3"
},
{
"created": "Sun, 2 May 2021 05:45:03 GMT",
"version": "v4"
}
] | 2021-05-04 | [
[
"Ayub",
"Ali",
""
],
[
"Wagner",
"Alan R.",
""
]
] |
2101.04924 | Yu Wu | Yu Wu, Linchao Zhu, Xiaohan Wang, Yi Yang, Fei Wu | Learning to Anticipate Egocentric Actions by Imagination | Accepted to IEEE Transactions on Image Processing (TIP) | IEEE Transactions on Image Processing, vol. 30, pp. 1143-1152,
2021 | 10.1109/TIP.2020.3040521 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anticipating actions before they are executed is crucial for a wide range of
practical applications, including autonomous driving and robotics. In this
paper, we study the egocentric action anticipation task, which predicts future
action seconds before it is performed for egocentric videos. Previous
approaches focus on summarizing the observed content and directly predicting
future action based on past observations. We believe action anticipation would
benefit if we could mine cues to compensate for the missing
information of the unobserved frames. We then propose to decompose the action
anticipation into a series of future feature predictions. We imagine how the
visual feature changes in the near future and then predicts future action
labels based on these imagined representations. Differently, our ImagineRNN is
optimized in a contrastive learning way instead of feature regression. We
utilize a proxy task to train the ImagineRNN, i.e., selecting the correct
future states from distractors. We further improve ImagineRNN by residual
anticipation, i.e., changing its target to predicting the feature difference of
adjacent frames instead of the frame content. This promotes the network to
focus on our target, i.e., the future action, as the difference between
adjacent frame features is more important for forecasting the future. Extensive
experiments on two large-scale egocentric action datasets validate the
effectiveness of our method. Our method significantly outperforms previous
methods on both the seen test set and the unseen test set of the EPIC Kitchens
Action Anticipation Challenge.
| [
{
"created": "Wed, 13 Jan 2021 08:04:10 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jan 2021 11:02:10 GMT",
"version": "v2"
}
] | 2021-01-20 | [
[
"Wu",
"Yu",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Yang",
"Yi",
""
],
[
"Wu",
"Fei",
""
]
] |
2101.04954 | Dazhen Deng | Dazhen Deng, Jiang Wu, Jiachen Wang, Yihong Wu, Xiao Xie, Zheng Zhou,
Hui Zhang, Xiaolong Zhang, Yingcai Wu | EventAnchor: Reducing Human Interactions in Event Annotation of Racket
Sports Videos | null | Proceedings of the 2021 CHI Conference on Human Factors in
Computing Systems | 10.1145/3411764.3445431 | null | cs.HC cs.CV | http://creativecommons.org/licenses/by/4.0/ | The popularity of racket sports (e.g., tennis and table tennis) leads to high
demands for data analysis, such as notational analysis, on player performance.
While sports videos offer many benefits for such analysis, retrieving accurate
information from sports videos could be challenging. In this paper, we propose
EventAnchor, a data analysis framework to facilitate interactive annotation of
racket sports video with the support of computer vision algorithms. Our
approach uses machine learning models in computer vision to help users acquire
essential events from videos (e.g., serve, the ball bouncing on the court) and
offers users a set of interactive tools for data annotation. An evaluation
study on a table tennis annotation system built on this framework shows
significant improvements in user performance in simple annotation tasks on
objects of interest and complex annotation tasks requiring domain knowledge.
| [
{
"created": "Wed, 13 Jan 2021 09:32:05 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jan 2021 03:10:54 GMT",
"version": "v2"
}
] | 2021-05-21 | [
[
"Deng",
"Dazhen",
""
],
[
"Wu",
"Jiang",
""
],
[
"Wang",
"Jiachen",
""
],
[
"Wu",
"Yihong",
""
],
[
"Xie",
"Xiao",
""
],
[
"Zhou",
"Zheng",
""
],
[
"Zhang",
"Hui",
""
],
[
"Zhang",
"Xiaolong",
""
],
[
"Wu",
"Yingcai",
""
]
] |
2101.05018 | Xinggang Wang | Mengting Chen and Xinggang Wang and Heng Luo and Yifeng Geng and Wenyu
Liu | Learning to Focus: Cascaded Feature Matching Network for Few-shot Image
Recognition | 14 pages | SCIENCE CHINA Information Sciences, 2021 | 10.1007/s11432-020-2973-7 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep networks can learn to accurately recognize objects of a category by
training on a large number of annotated images. However, a meta-learning
challenge, known as the low-shot image recognition task, arises when only a few
images with annotations are available for learning a recognition model for one
category. The objects in testing/query and training/support images are likely
to be different in size, location, style, and so on. Our method, called
Cascaded Feature Matching Network (CFMN), is proposed to solve this problem. We
train the meta-learner to learn a more fine-grained and adaptive deep distance
metric by focusing more on the features that have high correlations between
compared images by the feature matching block which can align associated
features together and naturally ignore those non-discriminative features. By
applying the proposed feature matching block in different layers of the
few-shot recognition network, multi-scale information among the compared images
can be incorporated into the final cascaded matching feature, which boosts the
recognition performance further and generalizes better by learning on
relationships. The experiments for few-shot learning on two standard datasets,
miniImageNet and Omniglot, have confirmed the effectiveness of our
method. Besides, the multi-label few-shot task is first studied on a new data
split of COCO which further shows the superiority of the proposed feature
matching network when performing few-shot learning in complex images. The code
will be made publicly available.
| [
{
"created": "Wed, 13 Jan 2021 11:37:28 GMT",
"version": "v1"
}
] | 2021-01-14 | [
[
"Chen",
"Mengting",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Luo",
"Heng",
""
],
[
"Geng",
"Yifeng",
""
],
[
"Liu",
"Wenyu",
""
]
] |
2101.05050 | Stassa Patsantzis | Stassa Patsantzis, Stephen H. Muggleton | Top Program Construction and Reduction for polynomial time
Meta-Interpretive Learning | 25 pages, 3 figures, to be published in Machine Learning Journal
Special Issue on Learning and Reasoning | Mach.Learn. 100, 755-778 (2021) | 10.1007/s10994-020-05945-w | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Meta-Interpretive Learners, like most ILP systems, learn by searching for a
correct hypothesis in the hypothesis space, the powerset of all constructible
clauses. We show how this exponentially-growing search can be replaced by the
construction of a Top program: the set of clauses in all correct hypotheses
that is itself a correct hypothesis. We give an algorithm for Top program
construction and show that it constructs a correct Top program in polynomial
time and from a finite number of examples. We implement our algorithm in Prolog
as the basis of a new MIL system, Louise, that constructs a Top program and
then reduces it by removing redundant clauses. We compare Louise to the
state-of-the-art search-based MIL system Metagol in experiments on grid world
navigation, graph connectedness and grammar learning datasets and find that
Louise improves on Metagol's predictive accuracy when the hypothesis space and
the target theory are both large, or when the hypothesis space does not include
a correct hypothesis because of "classification noise" in the form of
mislabelled examples. When the hypothesis space or the target theory are small,
Louise and Metagol perform equally well.
| [
{
"created": "Wed, 13 Jan 2021 13:39:21 GMT",
"version": "v1"
}
] | 2021-09-14 | [
[
"Patsantzis",
"Stassa",
""
],
[
"Muggleton",
"Stephen H.",
""
]
] |
2101.05107 | Benjamin Congram | Benjamin Congram and Timothy D. Barfoot | Relatively Lazy: Indoor-Outdoor Navigation Using Vision and GNSS | Presented at CRV2021 | In Proceedings of the 18th Conference on Robots and Vision (CRV),
pages 25-32. Burnaby, British Columbia, 26-28 May 2021 | 10.1109/CRV52889.2021.00015 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Teach and Repeat has shown relative navigation is a robust and
efficient solution for autonomous vision-based path following in difficult
environments. Adding additional absolute sensors such as Global Navigation
Satellite Systems (GNSS) has the potential to expand the domain of Visual Teach
and Repeat to environments where the ability to visually localize is not
guaranteed. Our method of lazy mapping and delaying estimation until a
path-tracking error is needed avoids the need to estimate absolute states. As a
result, map optimization is not required and paths can be driven immediately
after being taught. We validate our approach on a real robot through an
experiment in a joint indoor-outdoor environment comprising 3.5km of autonomous
route repeating across a variety of lighting conditions. We achieve smooth
error signals throughout the runs despite large sections of dropout for each
sensor.
| [
{
"created": "Wed, 13 Jan 2021 14:43:45 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Jul 2021 19:47:18 GMT",
"version": "v2"
}
] | 2021-07-20 | [
[
"Congram",
"Benjamin",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] |
2101.05108 | Thea Aarrestad | Thea Aarrestad, Vladimir Loncar, Nicol\`o Ghielmetti, Maurizio
Pierini, Sioni Summers, Jennifer Ngadiuba, Christoffer Petersson, Hampus
Linander, Yutaro Iiyama, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris,
Dylan Rankin, Sergo Jindariani, Kevin Pedro, Nhan Tran, Mia Liu, Edward
Kreinar, Zhenbin Wu, and Duc Hoang | Fast convolutional neural networks on FPGAs with hls4ml | 18 pages, 18 figures, 4 tables | Mach. Learn.: Sci. Technol. 2 045015 (2021) | 10.1088/2632-2153/ac0ea1 | null | cs.LG cs.CV hep-ex physics.ins-det stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an automated tool for deploying ultra low-latency, low-power
deep neural networks with convolutional layers on FPGAs. By extending the
hls4ml library, we demonstrate an inference latency of $5\,\mu$s using
convolutional architectures, targeting microsecond latency applications like
those at the CERN Large Hadron Collider. Considering benchmark models trained
on the Street View House Numbers Dataset, we demonstrate various methods for
model compression in order to fit the computational constraints of a typical
FPGA device used in trigger and data acquisition systems of particle detectors.
In particular, we discuss pruning and quantization-aware training, and
demonstrate how resource utilization can be significantly reduced with little
to no loss in model accuracy. We show that the FPGA critical resource
consumption can be reduced by 97% with zero loss in model accuracy, and by 99%
when tolerating a 6% accuracy degradation.
| [
{
"created": "Wed, 13 Jan 2021 14:47:11 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Apr 2021 11:30:02 GMT",
"version": "v2"
}
] | 2021-07-19 | [
[
"Aarrestad",
"Thea",
""
],
[
"Loncar",
"Vladimir",
""
],
[
"Ghielmetti",
"Nicolò",
""
],
[
"Pierini",
"Maurizio",
""
],
[
"Summers",
"Sioni",
""
],
[
"Ngadiuba",
"Jennifer",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Linander",
"Hampus",
""
],
[
"Iiyama",
"Yutaro",
""
],
[
"Di Guglielmo",
"Giuseppe",
""
],
[
"Duarte",
"Javier",
""
],
[
"Harris",
"Philip",
""
],
[
"Rankin",
"Dylan",
""
],
[
"Jindariani",
"Sergo",
""
],
[
"Pedro",
"Kevin",
""
],
[
"Tran",
"Nhan",
""
],
[
"Liu",
"Mia",
""
],
[
"Kreinar",
"Edward",
""
],
[
"Wu",
"Zhenbin",
""
],
[
"Hoang",
"Duc",
""
]
] |
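Two of the compression steps discussed in the hls4ml record above, pruning small-magnitude weights and quantizing the survivors to a few bits, can be mimicked after the fact in NumPy. The sparsity level and bit width below are arbitrary choices, not the paper's settings, and real deployments apply these constraints during training rather than post hoc.

```python
# Toy magnitude pruning and uniform quantization of a weight matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.5, size=(8, 8))

# Prune: zero out the 70% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.70)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Quantize the remaining weights to 6 bits over a symmetric range.
bits, w_max = 6, np.abs(pruned).max()
scale = (2 ** (bits - 1) - 1) / w_max
quantized = np.round(pruned * scale) / scale

print(f"sparsity: {np.mean(pruned == 0):.2f}")
print(f"max quantization error: {np.abs(quantized - pruned).max():.4f}")
```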
2101.05181 | Lina Mezghani | Lina Mezghani, Sainbayar Sukhbaatar, Thibaut Lavril, Oleksandr
Maksymets, Dhruv Batra, Piotr Bojanowski, Karteek Alahari | Memory-Augmented Reinforcement Learning for Image-Goal Navigation | null | IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) 2022 | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a memory-augmented approach for image-goal
navigation. Earlier attempts, including RL-based and SLAM-based approaches have
either shown poor generalization performance, or are heavily-reliant on
pose/depth sensors. Our method is based on an attention-based end-to-end model
that leverages an episodic memory to learn to navigate. First, we train a
state-embedding network in a self-supervised fashion, and then use it to embed
previously-visited states into the agent's memory. Our navigation policy takes
advantage of this information through an attention mechanism. We validate our
approach with extensive evaluations, and show that our model establishes a new
state of the art on the challenging Gibson dataset. Furthermore, we achieve
this impressive performance from RGB input alone, without access to additional
information such as position or depth, in stark contrast to related work.
| [
{
"created": "Wed, 13 Jan 2021 16:30:20 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Apr 2021 13:02:39 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Aug 2021 10:00:11 GMT",
"version": "v3"
},
{
"created": "Mon, 28 Feb 2022 15:38:39 GMT",
"version": "v4"
},
{
"created": "Mon, 12 Sep 2022 12:19:52 GMT",
"version": "v5"
}
] | 2023-01-06 | [
[
"Mezghani",
"Lina",
""
],
[
"Sukhbaatar",
"Sainbayar",
""
],
[
"Lavril",
"Thibaut",
""
],
[
"Maksymets",
"Oleksandr",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Bojanowski",
"Piotr",
""
],
[
"Alahari",
"Karteek",
""
]
] |
2101.05231 | Keaton Hamm | HanQin Cai, Keaton Hamm, Longxiu Huang, Deanna Needell | Robust CUR Decomposition: Theory and Imaging Applications | null | SIAM Journal on Imaging Sciences 14.4 (2021): 1472-1503 | 10.1137/20M1388322 | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the use of Robust PCA in a CUR decomposition framework
and applications thereof. Our main algorithms produce a robust version of
column-row factorizations of matrices $\mathbf{D}=\mathbf{L}+\mathbf{S}$ where
$\mathbf{L}$ is low-rank and $\mathbf{S}$ contains sparse outliers. These
methods yield interpretable factorizations at low computational cost, and
provide new CUR decompositions that are robust to sparse outliers, in contrast
to previous methods. We consider two key imaging applications of Robust PCA:
video foreground-background separation and face modeling. This paper examines
the qualitative behavior of our Robust CUR decompositions on the benchmark
videos and face datasets, and finds that our method works as well as standard
Robust PCA while being significantly faster. Additionally, we consider hybrid
randomized and deterministic sampling methods which produce a compact CUR
decomposition of a given matrix, and apply this to video sequences to produce
canonical frames thereof.
| [
{
"created": "Tue, 5 Jan 2021 17:58:15 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Aug 2021 16:33:03 GMT",
"version": "v2"
}
] | 2023-02-28 | [
[
"Cai",
"HanQin",
""
],
[
"Hamm",
"Keaton",
""
],
[
"Huang",
"Longxiu",
""
],
[
"Needell",
"Deanna",
""
]
] |
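A plain (non-robust) CUR factorization, the building block that the record above makes robust to sparse outliers, can be sketched in a few NumPy lines. Uniform column/row sampling is used here purely for simplicity; it is not the paper's robust variant or its sampling scheme.

```python
# Minimal CUR sketch: approximate a low-rank matrix from sampled columns C,
# sampled rows R, and U = pseudoinverse of their intersection.
import numpy as np

rng = np.random.default_rng(2)
L = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))    # exact rank-3 matrix

cols = rng.choice(40, size=6, replace=False)
rows = rng.choice(50, size=6, replace=False)
C, R = L[:, cols], L[rows, :]
U = np.linalg.pinv(L[np.ix_(rows, cols)])                  # pinv of the intersection

approx = C @ U @ R
print(np.linalg.norm(L - approx) / np.linalg.norm(L))      # ~0 for an exact low-rank matrix
```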
2101.05339 | Tian Xie | Tian Xie, Arthur France-Lanord, Yanming Wang, Jeffrey Lopez, Michael
Austin Stolberg, Megan Hill, Graham Michael Leverick, Rafael
Gomez-Bombarelli, Jeremiah A. Johnson, Yang Shao-Horn, Jeffrey C. Grossman | Accelerating amorphous polymer electrolyte screening by learning to
reduce errors in molecular dynamics simulated properties | 29 pages, 6 figures + supplementary information | Nature communications 13.1 (2022): 1-10 | 10.1038/s41467-022-30994-1 | null | cond-mat.mtrl-sci cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Polymer electrolytes are promising candidates for the next generation
lithium-ion battery technology. Large scale screening of polymer electrolytes
is hindered by the significant cost of molecular dynamics (MD) simulation in
amorphous systems: the amorphous structure of polymers requires multiple,
repeated sampling to reduce noise and the slow relaxation requires long
simulation time for convergence. Here, we accelerate the screening with a
multi-task graph neural network that learns from a large amount of noisy,
unconverged, short MD data and a small number of converged, long MD data. We
achieve accurate predictions of 4 different converged properties and screen a
space of 6247 polymers that is orders of magnitude larger than previous
computational studies. Further, we extract several design principles for
polymer electrolytes and provide an open dataset for the community. Our
approach could be applicable to a broad class of material discovery problems
that involve the simulation of complex, amorphous materials.
| [
{
"created": "Wed, 13 Jan 2021 20:46:24 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Mar 2022 23:50:28 GMT",
"version": "v2"
}
] | 2022-07-05 | [
[
"Xie",
"Tian",
""
],
[
"France-Lanord",
"Arthur",
""
],
[
"Wang",
"Yanming",
""
],
[
"Lopez",
"Jeffrey",
""
],
[
"Stolberg",
"Michael Austin",
""
],
[
"Hill",
"Megan",
""
],
[
"Leverick",
"Graham Michael",
""
],
[
"Gomez-Bombarelli",
"Rafael",
""
],
[
"Johnson",
"Jeremiah A.",
""
],
[
"Shao-Horn",
"Yang",
""
],
[
"Grossman",
"Jeffrey C.",
""
]
] |
2101.05404 | Justyna P. Zwolak | Shangjie Guo, Amilson R. Fritsch, Craig Greenberg, I. B. Spielman,
Justyna P. Zwolak | Machine-learning enhanced dark soliton detection in Bose-Einstein
condensates | 17 pages, 5 figures | Mach. Learn.: Sci. Technol. 2: 035020 (2021) | 10.1088/2632-2153/abed1e | null | cond-mat.quant-gas cs.CV cs.LG quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most data in cold-atom experiments comes from images, the analysis of which
is limited by our preconceptions of the patterns that could be present in the
data. We focus on the well-defined case of detecting dark solitons -- appearing
as local density depletions in a Bose-Einstein condensate (BEC) -- using a
methodology that is extensible to the general task of pattern recognition in
images of cold atoms. Studying soliton dynamics over a wide range of parameters
requires the analysis of large datasets, making the existing
human-inspection-based methodology a significant bottleneck. Here we describe
an automated classification and positioning system for identifying localized
excitations in atomic BECs utilizing deep convolutional neural networks to
eliminate the need for human image examination. Furthermore, we openly publish
our labeled dataset of dark solitons, the first of its kind, for further
machine learning research.
| [
{
"created": "Thu, 14 Jan 2021 00:44:56 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Jun 2021 17:41:14 GMT",
"version": "v2"
}
] | 2021-06-18 | [
[
"Guo",
"Shangjie",
""
],
[
"Fritsch",
"Amilson R.",
""
],
[
"Greenberg",
"Craig",
""
],
[
"Spielman",
"I. B.",
""
],
[
"Zwolak",
"Justyna P.",
""
]
] |
2101.05418 | EPTCS | Luc Jaulin (Robex, Lab-STICC), Beno\^it Desrochers (DGA-TN) | Enclosing the Sliding Surfaces of a Controlled Swing | In Proceedings SNR 2020, arXiv:2101.05256 | EPTCS 331, 2021, pp. 43-55 | 10.4204/EPTCS.331.4 | null | cs.RO cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When implementing a non-continuous controller for a cyber-physical system, it
may happen that the evolution of the closed-loop system is not anymore
piecewise differentiable along the trajectory, mainly due to conditional
statements inside the controller. This may lead to some unwanted chattering
effects that may damage the system. This behavior is difficult to observe even
in simulation. In this paper, we propose an interval approach to characterize
the sliding surface which corresponds to the set of all states such that the
state trajectory may jump indefinitely between two distinct behaviors. We show
that the recent notion of thick sets allows us to efficiently compute an
outer approximation of the sliding surface of a given class of hybrid system
taking into account all set-membership uncertainties. An application to the
verification of the controller of a child swing is considered to illustrate the
principle of the approach.
| [
{
"created": "Thu, 14 Jan 2021 01:58:15 GMT",
"version": "v1"
}
] | 2021-01-15 | [
[
"Jaulin",
"Luc",
"",
"Robex, Lab-STICC"
],
[
"Desrochers",
"Benoît",
"",
"DGA-TN"
]
] |
2101.05439 | Xiaofeng Liu | Xiaofeng Liu, Fangxu Xing, Jerry L. Prince, Aaron Carass, Maureen
Stone, Georges El Fakhri, Jonghye Woo | Dual-cycle Constrained Bijective VAE-GAN For Tagged-to-Cine Magnetic
Resonance Image Synthesis | Accepted to IEEE International Symposium on Biomedical Imaging (ISBI)
2021 | 2021 IEEE 18th International Symposium on Biomedical Imaging
(ISBI) | 10.1109/ISBI48211.2021.9433852 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Tagged magnetic resonance imaging (MRI) is a widely used imaging technique
for measuring tissue deformation in moving organs. Due to tagged MRI's
intrinsic low anatomical resolution, another matching set of cine MRI with
higher resolution is sometimes acquired in the same scanning session to
facilitate tissue segmentation, thus adding extra time and cost. To mitigate
this, in this work, we propose a novel dual-cycle constrained bijective VAE-GAN
approach to carry out tagged-to-cine MR image synthesis. Our method is based on
a variational autoencoder backbone with cycle reconstruction constrained
adversarial training to yield accurate and realistic cine MR images given
tagged MR images. Our framework has been trained, validated, and tested using
1,768, 416, and 1,560 subject-independent paired slices of tagged and cine MRI
from twenty healthy subjects, respectively, demonstrating superior performance
over the comparison methods. Our method can potentially be used to reduce the
extra acquisition time and cost, while maintaining the same workflow for
further motion analyses.
| [
{
"created": "Thu, 14 Jan 2021 03:27:16 GMT",
"version": "v1"
}
] | 2021-06-08 | [
[
"Liu",
"Xiaofeng",
""
],
[
"Xing",
"Fangxu",
""
],
[
"Prince",
"Jerry L.",
""
],
[
"Carass",
"Aaron",
""
],
[
"Stone",
"Maureen",
""
],
[
"Fakhri",
"Georges El",
""
],
[
"Woo",
"Jonghye",
""
]
] |
2101.05570 | Aythami Morales | Alejandro Acien and Aythami Morales and John V. Monaco and Ruben
Vera-Rodriguez and Julian Fierrez | TypeNet: Deep Learning Keystroke Biometrics | arXiv admin note: substantial text overlap with arXiv:2004.03627 | IEEE Transactions on Biometrics, Behavior, and Identity Science,
2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We study the performance of Long Short-Term Memory networks for keystroke
biometric authentication at large scale in free-text scenarios. For this we
explore the performance of Long Short-Term Memory (LSTMs) networks trained with
a moderate number of keystrokes per identity and evaluated under different
scenarios including: i) three learning approaches depending on the loss
function (softmax, contrastive, and triplet loss); ii) different number of
training samples and lengths of keystroke sequences; iii) four databases based
on two device types (physical vs touchscreen keyboard); and iv) comparison with
existing approaches based on both traditional statistical methods and deep
learning architectures. Our approach called TypeNet achieves state-of-the-art
keystroke biometric authentication performance with an Equal Error Rate of 2.2%
and 9.2% for physical and touchscreen keyboards, respectively, significantly
outperforming previous approaches. Our experiments demonstrate a moderate
increase in error with up to 100,000 subjects, demonstrating the potential of
TypeNet to operate at an Internet scale. To the best of our knowledge, the
databases used in this work are the largest existing free-text keystroke
databases available for research with more than 136 million keystrokes from
168,000 subjects in physical keyboards, and 60,000 subjects with more than 63
million keystrokes acquired on mobile touchscreens.
| [
{
"created": "Thu, 14 Jan 2021 12:49:09 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2021 17:40:57 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Sep 2021 07:00:16 GMT",
"version": "v3"
}
] | 2021-09-14 | [
[
"Acien",
"Alejandro",
""
],
[
"Morales",
"Aythami",
""
],
[
"Monaco",
"John V.",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Fierrez",
"Julian",
""
]
] |
2101.05593 | Renato Stoffalette Joao | Renato Stoffalette Joao | On the Temporality of Priors in Entity Linking | null | 2020 European Conference on Information Retrieval | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Entity linking is a fundamental task in natural language processing which
deals with the lexical ambiguity in texts. An important component in entity
linking approaches is the mention-to-entity prior probability. Even though
there is a large body of work on entity linking, the existing approaches do
not explicitly consider the time aspect, specifically the temporality of an
entity's prior probability. We posit that this prior probability is temporal in
nature and affects the performance of entity linking systems. In this paper we
systematically study the effect of the prior on the entity linking performance
over the temporal validity of both texts and KBs.
| [
{
"created": "Thu, 14 Jan 2021 13:58:31 GMT",
"version": "v1"
}
] | 2021-01-15 | [
[
"Joao",
"Renato Stoffalette",
""
]
] |
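The entity-linking record above centres on the mention-to-entity prior probability. As a concrete anchor for readers, the snippet below shows the usual way such a prior is estimated from anchor-link counts; the corpus and counts are invented, and the paper's contribution concerns how this quantity drifts over time, not how it is computed.

```python
from collections import Counter, defaultdict

# toy anchor statistics: (mention text, linked entity) pairs harvested from one corpus snapshot
anchors = [("Paris", "Paris_(France)")] * 90 + [("Paris", "Paris_Hilton")] * 10

counts = defaultdict(Counter)
for mention, entity in anchors:
    counts[mention][entity] += 1

def prior(mention, entity):
    """p(entity | mention) estimated from anchor counts of one snapshot."""
    total = sum(counts[mention].values())
    return counts[mention][entity] / total if total else 0.0

print(prior("Paris", "Paris_(France)"))   # 0.9 in this snapshot; a later snapshot may differ
```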
2101.05779 | Giovanni Paolini | Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro
Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, Stefano
Soatto | Structured Prediction as Translation between Augmented Natural Languages | null | International Conference on Learning Representations (ICLR) 2021 | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new framework, Translation between Augmented Natural Languages
(TANL), to solve many structured prediction language tasks including joint
entity and relation extraction, nested named entity recognition, relation
classification, semantic role labeling, event extraction, coreference
resolution, and dialogue state tracking. Instead of tackling the problem by
training task-specific discriminative classifiers, we frame it as a translation
task between augmented natural languages, from which the task-relevant
information can be easily extracted. Our approach can match or outperform
task-specific models on all tasks, and in particular, achieves new
state-of-the-art results on joint entity and relation extraction (CoNLL04, ADE,
NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and
semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while
using the same architecture and hyperparameters for all tasks and even when
training a single model to solve all tasks at the same time (multi-task
learning). Finally, we show that our framework can also significantly improve
the performance in a low-resource regime, thanks to better use of label
semantics.
| [
{
"created": "Thu, 14 Jan 2021 18:32:21 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 22:08:48 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Dec 2021 19:55:57 GMT",
"version": "v3"
}
] | 2021-12-06 | [
[
"Paolini",
"Giovanni",
""
],
[
"Athiwaratkun",
"Ben",
""
],
[
"Krone",
"Jason",
""
],
[
"Ma",
"Jie",
""
],
[
"Achille",
"Alessandro",
""
],
[
"Anubhai",
"Rishita",
""
],
[
"Santos",
"Cicero Nogueira dos",
""
],
[
"Xiang",
"Bing",
""
],
[
"Soatto",
"Stefano",
""
]
] |
2101.05880 | Shenghui Li | Shenghui Li, Edith Ngai, Fanghua Ye, and Thiemo Voigt | Auto-weighted Robust Federated Learning with Corrupted Data Sources | null | ACM Transactions on Intelligent Systems and Technology (TIST) 13,5
(2022), 1-20 | 10.1145/3517821 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated learning provides a communication-efficient and privacy-preserving
training process by enabling learning statistical models with massive
participants while keeping their data in local clients. However, standard
federated learning techniques that naively minimize an average loss function
are vulnerable to data corruptions from outliers, systematic mislabeling, or
even adversaries. In addition, it is often prohibited for service providers to
verify the quality of data samples due to the increasing concern of user data
privacy. In this paper, we address this challenge by proposing Auto-weighted
Robust Federated Learning (arfl), a novel approach that jointly learns the
global model and the weights of local updates to provide robustness against
corrupted data sources. We prove a learning bound on the expected risk with
respect to the predictor and the weights of clients, which guides the
definition of the objective for robust federated learning. The weights are
allocated by comparing the empirical loss of a client with the average loss of
the best p clients (p-average), thus we can downweight the clients with
significantly high losses, thereby lowering their contributions to the global
model. We show that this approach achieves robustness when the data of
corrupted clients is distributed differently from benign ones. To optimize the
objective function, we propose a communication-efficient algorithm based on the
blockwise minimization paradigm. We conduct experiments on multiple benchmark
datasets, including CIFAR-10, FEMNIST and Shakespeare, considering different
deep neural network models. The results show that our solution is robust
against different scenarios including label shuffling, label flipping and noisy
features, and outperforms the state-of-the-art methods in most scenarios.
| [
{
"created": "Thu, 14 Jan 2021 21:54:55 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 16:34:49 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Jul 2022 00:18:27 GMT",
"version": "v3"
}
] | 2022-07-15 | [
[
"Li",
"Shenghui",
""
],
[
"Ngai",
"Edith",
""
],
[
"Ye",
"Fanghua",
""
],
[
"Voigt",
"Thiemo",
""
]
] |
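The arfl record above states its core mechanism in one sentence: client weights are set by comparing each client's empirical loss with the average loss of the best p clients, so that high-loss (likely corrupted) clients are downweighted. The snippet below is only an illustrative reading of that idea; the hinge-style weighting function and the constant 2.0 are assumptions, not the paper's actual update rule.

```python
import numpy as np

def p_average_weights(client_losses, p):
    """Illustrative down-weighting of high-loss clients (not the exact arfl rule).

    Clients whose empirical loss is far above the average loss of the best p
    clients receive smaller weights; the global model would then aggregate
    client updates using these weights.
    """
    losses = np.asarray(client_losses, dtype=float)
    best_p_avg = np.sort(losses)[:p].mean()           # "p-average" reference loss
    raw = np.maximum(0.0, 2.0 * best_p_avg - losses)  # hinge-style down-weighting (assumed form)
    if raw.sum() == 0:                                # degenerate case: fall back to uniform
        raw = np.ones_like(losses)
    return raw / raw.sum()

weights = p_average_weights([0.3, 0.35, 0.4, 2.5], p=2)  # the outlier client gets ~0 weight
```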
2101.05954 | Devshree Patel | Devshree Patel, Ratnam Parikh, and Yesha Shastri | Recent Advances in Video Question Answering: A Review of Datasets and
Methods | 18 pages, 5 tables, Video and Image Question Answering Workshop, 25th
International Conference on Pattern Recognition | Pattern Recognition. ICPR International Workshops and Challenges.
ICPR 2021. Lecture Notes in Computer Science, vol 12662. Springer | 10.1007/978-3-030-68790-8_27 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video Question Answering (VQA) is a recently emerging, challenging task in the
field of Computer Vision. Several visual information retrieval techniques like
Video Captioning/Description and Video-guided Machine Translation have preceded
the task of VQA. VQA helps to retrieve temporal and spatial information from
the video scenes and interpret it. In this survey, we review a number of
methods and datasets for the task of VQA. To the best of our knowledge, no
previous survey has been conducted for the VQA task.
| [
{
"created": "Fri, 15 Jan 2021 03:26:24 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2021 14:30:16 GMT",
"version": "v2"
}
] | 2021-03-19 | [
[
"Patel",
"Devshree",
""
],
[
"Parikh",
"Ratnam",
""
],
[
"Shastri",
"Yesha",
""
]
] |
2101.06021 | Pei Wang | Pei Wang, Wei Sun, Qingsen Yan, Axi Niu, Rui Li, Yu Zhu, Jinqiu Sun,
Yanning Zhang | Non-uniform Motion Deblurring with Blurry Component Divided Guidance | null | Pattern Recognition,Volume 120, December 2021, 108082 | 10.1016/j.patcog.2021.108082 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blind image deblurring is a fundamental and challenging computer vision
problem, which aims to recover both the blur kernel and the latent sharp image
from only a blurry observation. Despite the superiority that deep learning
methods have displayed in image deblurring, major challenges still exist
with various non-uniform motion blur. Previous methods simply take all the
image features as the input to the decoder, which handles different degrees
(e.g. large blur, small blur) simultaneously, leading to challenges for sharp
image generation. To tackle the above problems, we present a deep two-branch
network to deal with blurry images via a component divided module, which
divides an image into two components based on the representation of blurry
degree. Specifically, two component attentive blocks are employed to learn
attention maps to exploit useful deblurring feature representations on both
large and small blurry regions. Then, the blur-aware features are fed into
two-branch reconstruction decoders respectively. In addition, a new feature
fusion mechanism, orientation-based feature fusion, is proposed to merge sharp
features of the two branches. Both qualitative and quantitative experimental
results show that our method performs favorably against the state-of-the-art
approaches.
| [
{
"created": "Fri, 15 Jan 2021 09:10:35 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Wang",
"Pei",
""
],
[
"Sun",
"Wei",
""
],
[
"Yan",
"Qingsen",
""
],
[
"Niu",
"Axi",
""
],
[
"Li",
"Rui",
""
],
[
"Zhu",
"Yu",
""
],
[
"Sun",
"Jinqiu",
""
],
[
"Zhang",
"Yanning",
""
]
] |
2101.06040 | Evangelos Mazomenos | Patrick Brandao, Odysseas Zisimopoulos, Evangelos Mazomenos, Gastone
Ciuti, Jorge Bernal, Marco Visentini-Scarzanella, Arianna Menciassi, Paolo
Dario, Anastasios Koulaouzidis, Alberto Arezzo, David J Hawkes, Danail
Stoyanov | Towards a Computed-Aided Diagnosis System in Colonoscopy: Automatic
Polyp Segmentation Using Convolution Neural Networks | 10 pages, 6 figures | Journal of Medical Robotics Research, Volume 03, No. 02, 1840002
(2018) | 10.1142/S2424905X18400020 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Early diagnosis is essential for the successful treatment of bowel cancers
including colorectal cancer (CRC) and capsule endoscopic imaging with robotic
actuation can be a valuable diagnostic tool when combined with automated image
analysis. We present a deep learning rooted detection and segmentation
framework for recognizing lesions in colonoscopy and capsule endoscopy images.
We restructure established convolution architectures, such as VGG and ResNets,
by converting them into fully convolutional networks (FCNs), fine-tune
them and study their capabilities for polyp segmentation and detection. We
additionally use Shape-from-Shading (SfS) to recover depth and provide a richer
representation of the tissue's structure in colonoscopy images. Depth is
incorporated into our network models as an additional input channel to the RGB
information and we demonstrate that the resulting network yields improved
performance. Our networks are tested on publicly available datasets and the
most accurate segmentation model achieved a mean segmentation IU of 47.78% and
56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp
detection, the top performing models we propose surpass the current state of
the art with detection recalls superior to 90% for all datasets tested. To our
knowledge, we present the first work to use FCNs for polyp segmentation in
addition to proposing a novel combination of SfS and RGB that boosts
performance.
| [
{
"created": "Fri, 15 Jan 2021 10:08:53 GMT",
"version": "v1"
}
] | 2021-01-18 | [
[
"Brandao",
"Patrick",
""
],
[
"Zisimopoulos",
"Odysseas",
""
],
[
"Mazomenos",
"Evangelos",
""
],
[
"Ciuti",
"Gastone",
""
],
[
"Bernal",
"Jorge",
""
],
[
"Visentini-Scarzanella",
"Marco",
""
],
[
"Menciassi",
"Arianna",
""
],
[
"Dario",
"Paolo",
""
],
[
"Koulaouzidis",
"Anastasios",
""
],
[
"Arezzo",
"Alberto",
""
],
[
"Hawkes",
"David J",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
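The polyp-segmentation record above mentions two concrete engineering steps: converting classification backbones such as VGG into fully convolutional networks, and feeding recovered depth as an extra input channel alongside RGB. The sketch below shows one generic way to do both with a torchvision VGG-16; it is not the authors' code, and the number of output classes and the handling of the new depth filters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16()  # no pretrained weights; architecture only

# 1) Accept RGB-D input: widen the first convolution from 3 to 4 channels.
old = backbone.features[0]                      # Conv2d(3, 64, kernel_size=3, padding=1)
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size, padding=old.padding)
with torch.no_grad():
    new.weight[:, :3] = old.weight              # keep RGB filters; depth filters stay randomly initialised
backbone.features[0] = new

# 2) Fully convolutional head: replace the linear classifier with convolutions
#    so the network produces a dense score map instead of a single label.
head = nn.Sequential(
    nn.Conv2d(512, 4096, kernel_size=7), nn.ReLU(inplace=True),
    nn.Conv2d(4096, 4096, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(4096, 2, kernel_size=1),          # 2 classes assumed: polyp vs background
)
fcn = nn.Sequential(backbone.features, head)

scores = fcn(torch.randn(1, 4, 224, 224))       # coarse score map; upsample to full resolution for masks
```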
ArXiv AI Paper Dump
This dataset contains 11,052 high-quality arXiv AI-related papers converted into txt format for NLP tasks. Papers were selected according to the following criteria:
- Publishing year (first version) > 2020
- Has a journal or conference record
- Listed under category cs.AI / cs.CL / cs.CV
See cs_metadata_2020.json for more info on individual papers; a minimal filtering sketch is given below. Thanks to the open efforts of the arXiv team.
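A minimal sketch of how records matching these criteria could be selected from raw arXiv metadata is shown below; the field names (versions, journal-ref, categories) and the one-JSON-record-per-line layout assumed for cs_metadata_2020.json follow the standard arXiv metadata export and are assumptions, not a documented schema.

```python
import json

def keep(record):
    """Apply the three selection criteria listed above (field names assumed)."""
    first_year = int(record["versions"][0]["created"].split()[3])  # e.g. "Thu, 14 Jan 2021 ..."
    wanted = {"cs.AI", "cs.CL", "cs.CV"}
    return bool(
        first_year > 2020
        and record.get("journal-ref")                              # has a journal / conference record
        and wanted & set(record["categories"].split())
    )

with open("cs_metadata_2020.json") as f:                           # one JSON record per line (assumed)
    papers = [json.loads(line) for line in f]

selected = [p for p in papers if keep(p)]
print(len(selected))
```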