arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2306.01742 | 2023-05-10T18:38:48Z | Beyond Negativity: Re-Analysis and Follow-Up Experiments on Hope Speech
Detection | [
"Neemesh Yadav",
"Mohammad Aflah Khan",
"Diksha Sethi",
"Raghav Sahni"
] | Health experts assert that hope plays a crucial role in enhancing
individuals' physical and mental well-being, facilitating their recovery, and
promoting restoration. Hope speech refers to comments, posts and other social
media messages that offer support, reassurance, suggestions, inspiration, and
insight. The detection of hope speech involves the analysis of such textual
content, with the aim of identifying messages that invoke positive emotions in
people. Our study aims to find computationally efficient methods for hope
speech detection that perform comparably to, or better than, existing
approaches. We also make our codebase public at
https://github.com/aflah02/Hope_Speech_Detection | [
"cs.CL",
"cs.LG"
] | false |
2305.05964 | 2023-05-10T08:16:36Z | Interpretable Multimodal Misinformation Detection with Logic Reasoning | [
"Hui Liu",
"Wenya Wang",
"Haoliang Li"
] | Multimodal misinformation on online social platforms is becoming a critical
concern due to the increased credibility and easier dissemination afforded by
multimedia content compared to traditional text-only information. While
existing multimodal detection approaches have achieved high performance, the
lack of interpretability hinders these systems' reliability and practical
deployment. Inspired by Neural-Symbolic AI, which combines the learning ability
of neural networks with the explainability of symbolic learning, we propose a
novel logic-based neural model for multimodal misinformation detection which
integrates interpretable logic clauses to express the reasoning process of the
target task. To make learning effective, we parameterize symbolic logical
elements using neural representations, which facilitate the automatic
generation and evaluation of meaningful logic clauses. Additionally, to make
our framework generalizable across diverse misinformation sources, we introduce
five meta-predicates that can be instantiated with different correlations.
Results on three public datasets (Twitter, Weibo, and Sarcasm) demonstrate the
feasibility and versatility of our model. | [
"cs.MM",
"cs.AI",
"cs.CL"
] | false |
2305.05982 | 2023-05-10T08:48:53Z | Generating medically-accurate summaries of patient-provider dialogue: A
multi-stage approach using large language models | [
"Varun Nair",
"Elliot Schumacher",
"Anitha Kannan"
] | A medical provider's summary of a patient visit serves several critical
purposes, including clinical decision-making, facilitating hand-offs between
providers, and as a reference for the patient. An effective summary is required
to be coherent and accurately capture all the medically relevant information in
the dialogue, despite the complexity of patient-generated language. Even minor
inaccuracies in visit summaries (for example, summarizing "patient does not
have a fever" when a fever is present) can be detrimental to the outcome of
care for the patient.
This paper tackles the problem of medical conversation summarization by
discretizing the task into several smaller dialogue-understanding tasks that
are sequentially built upon. First, we identify medical entities and their
affirmations within the conversation to serve as building blocks. We study
dynamically constructing few-shot prompts for tasks by conditioning on relevant
patient information and use GPT-3 as the backbone for our experiments. We also
develop GPT-derived summarization metrics to measure performance against
reference summaries quantitatively. Both our human evaluation study and metrics
for medical correctness show that summaries generated using this approach are
clinically accurate and outperform the baseline approach of summarizing the
dialogue in a zero-shot, single-prompt setting. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
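To make the prompting strategy above concrete, the sketch below dynamically assembles a few-shot prompt conditioned on the medical entities extracted from the conversation. The entity format, the example store, and the prompt wording are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: dynamic few-shot prompt construction for dialogue summarization.
def build_summary_prompt(dialogue: str, entities: list[dict],
                         example_store: list[dict], k: int = 3) -> str:
    """Condition the prompt on extracted entities plus k similar worked examples."""
    names = {e["name"] for e in entities}
    # Pick the k stored (dialogue, summary) pairs sharing the most entities.
    shots = sorted(example_store,
                   key=lambda ex: len(names & {e["name"] for e in ex["entities"]}),
                   reverse=True)[:k]
    parts = ["Summarize the patient-provider dialogue. Preserve every medical "
             "entity and its affirmation status.\n"]
    for ex in shots:
        parts.append(f"Dialogue:\n{ex['dialogue']}\nSummary:\n{ex['summary']}\n")
    entity_list = ", ".join(f"{e['name']} ({e['status']})" for e in entities)
    parts.append(f"Dialogue:\n{dialogue}\nKnown entities: {entity_list}\nSummary:\n")
    return "\n".join(parts)

prompt = build_summary_prompt("Provider: Any fever? Patient: No, none.",
                              [{"name": "fever", "status": "absent"}], [], k=3)
```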
2305.06485 | 2023-05-10T22:29:12Z | Multimodal Contextualized Plan Prediction for Embodied Task Completion | [
"Mert İnan",
"Aishwarya Padmakumar",
"Spandana Gella",
"Patrick Lange",
"Dilek Hakkani-Tur"
] | Task planning is an important component of traditional robotics systems
enabling robots to compose fine-grained skills to perform more complex tasks.
Recent work building systems for translating natural language to executable
actions for task completion in simulated embodied agents is focused on directly
predicting low-level action sequences that would be expected to be directly
executable by a physical robot. In this work, we instead focus on predicting a
higher level plan representation for one such embodied task completion dataset
- TEACh, under the assumption that techniques for high-level plan prediction
from natural language are expected to be more transferable to physical robot
systems. We demonstrate that better plans can be predicted using multimodal
context, and that plan prediction and plan execution modules are likely
dependent on each other and hence it may not be ideal to fully decouple them.
Further, we benchmark execution of oracle plans to quantify the scope for
improvement in plan prediction models. | [
"cs.RO",
"cs.AI",
"cs.CL",
"cs.HC"
] | false |
2305.11061 | 2023-05-10T10:01:36Z | SPSQL: Step-by-step Parsing Based Framework for Text-to-SQL Generation | [
"Ran Shen",
"Gang Sun",
"Hao Shen",
"Yiling Li",
"Liangfeng Jin",
"Han Jiang"
] | Converting text into the structured query language (Text2SQL) is a research
hotspot in the field of natural language processing (NLP), which has broad
application prospects. In the era of big data, the use of databases has
penetrated all walks of life, in which the collected data is large in scale,
diverse in variety, and wide in scope, making the data query cumbersome and
inefficient, and placing higher demands on Text2SQL models. In
practical applications, current mainstream end-to-end Text2SQL models are not
only difficult to build, owing to their complex structure and demanding
training-data requirements, but also difficult to tune because of their massive
number of parameters. In addition, model accuracy often falls short of the
desired level. To address these issues, this paper proposes a pipelined
Text2SQL method: SPSQL. This
method disassembles the Text2SQL task into four subtasks--table selection,
column selection, SQL generation, and value filling, which can be converted
into a text classification problem, a sequence labeling problem, and two text
generation problems, respectively. Then, we construct data formats of different
subtasks based on existing data and improve the accuracy of the overall model
by improving the accuracy of each submodel. We also use the named entity
recognition module and data augmentation to optimize the overall model. We
construct the dataset based on the marketing business data of the State Grid
Corporation of China. Experiments demonstrate our proposed method achieves the
best performance compared with the end-to-end method and other pipeline
methods. | [
"cs.CL",
"cs.AI",
"cs.DB"
] | false |
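A minimal sketch of the four-stage pipeline described above, with each stage stubbed out as an independent model call. The function names and interfaces are assumptions for illustration; the paper's actual submodels (a text classifier, a sequence labeler, and two text generators) are not public.

```python
# Hypothetical skeleton of a pipelined Text2SQL system in the spirit of SPSQL.
# Each stage would be backed by its own trained model; here they are keyword stubs.

def select_tables(question: str, schema: dict) -> list[str]:
    """Stage 1 (text classification): choose the relevant tables."""
    return [t for t in schema if any(tok in question.lower() for tok in t.lower().split("_"))]

def select_columns(question: str, tables: list[str], schema: dict) -> list[str]:
    """Stage 2 (sequence labeling): choose relevant columns from the chosen tables."""
    return [f"{t}.{c}" for t in tables for c in schema[t] if c.lower() in question.lower()]

def generate_sql_skeleton(tables: list[str], columns: list[str]) -> str:
    """Stage 3 (text generation): produce an SQL skeleton with value placeholders."""
    return f"SELECT {', '.join(columns)} FROM {', '.join(tables)} WHERE {columns[0]} = '<value>'"

def fill_values(sql: str, question: str) -> str:
    """Stage 4 (text generation): fill placeholders with literals from the question."""
    return sql  # a real system would extract literals, e.g. via NER

schema = {"orders": ["id", "customer", "amount"], "customers": ["id", "name"]}
q = "What is the total amount of orders for each customer?"
tables = select_tables(q, schema)
cols = select_columns(q, tables, schema)
print(fill_values(generate_sql_skeleton(tables, cols), q))
```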
2305.11070 | 2023-05-10T10:57:21Z | Enriching language models with graph-based context information to better
understand textual data | [
"Albert Roethel",
"Maria Ganzha",
"Anna Wróblewska"
] | A considerable number of texts encountered daily are somehow connected with
each other. For example, Wikipedia articles refer to other articles via
hyperlinks, scientific papers relate to others via citations or (co)authors,
while tweets relate via users that follow each other or reshare content. Hence,
a graph-like structure can represent existing connections and be seen as
capturing the "context" of the texts. The question thus arises if extracting
and integrating such context information into a language model might help
facilitate a better automated understanding of the text. In this study, we
experimentally demonstrate that incorporating graph-based contextualization
into the BERT model enhances its performance on an example classification
task. Specifically, on the Pubmed dataset, we observed a reduction in error from
8.51% to 7.96%, while increasing the number of parameters just by 1.6%.
Our source code is available at https://github.com/tryptofanik/gc-bert | [
"cs.CL",
"cs.AI",
"cs.LG",
"cs.NE"
] | false |
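One simple way to realize the idea above is to concatenate a document's BERT embedding with an aggregate of its graph neighbors' embeddings before classification. This PyTorch sketch assumes precomputed neighbor embeddings; it is an illustrative variant, not the released gc-bert architecture.

```python
import torch
import torch.nn as nn

class GraphContextClassifier(nn.Module):
    """Classify a node's text using its own embedding plus mean-pooled neighbor context."""
    def __init__(self, hidden: int = 768, num_classes: int = 3):
        super().__init__()
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, node_emb: torch.Tensor, neighbor_embs: torch.Tensor) -> torch.Tensor:
        # node_emb: (hidden,) e.g. the [CLS] vector from BERT
        # neighbor_embs: (num_neighbors, hidden) embeddings of linked documents
        context = neighbor_embs.mean(dim=0)           # aggregate the graph context
        return self.head(torch.cat([node_emb, context]))

model = GraphContextClassifier()
logits = model(torch.randn(768), torch.randn(5, 768))
```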
2305.06087 | 2023-05-10T12:10:51Z | A Glimpse in ChatGPT Capabilities and its impact for AI research | [
"Frank Joublin",
"Antonello Ceravola",
"Joerg Deigmoeller",
"Michael Gienger",
"Mathias Franzius",
"Julian Eggert"
] | Large language models (LLMs) have recently become a popular topic in the
field of Artificial Intelligence (AI) research, with companies such as Google,
Amazon, Facebook, and Apple (GAFA), as well as Tesla, investing heavily in their
development. These models are trained on massive amounts of data and can be
used for a wide range of tasks, including language translation, text
generation, and question answering. However, the computational resources
required to train and run these models are substantial, and the cost of
hardware and electricity can be prohibitive for research labs that do not have
the funding and resources of the GAFA. In this paper, we will examine the
impact of LLMs on AI research. The pace at which such models are generated as
well as the range of domains covered is an indication of the trend which not
only the public but also the scientific community is currently experiencing. We
give some examples of how to use such models in research, focusing on
GPT-3.5/ChatGPT-3.5 and ChatGPT-4 in their current state, and show that such a range
of capabilities in a single system is a strong sign of approaching general
intelligence. Innovations integrating such models will also expand along the
maturation of such AI systems and exhibit unforeseeable applications that will
have important impacts on several aspects of our societies. | [
"cs.AI",
"cs.CL",
"cs.HC",
"cs.LG",
"cs.RO",
"I.2; I.7"
] | false |
2305.06429 | 2023-05-10T19:31:25Z | Mispronunciation Detection of Basic Quranic Recitation Rules using Deep
Learning | [
"Ahmad Al Harere",
"Khloud Al Jallad"
] | In Islam, readers must apply a set of pronunciation rules called Tajweed
rules to recite the Quran in the same way that the angel Jibrael taught the
Prophet, Muhammad. The traditional process of learning the correct application
of these rules requires a human who must have a license and great experience to
detect mispronunciation. Due to the increasing number of Muslims around the
world, there are not enough Tajweed teachers nowadays to support daily
recitation practice for every Muslim. Therefore, much work has been done on
automatic mispronunciation detection for Tajweed rules, to help readers recite
the Quran correctly more easily and in less time than traditional learning
methods. All previous works share three common problems. First, most of them
focused on machine learning algorithms only. Second, they used private datasets
with no benchmark to compare with. Third, they did not take into consideration
the sequential nature of the input optimally, although the speech signal is a
time series. To overcome these problems, we propose a solution that consists of
Mel-Frequency Cepstral Coefficient (MFCC) features with Long Short-Term Memory
(LSTM) neural networks which use the time series, to detect mispronunciation in
Tajweed rules. In addition, our experiments were performed on a public dataset,
the QDAT dataset, which contains more than 1500 voices of the correct and
incorrect recitation of three Tajweed rules (Separate stretching, Tight Noon,
and Hide). To the best of our knowledge, the QDAT dataset has not been used by
any research paper yet. We compared the performance of the proposed LSTM model
with traditional machine learning algorithms used in SoTA. The LSTM model with
time series showed clear superiority over traditional machine learning. The
accuracy achieved by LSTM on the QDAT dataset was 96%, 95%, and 96% for the
three rules (Separate stretching, Tight Noon, and Hide), respectively. | [
"cs.SD",
"cs.AI",
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
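A minimal sketch of the MFCC-plus-LSTM recipe the abstract describes, using librosa for features and PyTorch for the classifier. The hyperparameters (13 MFCCs, hidden size, two classes per rule) are illustrative assumptions, not the paper's exact configuration.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_sequence(wav_path: str, n_mfcc: int = 13) -> torch.Tensor:
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return torch.tensor(mfcc.T, dtype=torch.float32)        # (frames, n_mfcc)

class TajweedLSTM(nn.Module):
    """Binary correct/incorrect classifier over an MFCC frame sequence."""
    def __init__(self, n_mfcc: int = 13, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, frames, n_mfcc)
        _, (h_n, _) = self.lstm(x)                          # use the final hidden state
        return self.head(h_n[-1])

model = TajweedLSTM()
logits = model(torch.randn(4, 200, 13))                     # 4 clips, 200 frames each
```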
2305.07034 | 2023-05-10T18:40:01Z | Quran Recitation Recognition using End-to-End Deep Learning | [
"Ahmad Al Harere",
"Khloud Al Jallad"
] | The Quran is the holy scripture of Islam, and its recitation is an important
aspect of the religion. Recognizing the recitation of the Holy Quran
automatically is a challenging task due to its unique recitation rules, which
do not apply to ordinary speech. A lot of research has been done in this
domain, but previous works have detected recitation errors as a classification
task or used traditional automatic speech recognition (ASR). In this paper, we
proposed a novel end-to-end deep learning model for recognizing the recitation
of the Holy Quran. The proposed model is a CNN-Bidirectional GRU encoder that
uses CTC as an objective function, and a character-based decoder which is a
beam search decoder. Moreover, all previous works were done on small private
datasets consisting of short verses and a few chapters of the Holy Quran. As a
result of using private datasets, no comparisons were possible. To overcome this
issue, we used a public dataset that has recently been published (Ar-DAD) and
contains about 37 chapters that were recited by 30 reciters, with different
recitation speeds and different types of pronunciation rules. The proposed
model performance was evaluated using the most common evaluation metrics in
speech recognition, word error rate (WER), and character error rate (CER). The
results were 8.34% WER and 2.42% CER. We hope this research will be a baseline
for comparisons with future research on this public new dataset (Ar-DAD). | [
"eess.AS",
"cs.AI",
"cs.CL",
"cs.LG",
"cs.SD"
] | false |
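The encoder-decoder recipe above can be sketched as a CNN front end, a bidirectional GRU, and a CTC objective; decoding would run beam search over the character distribution. This PyTorch sketch uses invented layer sizes, vocabulary size, and downsampling factor, so it illustrates the structure rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class RecitationASR(nn.Module):
    """CNN + bidirectional GRU acoustic model trained with CTC."""
    def __init__(self, n_feats: int = 80, vocab: int = 40, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Conv1d(n_feats, hidden, kernel_size=3, stride=2, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab + 1)          # +1 for the CTC blank

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (batch, time, n_feats)
        x = self.cnn(feats.transpose(1, 2)).transpose(1, 2)  # downsample in time
        x, _ = self.gru(x)
        return self.out(x).log_softmax(dim=-1)               # (batch, time', vocab+1)

model = RecitationASR()
log_probs = model(torch.randn(2, 300, 80))                   # -> (2, 150, 41)
ctc = nn.CTCLoss(blank=40)
targets = torch.randint(0, 40, (2, 20))
loss = ctc(log_probs.transpose(0, 1),                        # CTC expects (time, batch, C)
           targets,
           input_lengths=torch.full((2,), 150),
           target_lengths=torch.full((2,), 20))
```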
2305.06934 | 2023-05-10T08:16:46Z | Humans are Still Better than ChatGPT: Case of the IEEEXtreme Competition | [
"Anis Koubaa",
"Basit Qureshi",
"Adel Ammar",
"Zahid Khan",
"Wadii Boulila",
"Lahouari Ghouti"
] | Since the release of ChatGPT, numerous studies have highlighted the
remarkable performance of ChatGPT, which often rivals or even surpasses human
capabilities in various tasks and domains. However, this paper presents a
contrasting perspective by demonstrating an instance where human performance
excels in typical tasks suited for ChatGPT, specifically in the domain of
computer programming. As a benchmark, we utilize the IEEExtreme Challenge, a
prestigious annual international programming competition encompassing a wide
range of problems of varying complexity. To conduct a thorough
evaluation, we selected and executed a diverse set of 102 challenges, drawn
from five distinct IEEExtreme editions, using three major programming
languages: Python, Java, and C++. Our empirical analysis provides evidence that
contrary to popular belief, human programmers maintain a competitive edge over
ChatGPT in certain aspects of problem-solving within the programming context.
In fact, we found that the average score obtained by ChatGPT on the set of
IEEExtreme programming problems is 3.9 to 5.8 times lower than the average
human score, depending on the programming language. This paper elaborates on
these findings, offering critical insights into the limitations and potential
areas of improvement for AI-based language models like ChatGPT. | [
"cs.SE",
"cs.AI",
"cs.CL",
"cs.CY",
"cs.LG",
"cs.PL"
] | false |
2305.05882 | 2023-05-10T04:02:08Z | Deep Partial Multi-Label Learning with Graph Disambiguation | [
"Haobo Wang",
"Shisong Yang",
"Gengyu Lyu",
"Weiwei Liu",
"Tianlei Hu",
"Ke Chen",
"Songhe Feng",
"Gang Chen"
] | In partial multi-label learning (PML), each data example is equipped with a
candidate label set, which consists of multiple ground-truth labels and other
false-positive labels. Recently, graph-based methods, which demonstrate a good
ability to estimate accurate confidence scores from candidate labels, have been
prevalent to deal with PML problems. However, we observe that existing
graph-based PML methods typically adopt linear multi-label classifiers and thus
fail to achieve superior performance. In this work, we attempt to remove
several obstacles to extending them to deep models and propose a novel deep
Partial multi-Label model with grAph-disambIguatioN (PLAIN). Specifically, we
introduce the instance-level and label-level similarities to recover label
confidences as well as exploit label dependencies. At each training epoch,
labels are propagated on the instance and label graphs to produce relatively
accurate pseudo-labels; then, we train the deep model to fit the numerical
labels. Moreover, we provide a careful analysis of the risk functions to
guarantee the robustness of the proposed model. Extensive experiments on
various synthetic datasets and three real-world PML datasets demonstrate that
PLAIN achieves significantly superior results to state-of-the-art methods. | [
"cs.LG"
] | false |
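As a rough illustration of the graph-disambiguation step described above, here is label propagation over an instance-similarity graph to refine candidate-label confidences. The kNN graph construction and the propagation coefficient are illustrative assumptions; PLAIN additionally propagates on a label-level graph and trains a deep model on the resulting pseudo-labels.

```python
import numpy as np

def propagate_label_confidence(X, Y_cand, alpha=0.5, k=5, iters=10):
    """Refine candidate-label confidences by propagation on a kNN instance graph.

    X: (n, d) features; Y_cand: (n, L) binary candidate-label matrix.
    Returns an (n, L) confidence matrix supported only on candidate labels.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):                                       # symmetric kNN graph
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j])
    S = W / W.sum(1, keepdims=True)                          # row-normalized transitions
    F = Y_cand / np.maximum(Y_cand.sum(1, keepdims=True), 1)
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y_cand             # propagate, anchored to candidates
        F = F * Y_cand                                       # confidence lives on the candidate set
        F /= np.maximum(F.sum(1, keepdims=True), 1e-12)
    return F
```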
2305.06090 | 2023-05-10T12:17:52Z | XTab: Cross-table Pretraining for Tabular Transformers | [
"Bingzhao Zhu",
"Xingjian Shi",
"Nick Erickson",
"Mu Li",
"George Karypis",
"Mahsa Shoaran"
] | The success of self-supervised learning in computer vision and natural
language processing has motivated pretraining methods on tabular data. However,
most existing tabular self-supervised learning models fail to leverage
information across multiple data tables and cannot generalize to new tables. In
this work, we introduce XTab, a framework for cross-table pretraining of
tabular transformers on datasets from various domains. We address the challenge
of inconsistent column types and quantities among tables by utilizing
independent featurizers and using federated learning to pretrain the shared
component. Tested on 84 tabular prediction tasks from the OpenML-AutoML
Benchmark (AMLB), we show that (1) XTab consistently boosts the
generalizability, learning speed, and performance of multiple tabular
transformers, (2) by pretraining FT-Transformer via XTab, we achieve superior
performance to other state-of-the-art tabular deep learning models on various
tasks such as regression, binary, and multiclass classification. | [
"cs.LG"
] | false |
2305.06109 | 2023-05-10T12:53:18Z | XMI-ICU: Explainable Machine Learning Model for Pseudo-Dynamic
Prediction of Mortality in the ICU for Heart Attack Patients | [
"Munib Mesinovic",
"Peter Watkinson",
"Tingting Zhu"
] | Heart attacks remain one of the greatest contributors to mortality in the
United States and globally. Patients admitted to the intensive care unit (ICU)
with diagnosed heart attack (myocardial infarction or MI) are at higher risk of
death. In this study, we use two retrospective cohorts extracted from the eICU
and MIMIC-IV databases, to develop a novel pseudo-dynamic machine learning
framework for mortality prediction in the ICU with interpretability and
clinical risk analysis. The method provides accurate predictions for ICU
patients up to 24 hours before the event and provides time-resolved
interpretability results. The performance of the framework relying on extreme
gradient boosting was evaluated on a held-out test set from eICU, and
externally validated on the MIMIC-IV cohort using the most important features
identified by time-resolved Shapley values, achieving an AUC of 91.0 (balanced
accuracy of 82.3) for 6-hour prediction of mortality. We show that
our framework successfully leverages time-series physiological measurements by
translating them into stacked static prediction problems to be robustly
predictive through time in the ICU stay and can offer clinical insight from
time-resolved interpretability. | [
"cs.LG"
] | false |
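The gradient-boosting-plus-Shapley workflow described above is straightforward to reproduce in miniature with the xgboost and shap libraries. This sketch uses synthetic data and default hyperparameters purely for illustration.

```python
import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in for time-windowed ICU features (vitals, labs) and mortality labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="auc")
model.fit(X[:400], y[:400])

# Shapley values give per-feature attributions for each prediction window.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[400:])
top_features = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print("most important feature indices:", top_features[:3])
```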
2305.05816 | 2023-05-10T00:09:07Z | Best-Effort Adaptation | [
"Pranjal Awasthi",
"Corinna Cortes",
"Mehryar Mohri"
] | We study a problem of best-effort adaptation motivated by several
applications and considerations, which consists of determining an accurate
predictor for a target domain, for which a moderate number of labeled samples
is available, while leveraging information from another domain for which
substantially more labeled samples are at one's disposal. We present a new and
general discrepancy-based theoretical analysis of sample reweighting methods,
including bounds holding uniformly over the weights. We show how these bounds
can guide the design of learning algorithms that we discuss in detail. We
further show that our learning guarantees and algorithms provide improved
solutions for standard domain adaptation problems, for which few labeled data
or none are available from the target domain. We finally report the results of
a series of experiments demonstrating the effectiveness of our best-effort
adaptation and domain adaptation algorithms, as well as comparisons with
several baselines. We also discuss how our analysis can benefit the design of
principled solutions for fine-tuning. | [
"cs.LG",
"stat.ML"
] | false |
2305.05827 | 2023-05-10T01:11:35Z | Inclusive FinTech Lending via Contrastive Learning and Domain Adaptation | [
"Xiyang Hu",
"Yan Huang",
"Beibei Li",
"Tian Lu"
] | FinTech lending (e.g., micro-lending) has played a significant role in
facilitating financial inclusion. It has reduced processing times and costs,
enhanced the user experience, and made it possible for people to obtain loans
who may not have qualified for credit from traditional lenders. However, there
are concerns about the potentially biased algorithmic decision-making during
loan screening. Machine learning algorithms used to evaluate credit quality can
be influenced by representation bias in the training data, as we only have
access to the default outcome labels of approved loan applications, for which
the borrowers' socioeconomic characteristics are better than those of rejected
ones. In this case, the model trained on the labeled data performs well on the
historically approved population, but does not generalize well to borrowers of
low socioeconomic background. In this paper, we investigate the problem of
representation bias in loan screening for a real-world FinTech lending
platform. We propose a new Transformer-based sequential loan screening model
with self-supervised contrastive learning and domain adaptation to tackle this
challenging issue. We use contrastive learning to train our feature extractor
on unapproved (unlabeled) loan applications and use domain adaptation to
generalize the performance of our label predictor. We demonstrate the
effectiveness of our model through extensive experimentation in the real-world
micro-lending setting. Our results show that our model significantly promotes
the inclusiveness of funding decisions, while also improving loan screening
accuracy and profit by 7.10% and 8.95%, respectively. We also show that
incorporating the test data into contrastive learning and domain adaptation and
labeling a small ratio of test data can further boost model performance. | [
"cs.LG",
"cs.CY"
] | false |
2305.05828 | 2023-05-10T01:12:11Z | Convergence of a Normal Map-based Prox-SGD Method under the KL
Inequality | [
"Andre Milzarek",
"Junwen Qiu"
] | In this paper, we present a novel stochastic normal map-based algorithm
($\mathsf{norM}\text{-}\mathsf{SGD}$) for nonconvex composite-type optimization
problems and discuss its convergence properties. Using a time window-based
strategy, we first analyze the global convergence behavior of
$\mathsf{norM}\text{-}\mathsf{SGD}$, showing that every accumulation
point of the generated sequence of iterates $\{\boldsymbol{x}^k\}_k$
corresponds to a stationary point almost surely and in an expectation sense.
The obtained results hold under standard assumptions and extend the more
limited convergence guarantees of the basic proximal stochastic gradient
method. In addition, based on the well-known Kurdyka-{\L}ojasiewicz (KL)
analysis framework, we provide novel point-wise convergence results for the
iterates $\{\boldsymbol{x}^k\}_k$ and derive convergence rates that depend on
the underlying KL exponent $\boldsymbol{\theta}$ and the step size dynamics
$\{\alpha_k\}_k$. Specifically, for the popular step size scheme
$\alpha_k=\mathcal{O}(1/k^\gamma)$, $\gamma \in (\frac23,1]$, (almost sure)
rates of the form $\|\boldsymbol{x}^k-\boldsymbol{x}^*\| = \mathcal{O}(1/k^p)$,
$p \in (0,\frac12)$, can be established. The obtained rates are faster than
related and existing convergence rates for $\mathsf{SGD}$ and improve on the
non-asymptotic complexity bounds for $\mathsf{norM}\text{-}\mathsf{SGD}$. | [
"math.OC",
"cs.LG",
"90C26, 90C15"
] | false |
2305.05920 | 2023-05-10T06:17:50Z | Fast Distributed Inference Serving for Large Language Models | [
"Bingyang Wu",
"Yinmin Zhong",
"Zili Zhang",
"Gang Huang",
"Xuanzhe Liu",
"Xin Jin"
] | Large language models (LLMs) power a new generation of interactive AI
applications exemplified by ChatGPT. The interactive nature of these
applications demands low job completion time (JCT) for model inference. Existing
LLM serving systems use run-to-completion processing for inference jobs, which
suffers from head-of-line blocking and long JCT. We present FastServe, a
distributed inference serving system for LLMs. FastServe exploits the
autoregressive pattern of LLM inference to enable preemption at the granularity
of each output token. FastServe uses preemptive scheduling to minimize JCT with
a novel skip-join Multi-Level Feedback Queue scheduler. Based on the new semi
information-agnostic setting of LLM inference, the scheduler leverages the
input length information to assign an appropriate initial queue for each
arriving job to join. Queues with higher priority than the joined queue are
skipped to reduce demotions. We design an efficient GPU memory management
mechanism that proactively offloads and uploads intermediate states between GPU
memory and host memory for LLM inference. We build a system prototype of
FastServe based on NVIDIA FasterTransformer. Experimental results show that
compared to the state-of-the-art solution Orca, FastServe improves the average
and tail JCT by up to 5.1$\times$ and 6.4$\times$, respectively. | [
"cs.LG",
"cs.DC"
] | false |
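A toy sketch of the skip-join idea described above: a multi-level feedback queue where an arriving job's initial queue is chosen from its input length rather than always starting at the highest priority, which reduces later demotions. The queue thresholds and the demotion rule here are invented for illustration and do not reproduce FastServe's scheduler.

```python
import heapq
from dataclasses import dataclass, field

QUANTA = [1, 2, 4, 8]            # per-level token budgets (illustrative)

@dataclass(order=True)
class Job:
    priority: int
    arrival: int
    name: str = field(compare=False)
    tokens_left: int = field(compare=False, default=0)

def initial_level(input_len: int) -> int:
    """Skip-join: longer prompts imply longer first iterations, so start lower."""
    for level, threshold in enumerate([128, 512, 2048]):
        if input_len <= threshold:
            return level
    return len(QUANTA) - 1

def run(jobs):
    ready = [Job(initial_level(j["input_len"]), i, j["name"], j["output_len"])
             for i, j in enumerate(jobs)]
    heapq.heapify(ready)
    while ready:
        job = heapq.heappop(ready)                # always serve the highest-priority level
        job.tokens_left -= QUANTA[job.priority]   # generate up to this level's token budget
        if job.tokens_left > 0:                   # demote unfinished jobs one level
            job.priority = min(job.priority + 1, len(QUANTA) - 1)
            heapq.heappush(ready, job)
        else:
            print(f"{job.name} finished")

run([{"name": "short", "input_len": 64, "output_len": 3},
     {"name": "long", "input_len": 1024, "output_len": 10}])
```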
2305.06055 | 2023-05-10T11:15:22Z | A Classification of Feedback Loops and Their Relation to Biases in
Automated Decision-Making Systems | [
"Nicolò Pagan",
"Joachim Baumann",
"Ezzat Elokda",
"Giulia De Pasquale",
"Saverio Bolognani",
"Anikó Hannák"
] | Prediction-based decision-making systems are becoming increasingly prevalent
in various domains. Previous studies have demonstrated that such systems are
vulnerable to runaway feedback loops, e.g., when police are repeatedly sent
back to the same neighborhoods regardless of the actual rate of criminal
activity, which exacerbate existing biases. In practice, the automated
decisions have dynamic feedback effects on the system itself that can
perpetuate over time, making it difficult for short-sighted design choices to
control the system's evolution. While researchers started proposing longer-term
solutions to prevent adverse outcomes (such as bias towards certain groups),
these interventions largely depend on ad hoc modeling assumptions and a
rigorous theoretical understanding of the feedback dynamics in ML-based
decision-making systems is currently missing. In this paper, we use the
language of dynamical systems theory, a branch of applied mathematics that
deals with the analysis of the interconnection of systems with dynamic
behaviors, to rigorously classify the different types of feedback loops in the
ML-based decision-making pipeline. By reviewing existing scholarly work, we
show that this classification covers many examples discussed in the algorithmic
fairness community, thereby providing a unifying and principled framework to
study feedback loops. By qualitative analysis, and through a simulation example
of recommender systems, we show which specific types of ML biases are affected
by each type of feedback loop. We find that the existence of feedback loops in
the ML-based decision-making pipeline can perpetuate, reinforce, or even reduce
ML biases. | [
"cs.CY",
"cs.LG"
] | false |
2305.06058 | 2023-05-10T11:24:27Z | Compressing neural network by tensor network with exponentially fewer
variational parameters | [
"Yong Qing",
"Peng-Fei Zhou",
"Ke Li",
"Shi-Ju Ran"
] | A neural network (NN) designed for challenging machine learning tasks is in
general a highly nonlinear mapping that contains massive numbers of variational
parameters. The high complexity of an NN, if unbounded or unconstrained, might
unpredictably cause severe issues, including over-fitting, loss of
generalization power, and unbearable hardware cost. In this work, we propose
a general compression scheme that significantly reduces the variational
parameters of an NN by encoding them in multi-layer tensor networks (TN's) that
contain exponentially fewer free parameters. Superior compression performance
of our scheme is demonstrated on several widely-recognized NN's (FC-2, LeNet-5,
and VGG-16) and datasets (MNIST and CIFAR-10), surpassing the state-of-the-art
method based on shallow tensor networks. For instance, about 10 million
parameters in the three convolutional layers of VGG-16 are compressed in TN's
with just $632$ parameters, while the testing accuracy on CIFAR-10 is
surprisingly improved from $81.14\%$ by the original NN to $84.36\%$ after
compression. Our work suggests the TN as an exceptionally efficient mathematical
structure for representing the variational parameters of NN's, one that exploits
their compressibility far better than simple multi-way arrays. | [
"cs.LG",
"cs.AI"
] | false |
2305.06102 | 2023-05-10T12:42:31Z | Towards Better Graph Representation Learning with Parameterized
Decomposition & Filtering | [
"Mingqi Yang",
"Wenjie Feng",
"Yanming Shen",
"Bryan Hooi"
] | Proposing an effective and flexible matrix to represent a graph is a
fundamental challenge that has been explored from multiple perspectives, e.g.,
filtering in Graph Fourier Transforms. In this work, we develop a novel and
general framework which unifies many existing GNN models from the view of
parameterized decomposition and filtering, and show how it helps to enhance the
flexibility of GNNs while alleviating the smoothness and amplification issues
of existing models. Essentially, we show that the extensively studied spectral
graph convolutions with learnable polynomial filters are constrained variants
of this formulation, and releasing these constraints enables our model to
express the desired decomposition and filtering simultaneously. Based on this
generalized framework, we develop models that are simple in implementation but
achieve significant improvements and computational efficiency on a variety of
graph learning tasks. Code is available at https://github.com/qslim/PDF. | [
"cs.LG",
"cs.AI"
] | false |
2305.06140 | 2023-05-10T13:42:56Z | CrudeBERT: Applying Economic Theory towards fine-tuning
Transformer-based Sentiment Analysis Models to the Crude Oil Market | [
"Himmet Kaplan",
"Ralf-Peter Mundani",
"Heiko Rölke",
"Albert Weichselbraun"
] | Predicting market movements based on the sentiment of news media has a long
tradition in data analysis. With advances in natural language processing,
transformer architectures have emerged that enable contextually aware sentiment
classification. Nevertheless, current methods built for the general financial
market such as FinBERT cannot distinguish asset-specific value-driving factors.
This paper addresses this shortcoming by presenting a method that identifies
and classifies events that impact supply and demand in the crude oil markets
within a large corpus of relevant news headlines. We then introduce CrudeBERT,
a new sentiment analysis model that draws upon these events to contextualize
and fine-tune FinBERT, thereby yielding improved sentiment classifications for
headlines related to the crude oil futures market. An extensive evaluation
demonstrates that CrudeBERT outperforms proprietary and open-source solutions
in the domain of crude oil. | [
"cs.IR",
"cs.LG",
"H.3; H.4; I.2.7"
] | false |
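Fine-tuning a domain sentiment model in the way the abstract outlines can be sketched with the Hugging Face transformers API. The public FinBERT checkpoint, the two-label scheme, the toy headlines, and the training hyperparameters below are illustrative choices, not CrudeBERT's actual data or setup.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder headlines labeled by supply/demand-aware sentiment (0 = neg, 1 = pos).
data = Dataset.from_dict({
    "text": ["OPEC announces production cut", "US crude inventories surge"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForSequenceClassification.from_pretrained(
    "ProsusAI/finbert", num_labels=2, ignore_mismatched_sizes=True)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="crude-sentiment", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data.map(tokenize, batched=True),
)
trainer.train()
```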
2305.06249 | 2023-05-10T15:32:22Z | Deep Reinforcement Learning Based Resource Allocation for Cloud Native
Wireless Network | [
"Lin Wang",
"Jiasheng Wu",
"Yue Gao",
"Jingjing Zhang"
] | Cloud native technology has revolutionized beyond-5G and 6G communication
networks, offering unprecedented levels of operational automation, flexibility,
and adaptability. However, the vast array of cloud native services and
applications presents a new challenge in resource allocation for dynamic cloud
computing environments. To tackle this challenge, we investigate a cloud native
wireless architecture that employs container-based virtualization to enable
flexible service deployment. We then study two representative use cases:
network slicing and Multi-Access Edge Computing. To optimize resource
allocation in these scenarios, we leverage deep reinforcement learning
techniques and introduce two model-free algorithms capable of monitoring the
network state and dynamically training allocation policies. We validate the
effectiveness of our algorithms in a testbed developed using Free5gc. Our
findings demonstrate significant improvements in network efficiency,
underscoring the potential of our proposed techniques in unlocking the full
potential of cloud native wireless networks. | [
"cs.NI",
"cs.LG"
] | false |
2305.06398 | 2023-05-10T18:16:04Z | Towards Scalable Adaptive Learning with Graph Neural Networks and
Reinforcement Learning | [
"Jean Vassoyan",
"Jill-Jênn Vie",
"Pirmin Lemberger"
] | Adaptive learning is an area of educational technology that consists in
delivering personalized learning experiences to address the unique needs of
each learner. An important subfield of adaptive learning is learning path
personalization: it aims at designing systems that recommend sequences of
educational activities to maximize students' learning outcomes. Many machine
learning approaches have already demonstrated significant results in a variety
of contexts related to learning path personalization. However, most of them
were designed for very specific settings and are not very reusable. This is
accentuated by the fact that they often rely on non-scalable models, which are
unable to integrate new elements after being trained on a specific set of
educational resources. In this paper, we introduce a flexible and scalable
approach towards the problem of learning path personalization, which we
formalize as a reinforcement learning problem. Our model is a sequential
recommender system based on a graph neural network, which we evaluate on a
population of simulated learners. Our results demonstrate that it can learn to
make good recommendations in the small-data regime. | [
"cs.LG",
"cs.AI"
] | false |
2305.06442 | 2023-05-10T20:24:38Z | Data, Trees, and Forests -- Decision Tree Learning in K-12 Education | [
"Tilman Michaeli",
"Stefan Seegerer",
"Lennard Kerber",
"Ralf Romeike"
] | As a consequence of the increasing influence of machine learning on our
lives, everyone needs competencies not only to understand the corresponding
phenomena, but also to get involved in shaping our world and to make informed
decisions regarding its influence on our society. Therefore, in K-12 education, students
need to learn about core ideas and principles of machine learning. However, for
this target group, achieving all of the aforementioned goals presents an
enormous challenge. To this end, we present a teaching concept that combines a
playful and accessible unplugged approach focusing on conceptual understanding
with empowering students to actively apply machine learning methods and reflect
on their influence on society, building upon decision tree learning. | [
"cs.CY",
"cs.LG",
"K.3.2; I.2.6"
] | false |
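For teachers who want to move from the unplugged activities to hands-on application, a decision tree can be trained in a few lines with scikit-learn; the toy weather data below is invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [temperature_celsius, is_raining] -> play outside? (1 = yes)
X = [[25, 0], [18, 1], [30, 0], [10, 1], [22, 0], [15, 1]]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["temperature", "is_raining"]))
print(tree.predict([[20, 0]]))   # -> [1]
```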
2305.06473 | 2023-05-10T21:39:27Z | Securing Distributed SGD against Gradient Leakage Threats | [
"Wenqi Wei",
"Ling Liu",
"Jingya Zhou",
"Ka-Ho Chow",
"Yanzhao Wu"
] | This paper presents a holistic approach to gradient leakage resilient
distributed Stochastic Gradient Descent (SGD). First, we analyze two types of
strategies for privacy-enhanced federated learning: (i) gradient pruning with
random selection or low-rank filtering and (ii) gradient perturbation with
additive random noise or differential privacy noise. We analyze the inherent
limitations of these approaches and their underlying impact on privacy
guarantee, model accuracy, and attack resilience. Next, we present a gradient
leakage resilient approach to securing distributed SGD in federated learning,
with differential privacy controlled noise as the tool. Unlike conventional
methods with the per-client federated noise injection and fixed noise parameter
strategy, our approach keeps track of the trend of per-example gradient
updates. It makes adaptive noise injection closely aligned throughout the
federated model training. Finally, we provide an empirical privacy analysis on
the privacy guarantee, model utility, and attack resilience of the proposed
approach. Extensive evaluation using five benchmark datasets demonstrates that
our gradient leakage resilient approach can outperform the state-of-the-art
methods with competitive accuracy performance, strong differential privacy
guarantee, and high resilience against gradient leakage attacks. The code
associated with this paper can be found at:
https://github.com/git-disl/Fed-alphaCDP. | [
"cs.LG",
"cs.CR"
] | false |
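The per-example, adaptive noise idea above builds on the standard clip-then-add-Gaussian-noise step used in differentially private SGD. Below is a minimal fixed-parameter version of that step in PyTorch for one model update; the paper's adaptive tracking of per-example gradient trends is not reproduced.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=0.1, clip=1.0, noise_multiplier=1.0):
    """One DP-SGD-style update: clip each example's gradient, then add Gaussian noise."""
    xs, ys = batch
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                       # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = min(1.0, clip / (norm + 1e-12))    # clip to bound per-example sensitivity
        for s, p in zip(summed, model.parameters()):
            s += p.grad * scale
    with torch.no_grad():
        for s, p in zip(summed, model.parameters()):
            noisy = s + torch.randn_like(s) * noise_multiplier * clip
            p -= lr * noisy / len(xs)              # average the noisy, clipped gradients

model = torch.nn.Linear(4, 2)
batch = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
dp_sgd_step(model, torch.nn.functional.cross_entropy, batch)
```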
2305.06474 | 2023-05-10T21:43:42Z | Do LLMs Understand User Preferences? Evaluating LLMs On User Rating
Prediction | [
"Wang-Cheng Kang",
"Jianmo Ni",
"Nikhil Mehta",
"Maheswaran Sathiamoorthy",
"Lichan Hong",
"Ed Chi",
"Derek Zhiyuan Cheng"
] | Large Language Models (LLMs) have demonstrated exceptional capabilities in
generalizing to new tasks in a zero-shot or few-shot manner. However, the
extent to which LLMs can comprehend user preferences based on their previous
behavior remains an emerging and still unclear research question.
Traditionally, Collaborative Filtering (CF) has been the most effective method
for these tasks, predominantly relying on the extensive volume of rating data.
In contrast, LLMs typically demand considerably less data while maintaining an
exhaustive world knowledge about each item, such as movies or products. In this
paper, we conduct a thorough examination of both CF and LLMs within the classic
task of user rating prediction, which involves predicting a user's rating for a
candidate item based on their past ratings. We investigate various LLMs of
different sizes, ranging from 250M to 540B parameters, and evaluate their
performance in zero-shot, few-shot, and fine-tuning scenarios. We conduct
comprehensive analysis to compare between LLMs and strong CF methods, and find
that zero-shot LLMs lag behind traditional recommender models that have
access to user interaction data, indicating the importance of user interaction
data. However, through fine-tuning, LLMs achieve comparable or even better
performance with only a small fraction of the training data, demonstrating
their potential through data efficiency. | [
"cs.IR",
"cs.LG"
] | true |
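A zero-shot rating-prediction prompt of the kind evaluated above can be assembled from a user's rating history; the template wording below is an assumption, not the paper's exact prompt.

```python
def rating_prompt(history: list[tuple[str, float]], candidate: str) -> str:
    """Build a zero-shot prompt asking an LLM to predict a 1-5 star rating."""
    lines = [f'- "{title}": {stars} stars' for title, stars in history]
    return (
        "Here are the movies a user has rated:\n"
        + "\n".join(lines)
        + f'\n\nBased on these ratings, predict the user\'s rating for "{candidate}" '
        "on a scale of 1 to 5. Answer with a single number."
    )

print(rating_prompt([("The Matrix", 5.0), ("Titanic", 2.0)], "Inception"))
```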
2305.05840 | 2023-05-10T02:09:19Z | Achieving Diversity in Counterfactual Explanations: a Review and
Discussion | [
"Thibault Laugel",
"Adulam Jeyasothy",
"Marie-Jeanne Lesot",
"Christophe Marsala",
"Marcin Detyniecki"
] | In the field of Explainable Artificial Intelligence (XAI), counterfactual
examples explain to a user the predictions of a trained decision model by
indicating the modifications to be made to the instance so as to change its
associated prediction. These counterfactual examples are generally defined as
solutions to an optimization problem whose cost function combines several
criteria that quantify desiderata for a good explanation meeting user needs. A
large variety of such appropriate properties can be considered, as the user
needs are generally unknown and differ from one user to another; their
selection and formalization is difficult. To circumvent this issue, several
approaches propose to generate, rather than a single one, a set of diverse
counterfactual examples to explain a prediction. This paper proposes a review
of the numerous, sometimes conflicting, definitions that have been proposed for
this notion of diversity. It discusses their underlying principles as well as
the hypotheses on the user needs they rely on and proposes to categorize them
along several dimensions (explicit vs implicit, universe in which they are
defined, level at which they apply), leading to the identification of further
research challenges on this topic. | [
"cs.AI",
"cs.LG",
"stat.ME"
] | false |
2305.05843 | 2023-05-10T02:24:50Z | MoCA: Memory-Centric, Adaptive Execution for Multi-Tenant Deep Neural
Networks | [
"Seah Kim",
"Hasan Genc",
"Vadim Vadimovich Nikiforov",
"Krste Asanović",
"Borivoje Nikolić",
"Yakun Sophia Shao"
] | Driven by the wide adoption of deep neural networks (DNNs) across different
application domains, multi-tenancy execution, where multiple DNNs are deployed
simultaneously on the same hardware, has been proposed to satisfy the latency
requirements of different applications while improving the overall system
utilization. However, multi-tenancy execution could lead to undesired
system-level resource contention, causing quality-of-service (QoS) degradation
for latency-critical applications. To address this challenge, we propose MoCA,
an adaptive multi-tenancy system for DNN accelerators. Unlike existing
solutions that focus on compute resource partition, MoCA dynamically manages
shared memory resources of co-located applications to meet their QoS targets.
Specifically, MoCA leverages the regularities in both DNN operators and
accelerators to dynamically modulate memory access rates based on their latency
targets and user-defined priorities so that co-located applications get the
resources they demand without significantly starving their co-runners. We
demonstrate that MoCA improves the satisfaction rate of the service level
agreement (SLA) up to 3.9x (1.8x average), system throughput by 2.3x (1.7x
average), and fairness by 1.3x (1.2x average), compared to prior work. | [
"cs.DC",
"cs.AR",
"cs.LG"
] | false |
2305.05909 | 2023-05-10T05:29:47Z | Robust multi-agent coordination via evolutionary generation of auxiliary
adversarial attackers | [
"Lei Yuan",
"Zi-Qian Zhang",
"Ke Xue",
"Hao Yin",
"Feng Chen",
"Cong Guan",
"Li-He Li",
"Chao Qian",
"Yang Yu"
] | Cooperative multi-agent reinforcement learning (CMARL) has shown to be
promising for many real-world applications. Previous works mainly focus on
improving coordination ability via solving MARL-specific challenges (e.g.,
non-stationarity, credit assignment, scalability), but ignore the policy
perturbation issue when testing in a different environment. This issue has not
been considered in problem formulation or in efficient algorithm design. To
address this issue, we first model the problem as a limited policy adversary
Dec-POMDP (LPA-Dec-POMDP), where some coordinators from a team might
accidentally and unpredictably encounter a limited number of malicious action
attacks, but the regular coordinators still strive for the intended goal. Then,
we propose Robust Multi-Agent Coordination via Evolutionary Generation of
Auxiliary Adversarial Attackers (ROMANCE), which enables the trained policy to
encounter diversified and strong auxiliary adversarial attacks during training,
thus achieving high robustness under various policy perturbations. Concretely,
to avoid the ego-system overfitting to a specific attacker, we maintain a set
of attackers, which is optimized to guarantee the attackers high attacking
quality and behavior diversity. The goal of quality is to minimize the
ego-system coordination effect, and a novel diversity regularizer based on
sparse action is applied to diversify the behaviors among attackers. The
ego-system is then paired with a population of attackers selected from the
maintained attacker set, and alternately trained against the constantly
evolving attackers. Extensive experiments on multiple scenarios from SMAC
indicate our ROMANCE provides comparable or better robustness and
generalization ability than other baselines. | [
"cs.MA",
"cs.LG",
"cs.NE"
] | false |
2305.05933 | 2023-05-10T07:05:43Z | Spectrum Breathing: Protecting Over-the-Air Federated Learning Against
Interference | [
"Zhanwei Wang",
"Kaibin Huang",
"Yonina C. Eldar"
] | Federated Learning (FL) is a widely embraced paradigm for distilling
artificial intelligence from distributed mobile data. However, the deployment
of FL in mobile networks can be compromised by exposure to interference from
neighboring cells or jammers. Existing interference mitigation techniques
require multi-cell cooperation or at least interference channel state
information, which is expensive in practice. On the other hand, power control
that treats interference as noise may not be effective due to limited power
budgets, and may itself trigger countermeasures by interference sources. As a
practical approach for protecting FL against
interference, we propose Spectrum Breathing, which cascades stochastic-gradient
pruning and spread spectrum to suppress interference without bandwidth
expansion. The cost is higher learning latency by exploiting the graceful
degradation of learning speed due to pruning. We synchronize the two operations
such that their levels are controlled by the same parameter, Breathing Depth.
To optimally control the parameter, we develop a martingale-based approach to
convergence analysis of Over-the-Air FL with spectrum breathing, termed
AirBreathing FL. We show a performance tradeoff between gradient-pruning and
interference-induced error as regulated by the breathing depth. Given receive
SIR and model size, the optimization of the tradeoff yields two schemes for
controlling the breathing depth that can be either fixed or adaptive to
channels and the learning process. As shown by experiments, in scenarios where
traditional Over-the-Air FL fails to converge in the presence of strong
interference, AirBreathing FL with either fixed or adaptive breathing depth can
ensure convergence where the adaptive scheme achieves close-to-ideal
performance. | [
"cs.LG",
"cs.CR",
"cs.IT",
"math.IT"
] | false |
2305.05986 | 2023-05-10T08:52:07Z | Structural Hawkes Processes for Learning Causal Structure from
Discrete-Time Event Sequences | [
"Jie Qiao",
"Ruichu Cai",
"Siyu Wu",
"Yu Xiang",
"Keli Zhang",
"Zhifeng Hao"
] | Learning causal structure among event types from discrete-time event
sequences is a particularly important but challenging task. Existing methods,
such as the multivariate Hawkes processes based methods, mostly boil down to
learning the so-called Granger causality which assumes that the cause event
happens strictly prior to its effect event. Such an assumption is often
untenable in real applications, especially when dealing with low-resolution
discrete-time event sequences; and typical discrete Hawkes processes mainly
suffer from identifiability issues raised by the instantaneous effect, i.e.,
the causal relationship that occurred simultaneously due to the low-resolution
data will not be captured by Granger causality. In this work, we propose
Structural Hawkes Processes (SHPs) that leverage the instantaneous effect for
learning the causal structure among event types in discrete-time event
sequences. The proposed method features the minorization-maximization of
the likelihood function and a sparse optimization scheme. Theoretical results
show that the instantaneous effect is a blessing rather than a curse, and the
causal structure is identifiable under the existence of the instantaneous
effect. Experiments on synthetic and real-world data verify the effectiveness
of the proposed method. | [
"cs.LG",
"cs.AI",
"stat.ME"
] | false |
2305.06000 | 2023-05-10T09:20:11Z | Global Convergence of Deep Galerkin and PINNs Methods for Solving
Partial Differential Equations | [
"Deqing Jiang",
"Justin Sirignano",
"Samuel N. Cohen"
] | Numerically solving high-dimensional partial differential equations (PDEs) is
a major challenge. Conventional methods, such as finite difference methods, are
unable to solve high-dimensional PDEs due to the curse-of-dimensionality. A
variety of deep learning methods have been recently developed to try and solve
high-dimensional PDEs by approximating the solution using a neural network. In
this paper, we prove global convergence for one of the commonly-used deep
learning algorithms for solving PDEs, the Deep Galerkin Method (DGM). DGM
trains a neural network approximator to solve the PDE using stochastic gradient
descent. We prove that, as the number of hidden units in the single-layer
network goes to infinity (i.e., in the "wide network limit"), the trained
neural network converges to the solution of an infinite-dimensional linear
ordinary differential equation (ODE). The PDE residual of the limiting
approximator converges to zero as the training time $\rightarrow \infty$. Under
mild assumptions, this convergence also implies that the neural network
approximator converges to the solution of the PDE. A closely related class of
deep learning methods for PDEs is Physics Informed Neural Networks (PINNs).
Using the same mathematical techniques, we can prove a similar global
convergence result for the PINN neural network approximators. Both proofs
require analyzing a kernel function in the limit ODE governing the evolution of
the limit neural network approximator. A key technical challenge is that the
kernel function, which is a composition of the PDE operator and the neural
tangent kernel (NTK) operator, lacks a spectral gap, therefore requiring a
careful analysis of its properties. | [
"math.NA",
"cs.LG",
"cs.NA"
] | false |
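To ground the discussion of DGM/PINN training above, here is a minimal physics-informed loss for the toy equation u'(x) = u(x) with u(0) = 1, minimized by stochastic gradient steps over a small network. It illustrates the residual-plus-boundary objective the paper analyzes, not the paper's proofs; the network size, sampling scheme, and optimizer are arbitrary choices.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)                 # collocation points in [0, 1]
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = ((du - u) ** 2).mean()                         # enforce u'(x) = u(x)
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # enforce u(0) = 1
    loss = residual + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])))                             # should approach e ~ 2.718
```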
2305.06178 | 2023-05-10T14:03:36Z | Sequence-Agnostic Multi-Object Navigation | [
"Nandiraju Gireesh",
"Ayush Agrawal",
"Ahana Datta",
"Snehasis Banerjee",
"Mohan Sridharan",
"Brojeshwar Bhowmick",
"Madhava Krishna"
] | The Multi-Object Navigation (MultiON) task requires a robot to localize an
instance of each of multiple object classes. It is a fundamental task for an
assistive robot in a home or a factory. Existing methods for MultiON have
viewed this as a direct extension of Object Navigation (ON), the task of
localizing an instance of one object class, and are pre-sequenced, i.e., the
sequence in which the object classes are to be explored is provided in advance.
This is a strong limitation in practical applications characterized by dynamic
changes. This paper describes a deep reinforcement learning framework for
sequence-agnostic MultiON based on an actor-critic architecture and a suitable
reward specification. Our framework leverages past experiences and seeks to
reward progress toward individual as well as multiple target object classes. We
use photo-realistic scenes from the Gibson benchmark dataset in the AI Habitat
3D simulation environment to experimentally show that our method performs
better than a pre-sequenced approach and a state-of-the-art ON method extended
to MultiON. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2305.06230 | 2023-05-10T15:06:53Z | Penalized deep neural networks estimator with general loss functions
under weak dependence | [
"William Kengne",
"Modou Wade"
] | This paper carries out sparse-penalized deep neural networks predictors for
learning weakly dependent processes, with a broad class of loss functions. We
deal with a general framework that includes regression estimation,
classification, time series prediction, etc. The $\psi$-weak dependence
structure is considered, and for the specific case of bounded observations,
$\theta_\infty$-coefficients are also used. In the case of
$\theta_\infty$-weak dependence, a non-asymptotic generalization bound within
the class of deep neural networks predictors is provided. For learning both
$\psi$ and $\theta_\infty$-weakly dependent processes, oracle inequalities for
the excess risk of the sparse-penalized deep neural networks estimators are
established. When the target function is sufficiently smooth, the convergence
rate of these excess risks is close to $\mathcal{O}(n^{-1/3})$. Some simulation
results are provided, and application to the forecast of the particulate matter
in the Vit\'{o}ria metropolitan area is also considered. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
2305.06315 | 2023-05-10T17:05:55Z | NervePool: A Simplicial Pooling Layer | [
"Sarah McGuire",
"Elizabeth Munch",
"Matthew Hirn"
] | For deep learning problems on graph-structured data, pooling layers are
important for down sampling, reducing computational cost, and to minimize
overfitting. We define a pooling layer, NervePool, for data structured as
simplicial complexes, which are generalizations of graphs that include
higher-dimensional simplices beyond vertices and edges; this structure allows
for greater flexibility in modeling higher-order relationships. The proposed
simplicial coarsening scheme is built upon partitions of vertices, which allow
us to generate hierarchical representations of simplicial complexes, collapsing
information in a learned fashion. NervePool builds on the learned vertex
cluster assignments and extends to coarsening of higher dimensional simplices
in a deterministic fashion. While in practice, the pooling operations are
computed via a series of matrix operations, the topological motivation is a
set-theoretic construction based on unions of stars of simplices and the nerve
complex. | [
"cs.CG",
"cs.LG",
"cs.NE",
"62R40, 05E45, 68T07, 68R10"
] | false |
2305.06447 | 2023-05-10T20:34:40Z | Dynamic Graph Representation Learning for Depression Screening with
Transformer | [
"Ai-Te Kuo",
"Haiquan Chen",
"Yu-Hsuan Kuo",
"Wei-Shinn Ku"
] | Early detection of mental disorders is crucial, as it enables prompt
intervention and treatment, which can greatly improve outcomes for individuals
suffering from debilitating mental affliction. The recent proliferation of
mental health discussions on social media platforms presents research
opportunities to investigate mental health and potentially detect instances of
mental illness. However, existing depression detection methods are constrained
due to two major limitations: (1) the reliance on feature engineering and (2)
the lack of consideration for time-varying factors. Specifically, these methods
require extensive feature engineering and domain knowledge, which heavily rely
on the amount, quality, and type of user-generated content. Moreover, these
methods ignore the important impact of time-varying factors on depression
detection, such as the dynamics of linguistic patterns and interpersonal
interactive behaviors over time on social media (e.g., replies, mentions, and
quote-tweets). To tackle these limitations, we propose an early depression
detection framework, ContrastEgo, which treats each user as a dynamic
time-evolving attributed graph (ego-network) and leverages supervised
contrastive learning to
maximize the agreement of users' representations at different scales while
minimizing the agreement of users' representations to differentiate between
depressed and control groups. ContrastEgo comprises four modules: (1)
constructing users' heterogeneous interactive graphs, (2) extracting the
representations of users' interaction snapshots using graph neural networks,
(3) modeling the sequences of snapshots using attention mechanism, and (4)
depression detection using contrastive learning. Extensive experiments on
Twitter data demonstrate that ContrastEgo significantly outperforms the
state-of-the-art methods in terms of all the effectiveness metrics in various
experimental settings. | [
"cs.LG",
"cs.IR",
"cs.SI"
] | false |
2305.06936 | 2023-05-10T15:00:05Z | An Option-Dependent Analysis of Regret Minimization Algorithms in
Finite-Horizon Semi-Markov Decision Processes | [
"Gianluca Drappo",
"Alberto Maria Metelli",
"Marcello Restelli"
] | A large variety of real-world Reinforcement Learning (RL) tasks is
characterized by a complex and heterogeneous structure that makes end-to-end
(or flat) approaches hardly applicable or even infeasible. Hierarchical
Reinforcement Learning (HRL) provides general solutions to address these
problems thanks to a convenient multi-level decomposition of the tasks, making
their solution accessible. Although often used in practice, few works provide
theoretical guarantees to justify this outcome effectively. Thus, it is not yet
clear when to prefer such approaches compared to standard flat ones. In this
work, we provide an option-dependent upper bound to the regret suffered by
regret minimization algorithms in finite-horizon problems. We illustrate that
the performance improvement derives from the planning horizon reduction induced
by the temporal abstraction enforced by the hierarchical structure. Then,
focusing on a sub-setting of HRL approaches, the options framework, we
highlight how the average duration of the available options affects the
planning horizon and, consequently, the regret itself. Finally, we relax the
assumption of having pre-trained options to show how in particular situations,
learning hierarchically from scratch could be preferable to using a standard
approach. | [
"cs.LG",
"cs.IT",
"math.IT"
] | false |
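The horizon-reduction argument admits a compact illustration. The display below is a generic sketch, not the paper's precise option-dependent bound; $\bar{d}$ denotes the average option duration and $K$ the number of episodes:

```latex
% Illustrative only: with options of average duration $\bar{d}$, an
% episode of horizon $H$ contains roughly
\[
  H_{\mathrm{eff}} \approx H / \bar{d}
\]
% high-level decisions, so a regret bound of the generic form
\[
  \mathrm{Regret}(K) = \widetilde{O}\!\left(\mathrm{poly}(H)\,\sqrt{K}\right)
  \quad\text{tightens to}\quad
  \widetilde{O}\!\left(\mathrm{poly}(H/\bar{d})\,\sqrt{K}\right)
\]
% at the option level, at the price of the options' own suboptimality.
```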
2305.10350 | 2023-05-10T14:48:03Z | Multiverse at the Edge: Interacting Real World and Digital Twins for
Wireless Beamforming | [
"Batool Salehi",
"Utku Demir",
"Debashri Roy",
"Suyash Pradhan",
"Jennifer Dy",
"Stratis Ioannidis",
"Kaushik Chowdhury"
] | Creating a digital world that closely mimics the real world with its many
complex interactions and outcomes is possible today through advanced emulation
software and ubiquitous computing power. Such a software-based emulation of an
entity that exists in the real world is called a 'digital twin'. In this paper,
we consider a twin of a wireless millimeter-wave band radio that is mounted on
a vehicle and show how it speeds up directional beam selection in mobile
environments. To achieve this, we go beyond instantiating a single twin and
propose the 'Multiverse' paradigm, with several possible digital twins
attempting to capture the real world at different levels of fidelity. Towards
this goal, this paper describes (i) a decision strategy at the vehicle that
determines which twin must be used given the computational and latency
limitations, and (ii) a self-learning scheme that uses the Multiverse-guided
beam outcomes to enhance DL-based decision-making in the real world over time.
Our work is distinguished from prior works as follows: First, we use a publicly
available RF dataset collected from an autonomous car for creating different
twins. Second, we present a framework with continuous interaction between the
real world and Multiverse of twins at the edge, as opposed to a one-time
emulation that is completed prior to actual deployment. Results reveal that
Multiverse offers up to 79.43% and 85.22% top-10 beam selection accuracy for
LOS and NLOS scenarios, respectively. Moreover, we observe 52.72-85.07%
improvement in beam selection time compared to 802.11ad standard. | [
"eess.SP",
"cs.LG",
"cs.NI"
] | false |
2305.10351 | 2023-05-10T19:26:58Z | BIOT: Cross-data Biosignal Learning in the Wild | [
"Chaoqi Yang",
"M. Brandon Westover",
"Jimeng Sun"
] | Biological signals, such as electroencephalograms (EEG), play a crucial role
in numerous clinical applications, exhibiting diverse data formats and quality
profiles. Current deep learning models for biosignals are typically specialized
for specific datasets and clinical settings, limiting their broader
applicability. Motivated by the success of large language models in text
processing, we explore the development of foundational models that are trained
from multiple data sources and can be fine-tuned on different downstream
biosignal tasks.
To overcome the unique challenges associated with biosignals of various
formats, such as mismatched channels, variable sample lengths, and prevalent
missing values, we propose a Biosignal Transformer (BIOT). The proposed
BIOT model can enable cross-data learning with mismatched channels, variable
lengths, and missing values by tokenizing diverse biosignals into unified
"biosignal sentences". Specifically, we tokenize each channel into fixed-length
segments containing local signal features, flattening them to form consistent
"sentences". Channel embeddings and {\em relative} position embeddings are
added to preserve spatio-temporal features.
The BIOT model is versatile and applicable to various biosignal learning
settings across different datasets, including joint pre-training for larger
models. Comprehensive evaluations on EEG, electrocardiogram (ECG), and human
activity sensory signals demonstrate that BIOT outperforms robust baselines
in common settings and facilitates learning across multiple datasets with
different formats. Using the CHB-MIT seizure detection task as an example, our
vanilla BIOT model shows a 3\% improvement over baselines in balanced
accuracy, and the pre-trained BIOT models (optimized from other data
sources) can further bring up to 4\% improvements. | [
"eess.SP",
"cs.AI",
"cs.LG"
] | false |
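The "biosignal sentence" tokenization lends itself to a compact sketch. The code below is an illustrative NumPy version assuming a fixed segment length; the actual BIOT model additionally applies learned channel and relative position embeddings to each segment.

```python
import numpy as np

def tokenize_biosignal(channels, seg_len=256):
    """Turn a multi-channel recording into a 'biosignal sentence' (sketch).

    channels: list of 1-D arrays, possibly with different lengths and NaNs
    (mismatched channels, variable durations, missing values). Returns
    (channel_id, position, segment) tokens; the real model then adds
    learned channel and relative position embeddings.
    """
    tokens = []
    for ch_id, signal in enumerate(channels):
        for pos in range(len(signal) // seg_len):
            seg = signal[pos * seg_len:(pos + 1) * seg_len]
            if np.isnan(seg).any():      # drop segments with missing values
                continue
            tokens.append((ch_id, pos, seg))
    return tokens

# Two EEG channels with mismatched lengths
sentence = tokenize_biosignal([np.random.randn(1024), np.random.randn(768)])
print(len(sentence))  # 4 segments from channel 0 + 3 from channel 1 = 7
```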
2305.16160 | 2023-05-10T14:00:50Z | Augmented Memory: Capitalizing on Experience Replay to Accelerate De
Novo Molecular Design | [
"Jeff Guo",
"Philippe Schwaller"
] | Sample efficiency is a fundamental challenge in de novo molecular design.
Ideally, molecular generative models should learn to satisfy a desired
objective under minimal oracle evaluations (computational prediction or wet-lab
experiment). This problem becomes more apparent when using oracles that can
provide increased predictive accuracy but impose a significant cost.
Consequently, these oracles cannot be directly optimized under a practical
budget. Molecular generative models have shown remarkable sample efficiency
when coupled with reinforcement learning, as demonstrated in the Practical
Molecular Optimization (PMO) benchmark. Here, we propose a novel algorithm
called Augmented Memory that combines data augmentation with experience replay.
We show that scores obtained from oracle calls can be reused to update the
model multiple times. We compare Augmented Memory to previously proposed
algorithms and show significantly enhanced sample efficiency in an exploitation
task and a drug discovery case study requiring both exploration and
exploitation. Our method achieves a new state-of-the-art in the PMO benchmark
which enforces a computational budget, outperforming the previous best
performing method on 19/23 tasks. | [
"q-bio.BM",
"cs.LG",
"q-bio.QM"
] | false |
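The score-reuse mechanism can be illustrated with a small replay buffer. This is a hedged structural sketch: `augment` is a stand-in for SMILES randomization (e.g., via RDKit), and the surrounding RL update loop that Augmented Memory couples this to is omitted.

```python
import heapq
import random

class AugmentedMemory:
    """Replay buffer that reuses oracle scores (structural sketch).

    Top-scoring molecules are kept; each expensive oracle call is then
    reused for several augmented model updates.
    """

    def __init__(self, capacity=100, replays=2):
        self.buffer, self.capacity, self.replays = [], capacity, replays

    def add(self, smiles, score):
        heapq.heappush(self.buffer, (score, smiles))
        if len(self.buffer) > self.capacity:
            heapq.heappop(self.buffer)       # evict the lowest-scoring entry

    def replay_batch(self, k, augment):
        batch = random.sample(self.buffer, min(k, len(self.buffer)))
        # Each stored score is reused `replays` times on augmented strings
        return [(augment(s), sc) for sc, s in batch for _ in range(self.replays)]

memory = AugmentedMemory()
memory.add("c1ccccc1O", 0.72)                # score from one oracle call
print(memory.replay_batch(1, augment=lambda s: s))  # identity augmentation here
```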
2305.06082 | 2023-05-10T12:07:48Z | Best Arm Identification in Bandits with Limited Precision Sampling | [
"Kota Srinivas Reddy",
"P. N. Karthik",
"Nikhil Karamchandani",
"Jayakrishnan Nair"
] | We study best arm identification in a variant of the multi-armed bandit
problem where the learner has limited precision in arm selection. The learner
can only sample arms via certain exploration bundles, which we refer to as
boxes. In particular, at each sampling epoch, the learner selects a box, which
in turn causes an arm to get pulled as per a box-specific probability
distribution. The pulled arm and its instantaneous reward are revealed to the
learner, whose goal is to find the best arm by minimising the expected stopping
time, subject to an upper bound on the error probability. We present an
asymptotic lower bound on the expected stopping time, which holds as the error
probability vanishes. We show that the optimal allocation suggested by the
lower bound is, in general, non-unique and therefore challenging to track. We
propose a modified tracking-based algorithm to handle non-unique optimal
allocations, and demonstrate that it is asymptotically optimal. We also present
non-asymptotic lower and upper bounds on the stopping time in the simpler
setting when the arms accessible from one box do not overlap with those of
others. | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
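The limited-precision sampling model is easy to simulate. Below is a toy NumPy illustration of the box mechanism with made-up arm means and box distributions; it shows the environment the learner faces, not the proposed tracking-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.2, 0.5, 0.8])          # unknown Bernoulli arm means (toy)
boxes = np.array([[0.7, 0.3, 0.0],      # box 0 mostly reaches arm 0
                  [0.0, 0.3, 0.7]])     # box 1 mostly reaches arm 2

def pull(box):
    """Selecting a box pulls an arm per the box-specific distribution;
    the pulled arm and its reward are both revealed to the learner."""
    arm = rng.choice(len(mu), p=boxes[box])
    return arm, rng.binomial(1, mu[arm])

counts, sums = np.zeros(3), np.zeros(3)
for t in range(10000):
    arm, r = pull(t % 2)                # naive alternating box allocation
    counts[arm] += 1
    sums[arm] += r
print(sums / np.maximum(counts, 1))     # plug-in estimates of the arm means
```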
2305.06540 | 2023-05-11T03:08:48Z | Inter-frame Accelerate Attack against Video Interpolation Models | [
"Junpei Liao",
"Zhikai Chen",
"Liang Yi",
"Wenyuan Yang",
"Baoyuan Wu",
"Xiaochun Cao"
] | Deep learning based video frame interpolation (VIF) methods, which aim to
synthesize intermediate frames to enhance video quality, have been highly
developed in the past few years. This paper investigates the adversarial
robustness of VIF models. We apply adversarial attacks to VIF models and find
that the VIF models are very vulnerable to adversarial examples. To improve
attack efficiency, we suggest making full use of the properties of the video
frame interpolation task. The intuition is that the gap between adjacent frames
would
be small, leading to the corresponding adversarial perturbations being similar
as well. Then we propose a novel attack method named Inter-frame Accelerate
Attack (IAA) that initializes the perturbation as the perturbation for the
previous adjacent frame and reduces the number of attack iterations. It is
shown that our method can improve attack efficiency greatly while achieving
comparable attack performance with traditional methods. Besides, we also extend
our method to video recognition models, which address higher-level vision
tasks, and achieve great attack efficiency. | [
"cs.CV"
] | false |
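The warm-start idea behind IAA reduces, in its simplest form, to carrying the perturbation across frames. The sketch below uses a placeholder attack objective (`model(...).norm()`) and a toy stand-in model; the paper's actual loss targets the interpolation output of the VIF model.

```python
import torch
import torch.nn as nn

def iaa_perturb(model, frames, eps=8/255, alpha=2/255, steps=5):
    """Inter-frame warm-started PGD (structural sketch of IAA).

    The perturbation found for each frame initializes the next frame's
    attack, exploiting the similarity of adjacent frames to cut the
    number of iterations needed.
    """
    delta = torch.zeros_like(frames[0])
    adv = []
    for x in frames:
        for _ in range(steps):
            delta = delta.detach().requires_grad_(True)
            loss = model(x + delta).norm()          # placeholder objective
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        adv.append((x + delta.detach()).clamp(0, 1))
    return adv

# Toy usage with an arbitrary differentiable stand-in for a VIF model
model = nn.Conv2d(3, 3, 3, padding=1)
frames = [torch.rand(1, 3, 32, 32) for _ in range(4)]
adv = iaa_perturb(model, frames)
```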
2305.06553 | 2023-05-11T04:05:30Z | WeLayout: WeChat Layout Analysis System for the ICDAR 2023 Competition
on Robust Layout Segmentation in Corporate Documents | [
"Mingliang Zhang",
"Zhen Cao",
"Juntao Liu",
"Liqiang Niu",
"Fandong Meng",
"Jie Zhou"
] | In this paper, we introduce WeLayout, a novel system for segmenting the
layout of corporate documents, which stands for WeChat Layout Analysis System.
Our approach utilizes a sophisticated ensemble of DINO and YOLO models,
specifically developed for the ICDAR 2023 Competition on Robust Layout
Segmentation. Our method significantly surpasses the baseline, securing a top
position on the leaderboard with a mAP of 70.0. To achieve this performance, we
concentrated on enhancing various aspects of the task, such as dataset
augmentation, model architecture, bounding box refinement, and model ensemble
techniques. Additionally, we trained the data separately for each document
category to ensure a higher mean submission score. We also developed an
algorithm for cell matching to further improve our performance. To identify the
optimal weights and IoU thresholds for our model ensemble, we employed a
Bayesian optimization algorithm called the Tree-Structured Parzen Estimator.
Our approach effectively demonstrates the benefits of combining query-based and
anchor-free models for achieving robust layout segmentation in corporate
documents. | [
"cs.CV"
] | false |
2305.06558 | 2023-05-11T04:33:08Z | Segment and Track Anything | [
"Yangming Cheng",
"Liulei Li",
"Yuanyou Xu",
"Xiaodi Li",
"Zongxin Yang",
"Wenguan Wang",
"Yi Yang"
] | This report presents a framework called Segment And Track Anything (SAM-Track)
that allows users to precisely and effectively segment and track any object in
a video. Additionally, SAM-Track employs multimodal interaction methods that
enable users to select multiple objects in videos for tracking, corresponding
to their specific requirements. These interaction methods comprise click,
stroke, and text, each possessing unique benefits and capable of being employed
in combination. As a result, SAM-Track can be used across an array of fields,
ranging from drone technology, autonomous driving, medical imaging, augmented
reality, to biological analysis. SAM-Track amalgamates Segment Anything Model
(SAM), an interactive key-frame segmentation model, with our proposed AOT-based
tracking model (DeAOT), which secured 1st place in four tracks of the VOT 2022
challenge, to facilitate object tracking in video. In addition, SAM-Track
incorporates Grounding-DINO, which enables the framework to support text-based
interaction. We have demonstrated the remarkable capabilities of SAM-Track on
DAVIS-2016 Val (92.0%), DAVIS-2017 Test (79.2%) and its practicability in
diverse applications. The project page is available at:
https://github.com/z-x-yang/Segment-and-Track-Anything. | [
"cs.CV"
] | false |
2305.06559 | 2023-05-11T04:34:10Z | Patch-wise Mixed-Precision Quantization of Vision Transformer | [
"Junrui Xiao",
"Zhikai Li",
"Lianwei Yang",
"Qingyi Gu"
] | As emerging hardware begins to support mixed bit-width arithmetic
computation, mixed-precision quantization is widely used to reduce the
complexity of neural networks. However, Vision Transformers (ViTs) require
complex self-attention computation to guarantee the learning of powerful
feature representations, which makes mixed-precision quantization of ViTs still
challenging. In this paper, we propose a novel patch-wise mixed-precision
quantization (PMQ) for efficient inference of ViTs. Specifically, we design a
lightweight global metric, which is faster than existing methods, to measure
the sensitivity of each component in ViTs to quantization errors. Moreover, we
also introduce a pareto frontier approach to automatically allocate the optimal
bit-precision according to the sensitivity. To further reduce the computational
complexity of self-attention in inference stage, we propose a patch-wise module
to reallocate the bit-width of patches in each layer. Extensive experiments on
the ImageNet dataset show that our method greatly reduces the search cost and
facilitates the application of mixed-precision quantization to ViTs. | [
"cs.CV"
] | false |
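Mapping sensitivities to bit-widths can be illustrated with a greedy pass. The paper searches a Pareto frontier; the sketch below is a simpler budget-constrained greedy allocation that only conveys the sensitivity-to-precision idea (the error model `s * 2^-bits` is an assumption).

```python
def allocate_bits(sensitivities, budget, bit_choices=(2, 4, 8)):
    """Greedy sensitivity-aware bit allocation (illustrative sketch).

    Start every layer at the lowest precision, then repeatedly upgrade
    the layer with the largest sensitivity-weighted quantization error
    until the average bit budget is reached (it may land slightly above).
    """
    n = len(sensitivities)
    idx = [0] * n                             # per-layer index into bit_choices

    def err(i):
        return sensitivities[i] * 2.0 ** (-bit_choices[idx[i]])

    while sum(bit_choices[i] for i in idx) / n < budget:
        cand = [j for j in range(n) if idx[j] < len(bit_choices) - 1]
        if not cand:
            break
        idx[max(cand, key=err)] += 1
    return [bit_choices[i] for i in idx]

print(allocate_bits([0.9, 0.1, 0.5], budget=4))  # -> [8, 2, 4]
```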
2305.06582 | 2023-05-11T05:41:23Z | Exploiting Fine-Grained DCT Representations for Hiding Image-Level
Messages within JPEG Images | [
"Junxue Yang",
"Xin Liao"
] | Unlike hiding bit-level messages, hiding image-level messages is more
challenging, which requires large capacity, high imperceptibility, and high
security. Although recent advances in hiding image-level messages have been
remarkable, existing schemes are limited to lossless spatial images as covers
and cannot be directly applied to JPEG images, the ubiquitous lossy format
images in daily life. The difficulties of migration are caused by the lack of
targeted design and the loss of details due to lossy decompression and
re-compression. Considering that taking DCT densely on $8\times8$ image patches
is the core of the JPEG compression standard, we design a novel model called
\textsf{EFDR}, which can comprehensively \underline{E}xploit
\underline{F}ine-grained \underline{D}CT \underline{R}epresentations and embed
the secret image into quantized DCT coefficients to avoid the lossy process.
Specifically, we transform the JPEG cover image and hidden secret image into
fine-grained DCT representations that compact the frequency and are associated
with the inter-block and intra-block correlations. Subsequently, the
fine-grained DCT representations are further enhanced by a sub-band features
enhancement module. Afterward, a transformer-based invertibility module is
designed to fuse enhanced sub-band features. Such a design enables a
fine-grained self-attention on each sub-band and captures long-range
dependencies while maintaining excellent reversibility for hiding and recovery.
To the best of our knowledge, this is the first attempt to embed a color image of
equal size in a color JPEG image. Extensive experiments demonstrate the
effectiveness of our \textsf{EFDR} with superior performance. | [
"cs.CV"
] | false |
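The dense 8x8 DCT that the abstract identifies as the core of JPEG is straightforward to reproduce. A minimal SciPy sketch follows; a real JPEG pipeline additionally quantizes these coefficients, and it is into the quantized coefficients that EFDR embeds to sidestep the lossy round trip.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(img, block=8):
    """Dense 8x8 DCT over non-overlapping patches (sketch).

    img: 2-D array with sides divisible by `block`. Each patch is
    transformed independently, exactly as in the JPEG standard.
    """
    h, w = img.shape
    coeffs = np.empty((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs[i:i + block, j:j + block] = dctn(
                img[i:i + block, j:j + block], norm="ortho")
    return coeffs

coeffs = blockwise_dct(np.random.rand(64, 64))
print(coeffs.shape)
```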
2305.06611 | 2023-05-11T07:14:23Z | Hyperbolic Deep Learning in Computer Vision: A Survey | [
"Pascal Mettes",
"Mina Ghadimi Atigh",
"Martin Keller-Ressel",
"Jeffrey Gu",
"Serena Yeung"
] | Deep representation learning is a ubiquitous part of modern computer vision.
While Euclidean space has been the de facto standard manifold for learning
visual representations, hyperbolic space has recently gained rapid traction for
learning in computer vision. Specifically, hyperbolic learning has shown a
strong potential to embed hierarchical structures, learn from limited samples,
quantify uncertainty, add robustness, limit error severity, and more. In this
paper, we provide a categorization and in-depth overview of current literature
on hyperbolic learning for computer vision. We research both supervised and
unsupervised literature and identify three main research themes in each
direction. We outline how hyperbolic learning is performed in all themes and
discuss the main research problems that benefit from current advances in
hyperbolic learning for computer vision. Moreover, we provide a high-level
intuition behind hyperbolic geometry and outline open research questions to
further advance research in this direction. | [
"cs.CV"
] | false |
2305.06621 | 2023-05-11T07:37:15Z | PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer | [
"Honghui Yang",
"Wenxiao Wang",
"Minghao Chen",
"Binbin Lin",
"Tong He",
"Hua Chen",
"Xiaofei He",
"Wanli Ouyang"
] | Recent Transformer-based 3D object detectors learn point cloud features
either from point- or voxel-based representations. However, the former requires
time-consuming sampling while the latter introduces quantization errors. In
this paper, we present a novel Point-Voxel Transformer for single-stage 3D
detection (PVT-SSD) that takes advantage of these two representations.
Specifically, we first use voxel-based sparse convolutions for efficient
feature encoding. Then, we propose a Point-Voxel Transformer (PVT) module that
obtains long-range contexts in a cheap manner from voxels while attaining
accurate positions from points. The key to associating the two different
representations is our introduced input-dependent Query Initialization module,
which could efficiently generate reference points and content queries. Then,
PVT adaptively fuses long-range contextual and local geometric information
around reference points into content queries. Further, to quickly find the
neighboring points of reference points, we design the Virtual Range Image
module, which generalizes the native range image to multi-sensor and
multi-frame. The experiments on several autonomous driving benchmarks verify
the effectiveness and efficiency of the proposed method. Code will be available
at https://github.com/Nightmare-n/PVT-SSD. | [
"cs.CV"
] | false |
2305.06720 | 2023-05-11T10:55:34Z | Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and
Beyond | [
"Zhu Liu",
"Jinyuan Liu",
"Guanyao Wu",
"Long Ma",
"Xin Fan",
"Risheng Liu"
] | Recently, multi-modality scene perception tasks, e.g., image fusion and scene
understanding, have attracted widespread attention for intelligent vision
systems. However, early efforts typically boost a single task unilaterally
while neglecting others, seldom investigating the underlying connections
between tasks for joint improvement. To overcome these limitations, we
establish a hierarchical dual-task-driven deep model to bridge these tasks.
Concretely, we first construct an image fusion module to fuse complementary
characteristics and cascade dual task-related modules, including a
discriminator for visual effects and a semantic network for feature
measurement. We provide a bi-level perspective to formulate image fusion and
follow-up downstream tasks. To incorporate distinct task-related responses for
image fusion, we consider image fusion as a primary goal and dual modules as
learnable constraints. Furthermore, we develop an efficient first-order
approximation to compute corresponding gradients and present dynamic weighted
aggregation to balance the gradients for fusion learning. Extensive experiments
demonstrate the superiority of our method, which not only produces visually
pleasant fused results but also achieves significant improvements in detection
and segmentation over state-of-the-art approaches. | [
"cs.CV"
] | false |
2305.06799 | 2023-05-11T13:41:13Z | GCFAgg: Global and Cross-view Feature Aggregation for Multi-view
Clustering | [
"Weiqing Yan",
"Yuanyang Zhang",
"Chenlei Lv",
"Chang Tang",
"Guanghui Yue",
"Liang Liao",
"Weisi Lin"
] | Multi-view clustering can partition data samples into their categories by
learning a consensus representation in an unsupervised way and has received
increasing attention in recent years. However, most existing deep clustering
methods learn a consensus representation or view-specific representations from
multiple views via view-wise aggregation, ignoring the structural relationships
among all samples. In this paper, we propose a novel multi-view clustering
network to address these problems, called Global and Cross-view Feature
Aggregation for Multi-View Clustering (GCFAggMVC). Specifically, the consensus
data representation from multiple views is obtained via cross-sample and
cross-view feature aggregation, which fully explores the complementarity of
similar samples. Moreover, we align the consensus representation and the
view-specific representations with a structure-guided contrastive learning
module, which makes view-specific representations of samples with high
structural relationships similar. The proposed module is a flexible
multi-view data representation module, which can also be applied to the
incomplete multi-view data clustering task by plugging it into other
frameworks. Extensive experiments show that the proposed method achieves
excellent performance in both complete multi-view data clustering tasks and
incomplete multi-view data clustering tasks. | [
"cs.CV"
] | false |
2305.06820 | 2023-05-11T14:13:37Z | DeepSTEP -- Deep Learning-Based Spatio-Temporal End-To-End Perception
for Autonomous Vehicles | [
"Sebastian Huch",
"Florian Sauerbeck",
"Johannes Betz"
] | Autonomous vehicles demand high accuracy and robustness of perception
algorithms. To develop efficient and scalable perception algorithms, the
maximum information should be extracted from the available sensor data. In this
work, we present our concept for an end-to-end perception architecture, named
DeepSTEP. The deep learning-based architecture processes raw sensor data from
the camera, LiDAR, and RaDAR, and combines the extracted data in a deep fusion
network. The output of this deep fusion network is a shared feature space,
which is used by perception head networks to fulfill several perception tasks,
such as object detection or local mapping. DeepSTEP incorporates multiple ideas
to advance the state of the art: First, combining detection and localization
into a
single pipeline allows for efficient processing to reduce computational
overhead and further improves overall performance. Second, the architecture
leverages the temporal domain by using a self-attention mechanism that focuses
on the most important features. We believe that our concept of DeepSTEP will
advance the development of end-to-end perception systems. The network will be
deployed on our research vehicle, which will be used as a platform for data
collection, real-world testing, and validation. In conclusion, DeepSTEP
represents a significant advancement in the field of perception for autonomous
vehicles. The architecture's end-to-end design, time-aware attention mechanism,
and integration of multiple perception tasks make it a promising solution for
real-world deployment. This research is a work in progress and presents the
first concept of establishing a novel perception pipeline. | [
"cs.CV"
] | false |
2305.06845 | 2023-05-11T14:40:20Z | Detection and Classification of Pole-like Landmarks for Domain-invariant
3D Point Cloud Map Matching | [
"Sun Yifei",
"Li Dingrui",
"Ye Minying",
"Tanaka Kanji"
] | In 3D point cloud-based visual self-localization, pole landmarks have great
potential as landmarks for accurate and reliable localization due to their
long-term stability under seasonal and weather changes. In this study, we aim
to explore the use of recently developed deep learning models for pole
classification in the context of pole landmark-based self-localization.
Specifically, the proposed scheme consists of two main modules: pole map
matching and pole class matching. In the former module, a local pole map is
constructed and its configuration is compared against a precomputed global pole
map. An efficient RANSAC map matching is employed to achieve a good tradeoff
between computational efficiency and accuracy. In the latter pole class
matching module, the local and global poles paired by the RANSAC map-matching
are further compared by means of pole attribute class. To this end, a
predefined set of pseudo pole classes is learned via k-means clustering in a
self-supervised manner. Experiments using the publicly available NCLT dataset
showed that the pole-like landmark classification method improves the visual
self-localization system compared with the baseline method. | [
"cs.CV"
] | false |
2305.06923 | 2023-05-11T16:05:03Z | EAML: Ensemble Self-Attention-based Mutual Learning Network for Document
Image Classification | [
"Souhail Bakkali",
"Ziheng Ming",
"Mickael Coustaty",
"Marçal Rusiñol"
] | In the recent past, complex deep neural networks have received huge interest
in various document understanding tasks such as document image classification
and document retrieval. As many document types have a distinct visual style,
learning only visual features with deep CNNs to classify document images has
encountered the problem of low inter-class discrimination and high intra-class
structural variation among categories. In parallel, text-level
understanding jointly learned with the corresponding visual properties within a
given document image has considerably improved the classification performance
in terms of accuracy. In this paper, we design a self-attention-based fusion
module that serves as a block in our ensemble trainable network. It allows the
network to simultaneously learn discriminative features of the image and text
modalities
throughout the training stage. Besides, we encourage mutual learning by
transferring the positive knowledge between image and text modalities during
the training stage. This constraint is realized by adding a
truncated-Kullback-Leibler divergence loss Tr-KLD-Reg as a new regularization
term, to the conventional supervised setting. To the best of our knowledge,
this is the first work to leverage a mutual learning approach along with a
self-attention-based fusion module for document image classification.
The experimental results illustrate the effectiveness of our approach in terms
of accuracy in both single-modal and multi-modal settings. Thus, the proposed
ensemble self-attention-based mutual learning model outperforms the
state-of-the-art classification results based on the benchmark RVL-CDIP and
Tobacco-3482 datasets. | [
"cs.CV"
] | false |
2305.06968 | 2023-05-11T16:49:19Z | HuManiFlow: Ancestor-Conditioned Normalising Flows on SO(3) Manifolds
for Human Pose and Shape Distribution Estimation | [
"Akash Sengupta",
"Ignas Budvytis",
"Roberto Cipolla"
] | Monocular 3D human pose and shape estimation is an ill-posed problem since
multiple 3D solutions can explain a 2D image of a subject. Recent approaches
predict a probability distribution over plausible 3D pose and shape parameters
conditioned on the image. We show that these approaches exhibit a trade-off
between three key properties: (i) accuracy - the likelihood of the ground-truth
3D solution under the predicted distribution, (ii) sample-input consistency -
the extent to which 3D samples from the predicted distribution match the
visible 2D image evidence, and (iii) sample diversity - the range of plausible
3D solutions modelled by the predicted distribution. Our method, HuManiFlow,
predicts simultaneously accurate, consistent and diverse distributions. We use
the human kinematic tree to factorise full body pose into ancestor-conditioned
per-body-part pose distributions in an autoregressive manner. Per-body-part
distributions are implemented using normalising flows that respect the manifold
structure of SO(3), the Lie group of per-body-part poses. We show that
ill-posed, but ubiquitous, 3D point estimate losses reduce sample diversity,
and employ only probabilistic training losses. Code is available at:
https://github.com/akashsengupta1997/HuManiFlow. | [
"cs.CV"
] | false |
2305.06973 | 2023-05-11T16:56:26Z | FreePoint: Unsupervised Point Cloud Instance Segmentation | [
"Zhikai Zhang",
"Jian Ding",
"Li Jiang",
"Dengxin Dai",
"Gui-Song Xia"
] | Instance segmentation of point clouds is a crucial task in the 3D field, with
numerous applications that involve localizing and segmenting objects in a
scene. However, achieving satisfactory results requires a large number of
manual annotations, which is a time-consuming and expensive process. To
alleviate dependency on annotations, we propose a method, called FreePoint, for
underexplored unsupervised class-agnostic instance segmentation on point
clouds. In detail, we represent the point features by combining coordinates,
colors, normals, and self-supervised deep features. Based on the point
features, we perform a multicut algorithm to segment point clouds into coarse
instance masks as pseudo labels, which are used to train a point cloud instance
segmentation model. To alleviate the inaccuracy of coarse masks during
training, we propose a weakly-supervised training strategy and corresponding
loss. Our work can also serve as an unsupervised pre-training pretext for
supervised semantic instance segmentation with limited annotations. For
class-agnostic instance segmentation on point clouds, FreePoint largely fills
the gap with its fully-supervised counterpart based on the state-of-the-art
instance segmentation model Mask3D and even surpasses some previous
fully-supervised methods. When serving as a pretext task and fine-tuning on
S3DIS, FreePoint outperforms training from scratch by 5.8% AP with only 10%
mask annotations. | [
"cs.CV"
] | false |
2305.07014 | 2023-05-11T17:55:11Z | Virtual Occlusions Through Implicit Depth | [
"Jamie Watson",
"Mohamed Sayed",
"Zawar Qureshi",
"Gabriel J. Brostow",
"Sara Vicente",
"Oisin Mac Aodha",
"Michael Firman"
] | For augmented reality (AR), it is important that virtual assets appear to
`sit among' real world objects. The virtual element should variously occlude
and be occluded by real matter, based on a plausible depth ordering. This
occlusion should be consistent over time as the viewer's camera moves.
Unfortunately, small mistakes in the estimated scene depth can ruin the
downstream occlusion mask, and thereby the AR illusion. Especially in real-time
settings, depths inferred near boundaries or across time can be inconsistent.
In this paper, we challenge the need for depth-regression as an intermediate
step.
We instead propose an implicit model for depth and use that to predict the
occlusion mask directly. The inputs to our network are one or more color
images, plus the known depths of any virtual geometry. We show how our
occlusion predictions are more accurate and more temporally stable than
predictions derived from traditional depth-estimation models. We obtain
state-of-the-art occlusion results on the challenging ScanNetv2 dataset and
superior qualitative results on real scenes. | [
"cs.CV"
] | false |
2305.07021 | 2023-05-11T17:58:17Z | Simple Token-Level Confidence Improves Caption Correctness | [
"Suzanne Petryk",
"Spencer Whitehead",
"Joseph E. Gonzalez",
"Trevor Darrell",
"Anna Rohrbach",
"Marcus Rohrbach"
] | The ability to judge whether a caption correctly describes an image is a
critical part of vision-language understanding. However, state-of-the-art
models often misinterpret the correctness of fine-grained details, leading to
errors in outputs such as hallucinating objects in generated captions or poor
compositional reasoning. In this work, we explore Token-Level Confidence, or
TLC, as a simple yet surprisingly effective method to assess caption
correctness. Specifically, we fine-tune a vision-language model on image
captioning, input an image and proposed caption to the model, and aggregate
either algebraic or learned token confidences over words or sequences to
estimate image-caption consistency. Compared to sequence-level scores from
pretrained models, TLC with algebraic confidence measures achieves a relative
improvement in accuracy by 10% on verb understanding in SVO-Probes and
outperforms prior state-of-the-art in image and group scores for compositional
reasoning in Winoground by a relative 37% and 9%, respectively. When training
data are available, a learned confidence estimator provides further improved
performance, reducing object hallucination rates in MS COCO Captions by a
relative 30% over the original model and setting a new state-of-the-art. | [
"cs.CV"
] | true |
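Algebraic token-level confidence is simple enough to state in code. A minimal PyTorch sketch follows, assuming per-token logits from a captioning model; the learned-confidence variant would replace the softmax probabilities with an estimator's outputs.

```python
import torch

def caption_confidence(token_logits, caption_ids, reduce="min"):
    """Algebraic token-level confidence for a proposed caption (sketch).

    token_logits: (T, V) captioning-model logits at each step of the
    caption; caption_ids: (T,) the caption's token ids. A low minimum
    token probability flags likely incorrect details.
    """
    probs = token_logits.softmax(dim=-1)
    conf = probs.gather(1, caption_ids.unsqueeze(1)).squeeze(1)  # (T,)
    return conf.min() if reduce == "min" else conf.mean()

score = caption_confidence(torch.randn(7, 30522), torch.randint(0, 30522, (7,)))
```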
2305.07024 | 2023-05-11T17:58:37Z | SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input
Views | [
"Weihao Cheng",
"Yan-Pei Cao",
"Ying Shan"
] | We study to generate novel views of indoor scenes given sparse input views.
The challenge is to achieve both photorealism and view consistency. We present
SparseGNV: a learning framework that incorporates 3D structures and image
generative models to generate novel views with three modules. The first module
builds a neural point cloud as underlying geometry, providing contextual
information and guidance for the target novel view. The second module utilizes
a transformer-based network to map the scene context and the guidance into a
shared latent space and autoregressively decodes the target view in the form of
discrete image tokens. The third module reconstructs the tokens into the image
of the target view. SparseGNV is trained across a large indoor scene dataset to
learn generalizable priors. Once trained, it can efficiently generate novel
views of an unseen indoor scene in a feed-forward manner. We evaluate SparseGNV
on both real-world and synthetic indoor scenes and demonstrate that it
outperforms state-of-the-art methods based on either neural radiance fields or
conditional image generation. | [
"cs.CV"
] | false |
2305.07027 | 2023-05-11T17:59:41Z | EfficientViT: Memory Efficient Vision Transformer with Cascaded Group
Attention | [
"Xinyu Liu",
"Houwen Peng",
"Ningxin Zheng",
"Yuqing Yang",
"Han Hu",
"Yixuan Yuan"
] | Vision transformers have shown great success due to their high model
capabilities. However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find that the speed of existing transformer models is commonly
bounded by memory inefficient operations, especially the tensor reshaping and
element-wise functions in MHSA. Therefore, we design a new building block with
a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN
layers, which improves memory efficiency while enhancing channel communication.
Moreover, we discover that the attention maps share high similarities across
heads, leading to computational redundancy. To address this, we present a
cascaded group attention module feeding attention heads with different splits
of the full feature, which not only saves computation cost but also improves
attention diversity. Comprehensive experiments demonstrate EfficientViT
outperforms existing efficient models, striking a good trade-off between speed
and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by
1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia
V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient
model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while
running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX
format. Code and models are available at
https://github.com/microsoft/Cream/tree/main/EfficientViT. | [
"cs.CV"
] | true |
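The sandwich layout can be sketched directly. The block below uses vanilla multi-head attention for brevity; the paper additionally replaces MHSA with cascaded group attention and adds depth-wise convolutions, so treat this as a structural illustration only.

```python
import torch
import torch.nn as nn

class SandwichBlock(nn.Module):
    """Sandwich layout: several FFNs around one attention layer (sketch)."""

    def __init__(self, dim, heads=4, ffn_ratio=2, n_ffn=2):
        super().__init__()
        ffn = lambda: nn.Sequential(nn.Linear(dim, dim * ffn_ratio),
                                    nn.ReLU(),
                                    nn.Linear(dim * ffn_ratio, dim))
        self.pre = nn.ModuleList(ffn() for _ in range(n_ffn))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.post = nn.ModuleList(ffn() for _ in range(n_ffn))

    def forward(self, x):                       # x: (B, N, dim) tokens
        for f in self.pre:                      # cheap, memory-friendly FFNs
            x = x + f(x)
        x = x + self.attn(x, x, x, need_weights=False)[0]  # single MHSA
        for f in self.post:
            x = x + f(x)
        return x

out = SandwichBlock(dim=64)(torch.randn(2, 49, 64))
```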
2305.07131 | 2023-05-11T20:43:50Z | Combining OCR Models for Reading Early Modern Printed Books | [
"Mathias Seuret",
"Janne van der Loop",
"Nikolaus Weichselbaumer",
"Martin Mayr",
"Janina Molnar",
"Tatjana Hass",
"Florian Kordon",
"Anguelos Nicolau",
"Vincent Christlein"
] | In this paper, we investigate the usage of fine-grained font recognition on
OCR for books printed from the 15th to the 18th century. We used a newly
created dataset for OCR of early printed books for which fonts are labeled with
bounding boxes. We know not only the font group used for each character, but
the locations of font changes as well. In books of this period, we frequently
find font group changes mid-line or even mid-word that indicate changes in
language. We consider 8 different font groups present in our corpus and
investigate 13 different subsets: the whole dataset and text lines with a
single font, multiple fonts, Roman fonts, Gothic fonts, and each of the
considered fonts, respectively. We show that OCR performance is strongly
impacted by font style and that selecting fine-tuned models with font group
recognition has a very positive impact on the results. Moreover, we developed a
system using local font group recognition in order to combine the output of
multiple font recognition models, and show that while slower, this approach
performs better not only on text lines composed of multiple fonts but on the
ones containing a single font only as well. | [
"cs.CV"
] | false |
2305.06511 | 2023-05-11T01:24:32Z | ParamNet: A Parameter-variable Network for Fast Stain Normalization | [
"Hongtao Kang",
"Die Luo",
"Li Chen",
"Junbo Hu",
"Shenghua Cheng",
"Tingwei Quan",
"Shaoqun Zeng",
"Xiuli Liu"
] | In practice, digital pathology images are often affected by various factors,
resulting in very large differences in color and brightness. Stain
normalization can effectively reduce the differences in color and brightness of
digital pathology images, thus improving the performance of computer-aided
diagnostic systems. Conventional stain normalization methods rely on one or
several reference images, but such a small set of images can hardly represent
the entire dataset. Although learning-based stain normalization methods are a
general approach, they use complex deep networks, which not only greatly reduce
computational efficiency, but also risk introducing artifacts. StainNet is a
fast and robust stain normalization network, but it lacks sufficient capacity
for complex stain normalization due to its overly simple network structure. In
this study, we propose a parameter-variable stain normalization
network, ParamNet. ParamNet contains a parameter prediction sub-network and a
color mapping sub-network, where the parameter prediction sub-network can
automatically determine the appropriate parameters for the color mapping
sub-network according to each input image. This parameter-variable design
ensures that our network has sufficient capacity for various stain
normalization tasks. The color mapping sub-network is a fully 1x1 convolutional
network with a total of 59 variable parameters, which allows our network to be
extremely computationally efficient and does not introduce artifacts. The
results on cytopathology and histopathology datasets show that our ParamNet
outperforms state-of-the-art methods and can effectively improve the
generalization of classifiers on pathology diagnosis tasks. The code is
available at https://github.com/khtao/ParamNet. | [
"eess.IV",
"cs.CV"
] | false |
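The stated 59-parameter budget pins down one plausible fully 1x1-convolutional layout. A hedged PyTorch sketch: a 3->8->3 mapping with biases has exactly (3*8+8)+(8*3+3)=59 parameters, though the paper's exact arrangement may differ, and in ParamNet these values are predicted per image by a separate sub-network rather than learned directly as below.

```python
import torch.nn as nn

class ColorMappingNet(nn.Module):
    """One possible 59-parameter 1x1-conv color mapping (illustrative).

    Pixel-wise color mapping: every spatial position is transformed
    independently, so no artifacts from spatial kernels are introduced.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=1), nn.ReLU(),
            nn.Conv2d(8, 3, kernel_size=1))

    def forward(self, x):            # x: (B, 3, H, W) pathology image
        return self.net(x)

print(sum(p.numel() for p in ColorMappingNet().parameters()))  # 59
```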
2305.06525 | 2023-05-11T02:05:30Z | Pyramid Texture Filtering | [
"Qing Zhang",
"Hao Jiang",
"Yongwei Nie",
"Wei-Shi Zheng"
] | We present a simple but effective technique to smooth out textures while
preserving the prominent structures. Our method is built upon a key observation
-- the coarsest level in a Gaussian pyramid often naturally eliminates textures
and summarizes the main image structures. This inspires our central idea for
texture filtering, which is to progressively upsample the very low-resolution
coarsest Gaussian pyramid level to a full-resolution texture smoothing result
with well-preserved structures, under the guidance of each fine-scale Gaussian
pyramid level and its associated Laplacian pyramid level. We show that our
approach is effective to separate structure from texture of different scales,
local contrasts, and forms, without degrading structures or introducing visual
artifacts. We also demonstrate the applicability of our method on various
applications including detail enhancement, image abstraction, HDR tone mapping,
inverse halftoning, and LDR image enhancement. | [
"cs.CV",
"cs.GR"
] | false |
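The coarse-to-fine scheme can be approximated in a few lines of OpenCV. This sketch substitutes a joint bilateral filter for the paper's exact Gaussian-and-Laplacian-guided upsampling, so it illustrates the pipeline rather than reproducing the method (requires opencv-contrib-python for cv2.ximgproc).

```python
import cv2
import numpy as np

def pyramid_texture_filter(img, levels=5, sigma_s=5, sigma_r=0.07):
    """Simplified pyramid texture filtering (illustrative sketch).

    Build a Gaussian pyramid, take the (texture-free) coarsest level,
    then progressively upsample it, using each finer pyramid level as
    the guide of a joint bilateral filter.
    """
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    out = pyr[-1]
    for guide in reversed(pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=(guide.shape[1], guide.shape[0]))
        out = cv2.ximgproc.jointBilateralFilter(
            guide, out, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s)
    return out

img = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in input
smooth = pyramid_texture_filter(img)
```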
2305.06565 | 2023-05-11T04:49:37Z | Realization RGBD Image Stylization | [
"Bhavya Sehgal",
"Vaishnavi Mendu",
"Aparna Mendu"
] | This research paper explores the application of style transfer in computer
vision using RGB images and their corresponding depth maps. We propose a novel
method that incorporates the depth map and a heatmap of the RGB image to
generate more realistic style transfer results. We compare our method to the
traditional neural style transfer approach and find that our method outperforms
it in terms of producing more realistic color and style. The proposed method
can be applied to various computer vision applications, such as image editing
and virtual reality, to improve the realism of generated images. Overall, our
findings demonstrate the potential of incorporating depth information and
heatmap of RGB images in style transfer for more realistic results. | [
"cs.CV",
"eess.IV"
] | false |
2305.06786 | 2023-05-11T13:21:29Z | ReMark: Receptive Field based Spatial WaterMark Embedding Optimization
using Deep Network | [
"Natan Semyonov",
"Rami Puzis",
"Asaf Shabtai",
"Gilad Katz"
] | Watermarking is one of the most important copyright protection tools for
digital media. The most challenging type of watermarking is the imperceptible
one, which embeds identifying information in the data while retaining the
latter's original quality. To fulfill its purpose, watermarks need to withstand
various distortions whose goal is to damage their integrity. In this study, we
investigate a novel deep learning-based architecture for embedding
imperceptible watermarks. The key insight guiding our architecture design is
the need to correlate the dimensions of our watermarks with the sizes of
receptive fields (RF) of modules of our architecture. This adaptation makes our
watermarks more robust, while also enabling us to generate them in a way that
better maintains image quality. Extensive evaluations on a wide variety of
distortions show that the proposed method is robust against most common
distortions on watermarks including collusive distortion. | [
"cs.CV",
"eess.IV"
] | false |
2305.06809 | 2023-05-11T14:03:26Z | Collection Space Navigator: An Interactive Visualization Interface for
Multidimensional Datasets | [
"Tillmann Ohm",
"Mar Canet Solà",
"Andres Karjus",
"Maximilian Schich"
] | We introduce the Collection Space Navigator (CSN), a browser-based
visualization tool to explore, research, and curate large collections of visual
digital artifacts that are associated with multidimensional data, such as
vector embeddings or tables of metadata. Media objects such as images are often
encoded as numerical vectors, for e.g. based on metadata or using machine
learning to embed image information. Yet, while such procedures are widespread
for a range of applications, it remains a challenge to explore, analyze, and
understand the resulting multidimensional spaces in a more comprehensive
manner. Dimensionality reduction techniques such as t-SNE or UMAP often serve
to project high-dimensional data into low dimensional visualizations, yet
require interpretation themselves as the remaining dimensions are typically
abstract. Here, the Collection Space Navigator provides a customizable
interface that combines two-dimensional projections with a set of configurable
multidimensional filters. As a result, the user is able to view and investigate
collections, by zooming and scaling, by transforming between projections, by
filtering dimensions via range sliders, and advanced text filters. Insights
that are gained during the interaction can be fed back into the original data
via ad hoc exports of filtered metadata and projections. This paper comes with
a functional showcase demo using a large digitized collection of classical
Western art. The Collection Space Navigator is open source. Users can
reconfigure the interface to fit their own data and research needs, including
projections and filter controls. The CSN is ready to serve a broad community. | [
"cs.CV",
"cs.HC"
] | false |
2305.06813 | 2023-05-11T14:09:05Z | Generation of Structurally Realistic Retinal Fundus Images with
Diffusion Models | [
"Sojung Go",
"Younghoon Ji",
"Sang Jun Park",
"Soochahn Lee"
] | We introduce a new technique for generating retinal fundus images that have
anatomically accurate vascular structures, using diffusion models. We generate
artery/vein masks to create the vascular structure, which we then condition to
produce retinal fundus images. The proposed method can generate high-quality
images with more realistic vascular structures and can create a diverse range
of images based on the strengths of the diffusion model. We present
quantitative evaluations that demonstrate the performance improvement using our
method for data augmentation on vessel segmentation and artery/vein
classification. We also present Turing test results by clinical experts,
showing that our generated images are difficult to distinguish with real
images. We believe that our method can be applied to construct stand-alone
datasets that are irrelevant of patient privacy. | [
"eess.IV",
"cs.CV"
] | false |
2305.06842 | 2023-05-11T14:38:27Z | Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network | [
"P. Deivendran",
"P. Suresh Babu",
"G. Malathi",
"K. Anbazhagan",
"R. Senthil Kumar"
] | Human communication uses vocal and non-verbal signals to interact with
others. Human expression is a significant biometric object in the image and
video databases of surveillance systems. Face recognition plays a serious role
in biometric methods and is attractive for numerous applications, including
visual surveillance and security. Facial expressions are a form of nonverbal
communication; recognizing them helps improve human-machine interaction. This
paper proposes an approach for face- and illumination-invariant recognition of
facial expressions from images, from which the person's face can be computed.
The facial expression is fed to a CNN classifier, a deep feed-forward
artificial neural network, to categorize the acquired picture into different
emotion categories. The outcome surpasses human performance and remains robust
under pose variation. Varying lighting conditions can influence the fitting
process and reduce recognition precision. Results illustrate that reliable
recognition of facial appearance under changing lighting conditions yields an
efficient representation of pure and mixed facial expressions of emotion. The
process can also manage the proportions of the dissimilar basic expressions
that are mixed together to produce plausible emotional facial expressions. Our
system contains a pre-defined dataset, developed by a data scientist, that
includes all pure and mixed expressions. On average, the dataset achieved
92.4% exact validation of the expressions synthesized by our technique. These
facial expressions are compared against the pre-defined dataset within our
system. If the system recognizes a person in an abnormal condition, an alert
message is sent to a nearby hospital/doctor. | [
"cs.CV",
"cs.AI"
] | false |
2305.06963 | 2023-05-11T16:42:24Z | Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image
Classification Using Transformers | [
"Firas Khader",
"Jakob Nikolas Kather",
"Tianyu Han",
"Sven Nebelung",
"Christiane Kuhl",
"Johannes Stegmaier",
"Daniel Truhn"
] | Whole-Slide Imaging allows for the capturing and digitization of
high-resolution images of histological specimens. Automated analysis of such
images using deep learning models is therefore in high demand. The transformer
architecture has been proposed as a possible candidate for effectively
leveraging the high-resolution information. Here, the whole-slide image is
partitioned into smaller image patches and feature tokens are extracted from
these image patches. However, while the conventional transformer allows for a
simultaneous processing of a large set of input tokens, the computational
demand scales quadratically with the number of input tokens and thus
quadratically with the number of image patches. To address this problem we
propose a novel cascaded cross-attention network (CCAN) based on the
cross-attention mechanism that scales linearly with the number of extracted
patches. Our experiments demonstrate that this architecture is at least on-par
with and even outperforms other attention-based state-of-the-art methods on two
public datasets: On the use-case of lung cancer (TCGA NSCLC) our model reaches
a mean area under the receiver operating characteristic (AUC) of 0.970 $\pm$
0.008 and on renal cancer (TCGA RCC) reaches a mean AUC of 0.985 $\pm$ 0.004.
Furthermore, we show that our proposed model is efficient in low-data regimes,
making it a promising approach for analyzing whole-slide images in
resource-limited settings. To foster research in this direction, we make our
code publicly available on GitHub: XXX. | [
"cs.CV",
"cs.LG"
] | false |
2305.06965 | 2023-05-11T16:43:39Z | Transformers for CT Reconstruction From Monoplanar and Biplanar
Radiographs | [
"Firas Khader",
"Gustav Müller-Franzes",
"Tianyu Han",
"Sven Nebelung",
"Christiane Kuhl",
"Johannes Stegmaier",
"Daniel Truhn"
] | Computed Tomography (CT) scans provide detailed and accurate information of
internal structures in the body. They are constructed by sending x-rays through
the body from different directions and combining this information into a
three-dimensional volume. Such volumes can then be used to diagnose a wide
range of conditions and allow for volumetric measurements of organs. In this
work, we tackle the problem of reconstructing CT images from biplanar x-rays
only. X-rays are widely available and even if the CT reconstructed from these
radiographs is not a replacement of a complete CT in the diagnostic setting, it
might serve to spare the patients from radiation where a CT is only acquired
for rough measurements such as determining organ size. We propose a novel
method based on the transformer architecture, by framing the underlying task as
a language translation problem. Radiographs and CT images are first embedded
into latent quantized codebook vectors using two different autoencoder
networks. We then train a GPT model, to reconstruct the codebook vectors of the
CT image, conditioned on the codebook vectors of the x-rays and show that this
approach leads to realistic looking images. To encourage further research in
this direction, we make our code publicly available on GitHub: XXX. | [
"eess.IV",
"cs.CV"
] | false |
2305.07102 | 2023-05-11T19:24:33Z | Salient Mask-Guided Vision Transformer for Fine-Grained Classification | [
"Dmitry Demidov",
"Muhammad Hamza Sharif",
"Aliakbar Abdurahimov",
"Hisham Cholakkal",
"Fahad Shahbaz Khan"
] | Fine-grained visual classification (FGVC) is a challenging computer vision
problem, where the task is to automatically recognise objects from subordinate
categories. One of its main difficulties is capturing the most discriminative
inter-class variances among visually similar classes. Recently, methods with
Vision Transformer (ViT) have demonstrated noticeable achievements in FGVC,
generally by employing the self-attention mechanism with additional
resource-consuming techniques to distinguish potentially discriminative regions
while disregarding the rest. However, such approaches may struggle to
effectively focus on truly discriminative regions due to only relying on the
inherent self-attention mechanism, resulting in the classification token likely
aggregating global information from less-important background patches.
Moreover, due to the scarcity of datapoints, classifiers may fail to
find the most helpful inter-class distinguishing features, since other
unrelated but distinctive background regions may be falsely recognised as being
valuable. To this end, we introduce a simple yet effective Salient Mask-Guided
Vision Transformer (SM-ViT), where the discriminability of the standard ViT's
attention maps is boosted through salient masking of potentially discriminative
foreground regions. Extensive experiments demonstrate that with the standard
training procedure our SM-ViT achieves state-of-the-art performance on popular
FGVC benchmarks among existing ViT-based approaches while requiring fewer
resources and lower input image resolution. | [
"cs.CV",
"cs.AI"
] | false |
2305.07119 | 2023-05-11T20:17:41Z | Graph Neural Network for Accurate and Low-complexity SAR ATR | [
"Bingyi Zhang",
"Sasindu Wijeratne",
"Rajgopal Kannan",
"Viktor Prasanna",
"Carl Busart"
] | Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is the key
technique for remote sensing image recognition. The state-of-the-art works
exploit the deep convolutional neural networks (CNNs) for SAR ATR, leading to
high computation costs. These deep CNN models are unsuitable to be deployed on
resource-limited platforms. In this work, we propose a graph neural network
(GNN) model to achieve accurate and low-latency SAR ATR. We transform the input
SAR image into the graph representation. The proposed GNN model consists of a
stack of GNN layers that operates on the input graph to perform target
classification. Unlike the state-of-the-art CNNs, which need heavy convolution
operations, the proposed GNN model has low computation complexity and achieves
comparable high accuracy. The GNN-based approach enables our proposed
\emph{input pruning} strategy. By filtering out the irrelevant vertices in the
input graph, we can reduce the computation complexity. Moreover, we propose the
\emph{model pruning} strategy to sparsify the model weight matrices which
further reduces the computation complexity. We evaluate the proposed GNN model
on the MSTAR dataset and ship discrimination dataset. The evaluation results
show that the proposed GNN model achieves 99.38\% and 99.7\% classification
accuracy on the above two datasets, respectively. The proposed pruning
strategies can prune 98.6\% input vertices and 97\% weight entries with
negligible accuracy loss. Compared with the state-of-the-art CNNs, the proposed
GNN model has only 1/3000 computation cost and 1/80 model size. | [
"cs.CV",
"cs.DC"
] | false |
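Input pruning on a pixel graph can be sketched with a magnitude threshold. The criterion below is a hypothetical stand-in for the paper's relevance filtering; it only demonstrates how pruned vertices shrink the graph the GNN has to process.

```python
import numpy as np

def image_to_pruned_graph(img, thresh=0.5):
    """SAR image -> pruned pixel graph (illustrative sketch).

    Each pixel is a vertex carrying its intensity; vertices below a
    magnitude threshold are pruned before 4-neighbour edges are built.
    """
    h, w = img.shape
    keep = np.flatnonzero(img.ravel() >= thresh)     # surviving vertices
    remap = -np.ones(h * w, dtype=int)
    remap[keep] = np.arange(len(keep))
    edges = []
    for v in keep:
        i, j = divmod(int(v), w)
        for ni, nj in ((i + 1, j), (i, j + 1)):      # right/down neighbours
            if ni < h and nj < w and remap[ni * w + nj] >= 0:
                edges.append((remap[v], remap[ni * w + nj]))
    return img.ravel()[keep], np.array(edges, dtype=int).T

feats, edge_index = image_to_pruned_graph(np.random.rand(64, 64))
print(feats.shape, edge_index.shape)   # roughly half the vertices survive
```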
2305.07128 | 2023-05-11T20:33:29Z | Pixel-wise rational model for structured light system | [
"Raúl Vargas",
"Lenny A. Romero",
"Song Zhang",
"Andres G. Marrugo"
] | This Letter presents a novel structured light system model that effectively
considers local lens distortion by pixel-wise rational functions. We leverage
the stereo method for initial calibration and then estimate the rational model
for each pixel. Our proposed model can achieve high measurement accuracy within
and outside the calibration volume, demonstrating its robustness and accuracy. | [
"physics.optics",
"cs.CV"
] | false |
2305.06912 | 2023-05-11T15:57:45Z | Meta-Learners for Few-Shot Weakly-Supervised Medical Image Segmentation | [
"Hugo Oliveira",
"Pedro H. T. Gama",
"Isabelle Bloch",
"Roberto Marcondes Cesar Jr"
] | Most uses of Meta-Learning in visual recognition are very often applied to
image classification, with a relative lack of works in other tasks such as
segmentation and detection. We propose a generic Meta-Learning framework for
few-shot weakly-supervised segmentation in medical imaging domains. We conduct
a comparative analysis of meta-learners from distinct paradigms adapted to
few-shot image segmentation in different sparsely annotated radiological tasks.
The imaging modalities include 2D chest, mammographic and dental X-rays, as
well as 2D slices of volumetric tomography and resonance images. Our
experiments consider a total of 9 meta-learners, 4 backbones and multiple
target organ segmentation tasks. We explore small-data scenarios in radiology
with varying weak annotation styles and densities. Our analysis shows that
metric-based meta-learning approaches achieve better segmentation results in
tasks with smaller domain shifts in comparison to the meta-training datasets,
while some gradient- and fusion-based meta-learners are more generalizable to
larger domain shifts. | [
"cs.CV",
"cs.LG",
"cs.NE"
] | false |
2305.06978 | 2023-05-11T17:06:37Z | Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image
Segmentation | [
"Ziyuan Zhao",
"Fangcheng Zhou",
"Zeng Zeng",
"Cuntai Guan",
"S. Kevin Zhou"
] | Domain shift and label scarcity heavily limit deep learning applications to
various medical image analysis tasks. Unsupervised domain adaptation (UDA)
techniques have recently achieved promising cross-modality medical image
segmentation by transferring knowledge from a label-rich source domain to an
unlabeled target domain. However, it is also difficult to collect annotations
from the source domain in many clinical applications, rendering most prior
works suboptimal with the label-scarce source domain, particularly for few-shot
scenarios, where only a few source labels are accessible. To achieve efficient
few-shot cross-modality segmentation, we propose a novel
transformation-consistent meta-hallucination framework, meta-hallucinator, with
the goal of learning to diversify data distributions and generate useful
examples for enhancing cross-modality performance. In our framework,
hallucination and segmentation models are jointly trained with the
gradient-based meta-learning strategy to synthesize examples that lead to good
segmentation performance on the target domain. To further facilitate data
hallucination and cross-domain knowledge transfer, we develop a self-ensembling
model with a hallucination-consistent property. Our meta-hallucinator can
seamlessly collaborate with the meta-segmenter for learning to hallucinate with
mutual benefits from a combined view of meta-learning and self-ensembling
learning. Extensive studies on MM-WHS 2017 dataset for cross-modality cardiac
segmentation demonstrate that our method performs favorably against various
approaches by a large margin in the few-shot UDA scenario. | [
"cs.CV",
"cs.AI",
"eess.IV"
] | false |
2305.07135 | 2023-05-11T20:57:29Z | Divide-and-Conquer the NAS puzzle in Resource Constrained Federated
Learning Systems | [
"Yeshwanth Venkatesha",
"Youngeun Kim",
"Hyoungseob Park",
"Priyadarshini Panda"
] | Federated Learning (FL) is a privacy-preserving distributed machine learning
approach geared towards applications in edge devices. However, the problem of
designing custom neural architectures in federated environments is not tackled
from the perspective of overall system efficiency. In this paper, we propose
DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural
Architecture Search (NAS) in a federated system by systematically sampling the
search space. We propose a novel diversified sampling strategy that balances
exploration and exploitation of the search space by initially maximizing the
distance between the samples and progressively shrinking this distance as the
training progresses. We then perform channel pruning to reduce the training
complexity at the devices further. We show that our approach outperforms
several sampling strategies including Hadamard sampling, where the samples are
maximally separated. We evaluate our method on the CIFAR10, CIFAR100, EMNIST,
and TinyImagenet benchmarks and show a comprehensive analysis of different
aspects of federated learning such as scalability and non-IID data. DC-NAS
achieves near iso-accuracy compared to full-scale federated NAS with 50%
fewer resources. | [
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
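The diversified sampling idea above (samples start maximally separated, then the separation target shrinks over training) can be sketched as rejection sampling over binary supernet masks. The Hamming-distance target and its linear decay are illustrative assumptions, not DC-NAS's exact sampler.

```python
# Hedged sketch: diversified sampling of supernet masks with a shrinking
# pairwise-distance target. All schedule details are illustrative assumptions.
import numpy as np

def sample_masks(n_rounds: int, n_ops: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    samples = [rng.integers(0, 2, n_ops)]
    for t in range(1, n_rounds):
        min_dist = int(n_ops * 0.5 * (1 - t / n_rounds))  # shrink distance over time
        cand = rng.integers(0, 2, n_ops)
        for _ in range(100):  # rejection sampling against the current target
            if min(np.sum(cand != s) for s in samples) >= min_dist:
                break
            cand = rng.integers(0, 2, n_ops)
        samples.append(cand)
    return np.array(samples)

masks = sample_masks(n_rounds=8, n_ops=16)  # one row = one sampled subnetwork
print(masks)
```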
2305.07161 | 2023-05-11T22:20:05Z | A Deep Learning-based Compression and Classification Technique for Whole
Slide Histopathology Images | [
"Agnes Barsi",
"Suvendu Chandan Nayak",
"Sasmita Parida",
"Raj Mani Shukla"
] | This paper presents an autoencoder-based neural network architecture to
compress histopathological images while retaining the denser and more
meaningful representation of the original images. Current research into
improving compression algorithms is focused on methods allowing lower
compression rates for Regions of Interest (ROI-based approaches). Neural
networks are great at extracting meaningful semantic representations from
images, and are therefore able to select the regions to be considered of interest
for the compression process. In this work, we focus on the compression of whole
slide histopathology images. The objective is to build an ensemble of neural
networks that enables a compressive autoencoder in a supervised fashion to
retain a denser and more meaningful representation of the input histology
images. Our proposed system is a simple and novel method to supervise
compressive neural networks. We test the compressed images using transfer
learning-based classifiers and show that they provide promising accuracy and
classification performance. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
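A minimal PyTorch sketch of the core idea above: a compressive autoencoder trained jointly with a classification head so that the latent code retains diagnostically relevant content. Layer sizes, latent width, and the unweighted loss sum are illustrative assumptions.

```python
# Hedged sketch: a supervised compressive autoencoder. The joint
# reconstruction + classification loss mirrors the idea in the abstract;
# the architecture itself is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedCAE(nn.Module):
    def __init__(self, latent: int = 64, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(latent, n_classes),
        )

    def forward(self, x):
        z = self.encoder(x)  # compressed representation
        return self.decoder(z), self.classifier(z)

x = torch.rand(4, 3, 64, 64)  # a toy batch of slide tiles
recon, logits = SupervisedCAE()(x)
loss = F.mse_loss(recon, x) + F.cross_entropy(logits, torch.randint(0, 2, (4,)))
print(recon.shape, logits.shape, float(loss))
```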
2305.07167 | 2023-05-11T22:40:47Z | OneCAD: One Classifier for All image Datasets using multimodal learning | [
"Shakti N. Wadekar",
"Eugenio Culurciello"
] | Vision-Transformers (ViTs) and Convolutional neural networks (CNNs) are
widely used Deep Neural Networks (DNNs) for classification tasks. These model
architectures are dependent on the number of classes in the dataset they were
trained on. Any change in the number of classes leads to a change (partial or
full) in the model's architecture. This work addresses the question: Is it
possible to create a number-of-class-agnostic model architecture? This would
allow the model's architecture to be independent of the dataset it is trained
on. This work highlights the issues with the current architectures (ViTs and
CNNs) and proposes a training and inference framework, OneCAD (One Classifier
for All image Datasets), to achieve a close-to number-of-class-agnostic
transformer model. To the best of our knowledge, this is the first work to use
Mask-Image-Modeling (MIM) with multimodal learning for the classification task
to create a DNN model
architecture agnostic to the number of classes. Preliminary results are shown
on natural and medical image datasets. Datasets: MNIST, CIFAR10, CIFAR100 and
COVIDx. Code will soon be publicly available on GitHub. | [
"cs.CV",
"cs.CL",
"cs.LG",
"eess.IV"
] | false |
2305.13918 | 2023-05-11T13:29:27Z | Development and Whole-Body Validation of Personalizable Female and Male
Pedestrian SAFER Human Body Models | [
"Natalia Lindgren",
"Qiantailang Yuan",
"Bengt Pipkorn",
"Svein Kleiven",
"Xiaogai Li"
] | Vulnerable road users are overrepresented in the worldwide number of
road-traffic injury victims. Developing biofidelic male and female pedestrian
HBMs representing a range of anthropometries is imperative to follow through
with the efforts to increase road safety and propose intervention strategies.
In this study, 50th percentile male and female pedestrian versions of the SAFER
HBM were developed via a newly developed image registration-based mesh morphing
framework for subject personalization. The HBM and its accompanying
personalization framework were evaluated by means of a set of cadaver
experiments, where subjects were struck laterally by a generic sedan buck. In
the simulated whole-body pedestrian collisions, the personalized HBMs
demonstrate a good capability of reproducing the trajectories and head
kinematics observed in lateral impacts. The presented pedestrian HBMs and
personalization framework provide robust means to thoroughly and accurately
reconstruct and evaluate pedestrian-to-vehicle collisions. | [
"cs.CV",
"cs.RO",
"eess.IV"
] | false |
2305.15417 | 2023-05-11T11:51:41Z | Entropy-Aware Similarity for Balanced Clustering: A Case Study with
Melanoma Detection | [
"Seok Bin Son",
"Soohyun Park",
"Joongheon Kim"
] | Clustering data is an unsupervised learning approach that aims to divide a
set of data points into multiple groups. It is a crucial yet demanding subject
in machine learning and data mining. Its successful applications span various
fields. However, conventional clustering techniques fail to account for the
significance of balance required in specific applications. Therefore, this
paper addresses the challenge of imbalanced clustering problems and presents a
new method for balanced clustering by utilizing entropy-aware similarity, which
can be defined as the degree of balance. We coin the term
entropy-aware similarity for balanced clustering (EASB), which maximizes
balance during clustering by complementary clustering of unbalanced data and
incorporating entropy in a novel similarity formula that accounts for both
angular differences and distances. The effectiveness of the proposed approach
is evaluated on actual melanoma medical data, specifically the International
Skin Imaging Collaboration (ISIC) 2019 and 2020 challenge datasets, to
demonstrate how it can successfully cluster the data while preserving balance.
Lastly, we confirm that the proposed method exhibits outstanding
performance in detecting melanoma compared to classical methods. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
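The abstract does not spell out the EASB formula, so the following is only a hedged sketch of an entropy-aware assignment step: a score combining angular similarity, distance, and an entropy bonus that rewards assignments keeping cluster sizes balanced.

```python
# Hedged sketch: entropy-aware point-to-cluster assignment. The score
# combining cosine similarity, distance, and a balance-entropy bonus is an
# illustrative assumption, not the paper's exact EASB formula.
import numpy as np

def balance_entropy(sizes: np.ndarray) -> float:
    p = sizes / sizes.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def assign_entropy_aware(x, centroids, sizes, lam=0.5, mu=1.0) -> int:
    cos = centroids @ x / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(x) + 1e-12)
    dist = np.linalg.norm(centroids - x, axis=1)
    scores = np.empty(len(centroids))
    for k in range(len(centroids)):
        trial = sizes.copy()
        trial[k] += 1  # cluster-size entropy if x joined cluster k
        scores[k] = cos[k] - lam * dist[k] + mu * balance_entropy(trial)
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 8))
sizes = np.array([10.0, 2.0, 2.0])  # currently imbalanced clusters
print(assign_entropy_aware(rng.normal(size=8), centroids, sizes))
```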
2305.06646 | 2023-05-11T08:25:25Z | Object based Bayesian full-waveform inversion for shear elastography | [
"Ana Carpio",
"Elena Cebrian",
"Andrea Gutierrez"
] | We develop a computational framework to quantify uncertainty in shear
elastography imaging of anomalies in tissues. We adopt a Bayesian inference
formulation. Given the observed data, a forward model and their uncertainties,
we find the posterior probability of parameter fields representing the geometry
of the anomalies and their shear moduli. To construct a prior probability, we
exploit the topological energies of associated objective functions. We
demonstrate the approach on synthetic two dimensional tests with smooth and
irregular shapes. Sampling the posterior distribution by Markov Chain Monte
Carlo (MCMC) techniques we obtain statistical information on the shear moduli
and the geometrical properties of the anomalies. General affine-invariant
ensemble MCMC samplers are adequate for shapes characterized by parameter sets
of low to moderate dimension. However, MCMC methods are computationally
expensive. For simple shapes, we devise a fast optimization scheme to calculate
the maximum a posteriori (MAP) estimate representing the most likely parameter
values. Then, we approximate the posterior distribution by a Gaussian
distribution found by linearization about the MAP point to capture the main
mode at a low computational cost. | [
"math.NA",
"cs.CV",
"cs.NA",
"math.OC",
"physics.comp-ph",
"physics.data-an"
] | false |
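Since the abstract points to general affine-invariant ensemble MCMC samplers for low-dimensional parameter sets, here is a toy sketch using the emcee package (one widely used implementation of that sampler family; the paper does not name a specific one). The Gaussian log-posterior stands in for the actual forward-model misfit.

```python
# Hedged sketch: affine-invariant ensemble MCMC over a low-dimensional
# parameter set (e.g., anomaly geometry + shear modulus). The log-posterior
# below is a toy stand-in for the real forward model and prior.
import numpy as np
import emcee

def log_posterior(theta: np.ndarray) -> float:
    target = np.array([1.0, -0.5, 2.0])            # toy "true" parameters
    misfit = -0.5 * np.sum((theta - target) ** 2)  # stand-in data likelihood
    prior = -0.5 * np.sum((theta / 5.0) ** 2)      # broad Gaussian prior
    return misfit + prior

ndim, nwalkers = 3, 32
p0 = np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.mean(axis=0), samples.std(axis=0))   # posterior summaries
```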
2305.06535 | 2023-05-11T02:44:29Z | KGA: A General Machine Unlearning Framework Based on Knowledge Gap
Alignment | [
"Lingzhi Wang",
"Tong Chen",
"Wei Yuan",
"Xingshan Zeng",
"Kam-Fai Wong",
"Hongzhi Yin"
] | Recent legislation of the "right to be forgotten" has led to interest in
machine unlearning, where learned models are endowed with the ability to
forget information about specific training instances as if they had never
existed in the training set. Previous work mainly focuses on computer vision
scenarios and largely ignores the essentials of unlearning in the NLP field, where
text data contains more explicit and sensitive personal information than
images. In this paper, we propose a general unlearning framework called KGA to
induce forgetfulness. Different from previous work that tries to recover
gradients or forces models to perform close to one specific distribution, KGA
maintains distribution differences (i.e., knowledge gap). This relaxes the
distribution assumption. Furthermore, we first apply the unlearning method to
various NLP tasks (i.e., classification, translation, response generation) and
propose several pertinent unlearning evaluation metrics. Experiments on
large-scale datasets show that KGA yields comprehensive improvements over
baselines, where extensive analyses further validate the effectiveness of KGA
and provide insight into unlearning for NLP tasks. | [
"cs.CL"
] | false |
2305.06539 | 2023-05-11T03:01:40Z | Semantic uncertainty guides the extension of conventions to new
referents | [
"Ron Eliav",
"Anya Ji",
"Yoav Artzi",
"Robert D. Hawkins"
] | A long tradition of studies in psycholinguistics has examined the formation
and generalization of ad hoc conventions in reference games, showing how newly
acquired conventions for a given target transfer to new referential contexts.
However, another axis of generalization remains understudied: how do
conventions formed for one target transfer to completely distinct targets, when
specific lexical choices are unlikely to repeat? This paper presents two dyadic
studies (N = 240) that address this axis of generalization, focusing on the
role of nameability -- the a priori likelihood that two individuals will share
the same label. We leverage the recently-released KiloGram dataset, a
collection of abstract tangram images that is orders of magnitude larger than
previously available, exhibiting high diversity of properties like nameability.
Our first study asks how nameability shapes convention formation, while the
second asks how new conventions generalize to entirely new targets of
reference. Our results raise new questions about how ad hoc conventions extend
beyond target-specific re-use of specific lexical choices. | [
"cs.CL"
] | false |
2305.06615 | 2023-05-11T07:23:01Z | Autocorrelations Decay in Texts and Applicability Limits of Language
Models | [
"Nikolay Mikhaylovskiy",
"Ilya Churilov"
] | We show that the laws of autocorrelations decay in texts are closely related
to applicability limits of language models. Using distributional semantics we
empirically demonstrate that autocorrelations of words in texts decay according
to a power law. We show that distributional semantics provides coherent
autocorrelations decay exponents for texts translated to multiple languages.
The autocorrelations decay in generated texts is quantitatively and often
qualitatively different from the literary texts. We conclude that language
models exhibiting Markov behavior, including large autoregressive language
models, may have limitations when applied to long texts, whether for analysis
or generation. | [
"cs.CL",
"I.2.7"
] | false |
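A hedged sketch of the measurement the abstract describes: estimate a lag-dependent autocorrelation of word vectors and fit a power-law exponent on a log-log scale. The estimator and the synthetic stand-in for a text are assumptions, not the paper's exact setup.

```python
# Hedged sketch: power-law fit to embedding autocorrelations. A long-range
# correlated random series stands in for the word vectors of a real text.
import numpy as np

def embedding_autocorrelation(vectors: np.ndarray, max_lag: int) -> np.ndarray:
    """Mean inner product of centered, normalized word vectors at each lag."""
    v = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12)
    v = v - v.mean(axis=0)
    return np.array([np.mean(np.sum(v[:-lag] * v[lag:], axis=1))
                     for lag in range(1, max_lag + 1)])

def power_law_exponent(ac: np.ndarray) -> float:
    lags = np.arange(1, len(ac) + 1)
    mask = ac > 0  # fit only the decaying positive part
    slope, _ = np.polyfit(np.log(lags[mask]), np.log(ac[mask]), 1)
    return slope   # ac ~ lag**slope, slope < 0 for decay

rng = np.random.default_rng(0)
topic_drift = np.cumsum(rng.normal(size=(5000, 16)), axis=0) / 50.0
ac = embedding_autocorrelation(topic_drift + rng.normal(size=(5000, 16)), 200)
print(power_law_exponent(ac))
```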
2305.06616 | 2023-05-11T07:25:47Z | Serial Contrastive Knowledge Distillation for Continual Few-shot
Relation Extraction | [
"Xinyi Wang",
"Zitao Wang",
"Wei Hu"
] | Continual few-shot relation extraction (RE) aims to continuously train a
model for new relations with few labeled training data, of which the major
challenges are the catastrophic forgetting of old relations and the overfitting
caused by data sparsity. In this paper, we propose a new model, namely SCKD, to
accomplish the continual few-shot RE task. Specifically, we design serial
knowledge distillation to preserve the prior knowledge from previous models and
conduct contrastive learning with pseudo samples to keep the representations of
samples in different relations sufficiently distinguishable. Our experiments on
two benchmark datasets validate the effectiveness of SCKD for continual
few-shot RE and its superiority in knowledge transfer and memory utilization
over state-of-the-art models. | [
"cs.CL"
] | false |
2305.06620 | 2023-05-11T07:32:20Z | Improving Continual Relation Extraction by Distinguishing Analogous
Semantics | [
"Wenzheng Zhao",
"Yuanning Cui",
"Wei Hu"
] | Continual relation extraction (RE) aims to learn constantly emerging
relations while avoiding forgetting the learned relations. Existing works store
a small number of typical samples to re-train the model for alleviating
forgetting. However, repeatedly replaying these samples may cause the
overfitting problem. We conduct an empirical study on existing works and
observe that their performance is severely affected by analogous relations. To
address this issue, we propose a novel continual extraction model for analogous
relations. Specifically, we design memory-insensitive relation prototypes and
memory augmentation to overcome the overfitting problem. We also introduce
integrated training and focal knowledge distillation to enhance the performance
on analogous relations. Experimental results show the superiority of our model
and demonstrate its effectiveness in distinguishing analogous relations and
overcoming overfitting. | [
"cs.CL"
] | false |
2305.06747 | 2023-05-11T12:10:20Z | The First Parallel Corpora for Kurdish Sign Language | [
"Zina Kamal",
"Hossein Hassani"
] | Kurdish Sign Language (KuSL) is the natural language of the Kurdish Deaf
people. We work on automatic translation between spoken Kurdish and KuSL. Sign
languages evolve rapidly and follow grammatical rules that differ from spoken
languages. Consequently, those differences should be considered during any
translation. We proposed an avatar-based automatic translation of Kurdish texts
in the Sorani (Central Kurdish) dialect into Kurdish Sign Language. We
developed the first parallel corpora for that pair that we use to train a
Statistical Machine Translation (SMT) engine. We tested the outcome
understandability and evaluated it using the Bilingual Evaluation Understudy
(BLEU). Results showed 53.8% accuracy. Compared to the previous experiments in
the field, the result is considerably higher. We suspect the reason to be the
structural similarity between the two languages. We plan to make the
resources publicly available under CC BY-NC-SA 4.0 license on the Kurdish-BLARK
(https://kurdishblark.github.io/). | [
"cs.CL"
] | false |
2305.06801 | 2023-05-11T13:42:58Z | Detecting Idiomatic Multiword Expressions in Clinical Terminology using
Definition-Based Representation Learning | [
"François Remy",
"Alfiya Khabibullina",
"Thomas Demeester"
] | This paper shines a light on the potential of definition-based semantic
models for detecting idiomatic and semi-idiomatic multiword expressions (MWEs)
in clinical terminology. Our study focuses on biomedical entities defined in
the UMLS ontology and aims to help prioritize the translation efforts of these
entities. In particular, we develop an effective tool for scoring the
idiomaticity of biomedical MWEs based on the degree of similarity between the
semantic representations of those MWEs and a weighted average of the
representation of their constituents. We achieve this using a biomedical
language model trained to produce similar representations for entity names and
their definitions, called BioLORD. The importance of this definition-based
approach is highlighted by comparing the BioLORD model to two other
state-of-the-art biomedical language models based on Transformer: SapBERT and
CODER. Our results show that the BioLORD model has a strong ability to identify
idiomatic MWEs, not replicated in other models. Our corpus-free idiomaticity
estimation helps ontology translators to focus on more challenging MWEs. | [
"cs.CL"
] | false |
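A hedged sketch of the scoring rule above: compare the embedding of a full MWE against a weighted average of its constituents' embeddings, and read low similarity as idiomaticity. The checkpoint id below is an assumed public BioLORD release and the uniform constituent weighting is an illustrative simplification.

```python
# Hedged sketch of definition-based idiomaticity scoring. The model id below
# is an assumption; substitute the actual released BioLORD checkpoint.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("FremyCompany/BioLORD-STAMB2-v1")  # assumed checkpoint id

def idiomaticity(mwe: str, weights=None) -> float:
    tokens = mwe.split()
    emb = model.encode([mwe] + tokens)  # row 0: full MWE, rows 1+: constituents
    w = np.full(len(tokens), 1.0 / len(tokens)) if weights is None else np.asarray(weights)
    avg = (w[:, None] * emb[1:]).sum(axis=0)
    cos = emb[0] @ avg / (np.linalg.norm(emb[0]) * np.linalg.norm(avg) + 1e-12)
    return 1.0 - float(cos)  # far from constituents => more idiomatic

print(idiomaticity("heart failure"), idiomaticity("red blood cell"))
```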
2305.06818 | 2023-05-11T14:12:55Z | Towards a Computational Analysis of Suspense: Detecting Dangerous
Situations | [
"Albin Zehe",
"Julian Schröter",
"Andreas Hotho"
] | Suspense is an important tool in storytelling to keep readers engaged and
wanting to read more. However, it has so far not been studied extensively in
Computational Literary Studies. In this paper, we focus on one of the elements
authors can use to build up suspense: dangerous situations. We introduce a
corpus of texts annotated with dangerous situations, distinguishing between 7
types of danger. Additionally, we annotate parts of the text that describe fear
experienced by a character, regardless of the actual presence of danger. We
present experiments towards the automatic detection of these situations,
finding that unsupervised baseline methods can provide valuable signals for the
detection, but more complex methods are necessary for further analysis. Not
unexpectedly, the description of danger and fear often relies heavily on the
context, both local (e.g., situations where danger is only mentioned, but not
actually present) and global (e.g., "storm" being used in a literal sense in an
adventure novel, but metaphorically in a romance novel). | [
"cs.CL",
"I.2.7"
] | false |
2305.06892 | 2023-05-11T15:29:04Z | IUST_NLP at SemEval-2023 Task 10: Explainable Detecting Sexism with
Transformers and Task-adaptive Pretraining | [
"Hadiseh Mahmoudi"
] | This paper describes our system on SemEval-2023 Task 10: Explainable
Detection of Online Sexism (EDOS). This work aims to design an automatic system
for detecting and classifying sexist content in online spaces. We propose a set
of transformer-based pre-trained models with task-adaptive pretraining and
ensemble learning. The main contributions of our system include analyzing the
performance of different transformer-based pre-trained models and combining
these models, as well as providing an efficient method using large amounts of
unlabeled data for model adaptive pretraining. We have also explored several
other strategies. On the test dataset, our system achieves F1-scores of 83%,
64%, and 47% on subtasks A, B, and C, respectively. | [
"cs.CL"
] | false |
2305.07005 | 2023-05-11T17:44:29Z | Subword Segmental Machine Translation: Unifying Segmentation and Target
Sentence Generation | [
"Francois Meyer",
"Jan Buys"
] | Subword segmenters like BPE operate as a preprocessing step in neural machine
translation and other (conditional) language models. They are applied to
datasets before training, so translation or text generation quality relies on
the quality of segmentations. We propose a departure from this paradigm, called
subword segmental machine translation (SSMT). SSMT unifies subword segmentation
and MT in a single trainable model. It learns to segment target sentence words
while jointly learning to generate target sentences. To use SSMT during
inference we propose dynamic decoding, a text generation algorithm that adapts
segmentations as it generates translations. Experiments across 6 translation
directions show that SSMT improves chrF scores for morphologically rich
agglutinative languages. Gains are strongest in the very low-resource scenario.
SSMT also learns subwords that are closer to morphemes compared to baselines
and proves more robust on a test set constructed for evaluating morphological
compositional generalisation. | [
"cs.CL"
] | false |
2305.07016 | 2023-05-11T17:55:45Z | A General-Purpose Multilingual Document Encoder | [
"Onur Galoğlu",
"Robert Litschko",
"Goran Glavaš"
] | Massively multilingual pretrained transformers (MMTs) have tremendously
pushed the state of the art on multilingual NLP and cross-lingual transfer of
NLP models in particular. While a large body of work leveraged MMTs to mine
parallel data and induce bilingual document embeddings, much less effort has
been devoted to training a general-purpose (massively) multilingual document
encoder that can be used for both supervised and unsupervised document-level
tasks. In this work, we pretrain a massively multilingual document encoder as a
hierarchical transformer model (HMDE) in which a shallow document transformer
contextualizes sentence representations produced by a state-of-the-art
pretrained multilingual sentence encoder. We leverage Wikipedia as a readily
available source of comparable documents for creating training data, and train
HMDE by means of a cross-lingual contrastive objective, further exploiting the
category hierarchy of Wikipedia for the creation of difficult negatives. We
evaluate the effectiveness of HMDE in two arguably most common and prominent
cross-lingual document-level tasks: (1) cross-lingual transfer for topical
document classification and (2) cross-lingual document retrieval. HMDE is
significantly more effective than (i) aggregations of segment-based
representations and (ii) multilingual Longformer. Crucially, owing to its
massively multilingual lower transformer, HMDE successfully generalizes to
languages unseen in document-level pretraining. We publicly release our code
and models at
https://github.com/ogaloglu/pre-training-multilingual-document-encoders. | [
"cs.CL"
] | false |
2305.07085 | 2023-05-11T18:48:18Z | Enhancing Contrastive Learning with Noise-Guided Attack: Towards
Continual Relation Extraction in the Wild | [
"Ting Wu",
"Jingyi Liu",
"Rui Zheng",
"Qi Zhang",
"Tao Gui",
"Xuanjing Huang"
] | The principle of continual relation extraction~(CRE) involves adapting to
emerging novel relations while preserving old knowledge. While current endeavors
in CRE succeed in preserving old knowledge, they tend to fail when exposed to
contaminated data streams. We assume this is attributed to their reliance on an
artificial hypothesis that the data stream has no annotation errors, which
hinders real-world applications for CRE. Considering the ubiquity of noisy
labels in real-world datasets, in this paper, we formalize a more practical
learning scenario, termed as \textit{noisy-CRE}. Building upon this challenging
setting, we develop a noise-resistant contrastive framework named as
\textbf{N}oise-guided \textbf{a}ttack in \textbf{C}ontrastive
\textbf{L}earning~(NaCL) to learn incremental corrupted relations. Compared to
direct noise discarding or inaccessible noise relabeling, we show that modifying
the feature space to match the given noisy labels via attacks can better
enrich contrastive representations. Extensive empirical validations highlight
that NaCL can achieve consistent performance improvements with increasing noise
rates, outperforming state-of-the-art baselines. | [
"cs.CL"
] | false |
2305.07151 | 2023-05-11T21:41:41Z | Overinformative Question Answering by Humans and Machines | [
"Polina Tsvilodub",
"Michael Franke",
"Robert D. Hawkins",
"Noah D. Goodman"
] | When faced with a polar question, speakers often provide overinformative
answers going beyond a simple "yes" or "no". But what principles guide the
selection of additional information? In this paper, we provide experimental
evidence from two studies suggesting that overinformativeness in human
answering is driven by considerations of relevance to the questioner's goals
which they flexibly adjust given the functional context in which the question
is uttered. We take these human results as a strong benchmark for investigating
question-answering performance in state-of-the-art neural language models,
conducting an extensive evaluation on items from human experiments. We find
that most models fail to adjust their answering behavior in a human-like way
and tend to include irrelevant information. We show that GPT-3 is highly
sensitive to the form of the prompt and only achieves human-like answer
patterns when guided by an example and cognitively-motivated explanation. | [
"cs.CL"
] | false |
2305.10436 | 2023-05-11T20:58:10Z | SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and
Visual Cues | [
"Jaewook Lee",
"Andrew Lan"
] | In second language vocabulary learning, existing works have primarily focused
on either the learning interface or scheduling personalized retrieval practices
to maximize memory retention. However, the learning content, i.e., the
information presented on flashcards, has mostly remained constant. Keyword
mnemonic is a notable learning strategy that relates new vocabulary to existing
knowledge by building an acoustic and imagery link using a keyword that sounds
alike. Beyond that, producing verbal and visual cues associated with the
keyword to facilitate building these links requires a manual process and is not
scalable. In this paper, we explore an opportunity to use large language models
to automatically generate verbal and visual cues for keyword mnemonics. Our
approach, an end-to-end pipeline for auto-generating verbal and visual cues,
yields highly memorable cues. We investigate the
effectiveness of our approach via a human participant experiment by comparing
it with manually generated cues. | [
"cs.CL"
] | false |
2305.13317 | 2023-05-11T17:20:49Z | A Novel Dataset Towards Extracting Virus-Host Interactions | [
"Rasha Alshawi",
"Atriya Sen",
"Nathan S. Upham",
"Beckett Sterner"
] | We describe a novel dataset for the automated recognition of named taxonomic
and other entities relevant to the association of viruses with their hosts. We
further describe some initial results using pre-trained models on the
named-entity recognition (NER) task on this novel dataset. We propose that our
dataset of manually annotated abstracts now offers a Gold Standard Corpus for
training future NER models in the automated extraction of host-pathogen
detection methods from scientific publications, and further explain how our
work takes first steps towards predicting the important human health-related
concept of viral spillover risk automatically from the scientific literature. | [
"cs.CL"
] | false |
2306.01743 | 2023-05-11T14:34:08Z | Abugida Normalizer and Parser for Unicode texts | [
"Nazmuddoha Ansary",
"Quazi Adibur Rahman Adib",
"Tahsin Reasat",
"Sazia Mehnaz",
"Asif Shahriyar Sushmit",
"Ahmed Imtiaz Humayun",
"Mohammad Mamun Or Rashid",
"Farig Sadeque"
] | This paper proposes two libraries to address common and uncommon issues with
Unicode-based writing schemes for Indic languages. The first is a normalizer
that corrects inconsistencies caused by the encoding scheme
https://pypi.org/project/bnunicodenormalizer/ . The second is a grapheme parser
for Abugida text https://pypi.org/project/indicparser/ . Both tools are more
efficient and effective than previously used tools. We report a 400% increase in
speed and significantly better performance on different language-model-based
downstream tasks. | [
"cs.CL"
] | false |
2305.06522 | 2023-05-11T01:50:16Z | Randomized Smoothing with Masked Inference for Adversarially Robust Text
Classifications | [
"Han Cheol Moon",
"Shafiq Joty",
"Ruochen Zhao",
"Megh Thakkar",
"Xu Chi"
] | Large-scale pre-trained language models have shown outstanding performance in
a variety of NLP tasks. However, they are also known to be significantly
brittle against specifically crafted adversarial examples, leading to
increasing interest in probing the adversarial robustness of NLP systems. We
introduce RSMI, a novel two-stage framework that combines randomized smoothing
(RS) with masked inference (MI) to improve the adversarial robustness of NLP
systems. RS transforms a classifier into a smoothed classifier to obtain robust
representations, whereas MI forces a model to exploit the surrounding context
of a masked token in an input sequence. RSMI improves adversarial robustness by
2 to 3 times over existing state-of-the-art methods on benchmark datasets. We
also perform in-depth qualitative analysis to validate the effectiveness of the
different stages of RSMI and probe the impact of its components through
extensive ablations. By empirically proving the stability of RSMI, we put it
forward as a practical method to robustly train large-scale NLP models. Our
code and datasets are available at https://github.com/Han8931/rsmi_nlp | [
"cs.CL",
"cs.AI"
] | false |
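A hedged sketch of the inference-time side of such a pipeline: vote over a classifier's predictions on randomly masked copies of the input. The masking rate, sample count, and majority vote are illustrative assumptions rather than RSMI's exact procedure.

```python
# Hedged sketch: smoothed prediction by majority vote over masked copies of
# the input. The toy classifier only illustrates the voting mechanics.
import random
from collections import Counter

def smoothed_predict(classify, tokens, mask_token="[MASK]",
                     p_mask=0.15, n_samples=25, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [mask_token if rng.random() < p_mask else t for t in tokens]
        votes[classify(noisy)] += 1
    return votes.most_common(1)[0][0]

toy = lambda toks: "pos" if toks.count("good") > toks.count("bad") else "neg"
print(smoothed_predict(toy, "a good movie with good acting , not bad".split()))
```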
2305.06545 | 2023-05-11T03:21:56Z | GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark | [
"Dongyang Li",
"Ruixue Ding",
"Qiang Zhang",
"Zheng Li",
"Boli Chen",
"Pengjun Xie",
"Yao Xu",
"Xin Li",
"Ning Guo",
"Fei Huang",
"Xiaofeng He"
] | With the fast-developing pace of geographic applications, automatable and
intelligent models need to be designed to handle the large volume of
information. However, few researchers focus on geographic natural language
processing, and there has never been a benchmark to build a unified standard.
In this work, we propose a GeoGraphic Language Understanding Evaluation
benchmark, named GeoGLUE. We collect data from open-released geographic
resources and introduce six natural language understanding tasks, including
geographic textual similarity on recall, geographic textual similarity on
rerank, geographic elements tagging, geographic composition analysis,
geographic where what cut, and geographic entity alignment. We also provide
evaluation experiments and analysis of general baselines, indicating the
effectiveness and significance of the GeoGLUE benchmark. | [
"cs.CL",
"cs.AI"
] | false |
2305.06574 | 2023-05-11T05:17:54Z | A Fused Gromov-Wasserstein Framework for Unsupervised Knowledge Graph
Entity Alignment | [
"Jianheng Tang",
"Kangfei Zhao",
"Jia Li"
] | Entity alignment is the task of identifying corresponding entities across
different knowledge graphs (KGs). Although recent embedding-based entity
alignment methods have shown significant advancements, they still struggle to
fully utilize KG structural information. In this paper, we introduce FGWEA, an
unsupervised entity alignment framework that leverages the Fused
Gromov-Wasserstein (FGW) distance, allowing for a comprehensive comparison of
entity semantics and KG structures within a joint optimization framework. To
address the computational challenges associated with optimizing FGW, we devise
a three-stage progressive optimization algorithm. It starts with a basic
semantic embedding matching, proceeds to approximate cross-KG structural and
relational similarity matching based on iterative updates of high-confidence
entity links, and ultimately culminates in a global structural comparison
between KGs. We perform extensive experiments on four entity alignment datasets
covering 14 distinct KGs across five languages. Without any supervision or
hyper-parameter tuning, FGWEA surpasses 21 competitive baselines, including
cutting-edge supervised entity alignment methods. Our code is available at
https://github.com/squareRoot3/FusedGW-Entity-Alignment. | [
"cs.CL",
"cs.AI"
] | false |
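To ground the FGW objective above, a small sketch with the POT (Python Optimal Transport) library: two toy graphs whose node features and intra-graph distance matrices play the roles of entity semantics and KG structure. This illustrates the distance the framework optimizes, not FGWEA's three-stage algorithm.

```python
# Hedged sketch: aligning two toy "KGs" with the Fused Gromov-Wasserstein
# distance via POT. KG2 is a permuted, noisy copy of KG1, so the transport
# plan should recover the permutation.
import numpy as np
import ot

rng = np.random.default_rng(0)
n, d = 6, 4
feat1 = rng.normal(size=(n, d))  # entity embeddings of KG1
perm = rng.permutation(n)
feat2 = feat1[perm] + 0.01 * rng.normal(size=(n, d))

C1 = ot.dist(feat1, feat1)       # intra-KG structure matrices
C2 = ot.dist(feat2, feat2)
M = ot.dist(feat1, feat2)        # cross-KG semantic cost
p, q = ot.unif(n), ot.unif(n)

T = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q,
                                       loss_fun="square_loss", alpha=0.5)
print(T.argmax(axis=1))          # recovered alignment
print(np.argsort(perm))          # ground-truth alignment
```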
2305.06683 | 2023-05-11T09:40:24Z | Cost-efficient Crowdsourcing for Span-based Sequence Labeling: Worker
Selection and Data Augmentation | [
"Yujie Wang",
"Chao Huang",
"Liner Yang",
"Zhixuan Fang",
"Yaping Huang",
"Yang Liu",
"Erhong Yang"
] | This paper introduces a novel worker selection algorithm, enhancing
annotation quality and reducing costs in challenging span-based sequence
labeling tasks in Natural Language Processing (NLP). Unlike previous studies
targeting simpler tasks, this study contends with the complexities of label
interdependencies in sequence labeling tasks. The proposed algorithm utilizes a
Combinatorial Multi-Armed Bandit (CMAB) approach for worker selection. The
challenge of dealing with imbalanced and small-scale datasets, which hinders
offline simulation of worker selection, is tackled using an innovative data
augmentation method termed shifting, expanding, and shrinking (SES). The SES
method is designed specifically for sequence labeling tasks. Rigorous testing
on CoNLL 2003 NER and Chinese OEI datasets showcased the algorithm's
efficiency, with an F1 score reaching up to 100.04% of the expert-only
baseline, alongside cost savings of up to 65.97%. The paper also encompasses a
dataset-independent test emulating annotation evaluation through a Bernoulli
distribution, which still led to an impressive 97.56% of the expert baseline's
F1 score and 59.88% cost savings. This research addresses and overcomes
numerous obstacles in worker selection for complex NLP tasks. | [
"cs.CL",
"cs.AI"
] | false |
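A hedged sketch of the combinatorial bandit template underlying this kind of worker selection: each round, choose the k workers with the highest upper confidence bounds on estimated annotation quality. The Bernoulli reward model and exploration constant are illustrative assumptions; the paper's CMAB algorithm and its SES augmentation are more involved.

```python
# Hedged sketch: CUCB-style worker selection with Bernoulli quality feedback.
import numpy as np

def select_workers(rounds=200, n_workers=10, k=3, seed=0):
    rng = np.random.default_rng(seed)
    true_quality = rng.uniform(0.5, 0.95, n_workers)  # unknown per-worker quality
    counts = np.zeros(n_workers)
    means = np.zeros(n_workers)
    for t in range(1, rounds + 1):
        ucb = means + np.sqrt(1.5 * np.log(t + 1) / np.maximum(counts, 1))
        ucb[counts == 0] = np.inf                     # try every worker at least once
        chosen = np.argsort(ucb)[-k:]
        for w in chosen:                              # observe noisy quality feedback
            reward = rng.binomial(1, true_quality[w])
            counts[w] += 1
            means[w] += (reward - means[w]) / counts[w]
    return np.argsort(means)[-k:], np.argsort(true_quality)[-k:]

estimated_best, true_best = select_workers()
print(sorted(estimated_best), sorted(true_best))
```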