arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---
2305.06812 | 2023-05-11T14:08:53Z | THUIR@COLIEE 2023: Incorporating Structural Knowledge into Pre-trained
Language Models for Legal Case Retrieval | [
"Haitao Li",
"Weihang Su",
"Changyue Wang",
"Yueyue Wu",
"Qingyao Ai",
"Yiqun Liu"
] | Legal case retrieval techniques play an essential role in modern intelligent
legal systems. As a well-known annual international competition, COLIEE aims to
advance the state of the art in retrieval models for legal texts. This
paper summarizes the approach of the championship team THUIR in COLIEE 2023. To
be specific, we design structure-aware pre-trained language models to enhance
the understanding of legal cases. Furthermore, we propose heuristic
pre-processing and post-processing approaches to reduce the influence of
irrelevant messages. In the end, learning-to-rank methods are employed to merge
features with different dimensions. Experimental results demonstrate the
superiority of our proposal. Official results show that our run has the best
performance among all submissions. The implementation of our method can be
found at https://github.com/CSHaitao/THUIR-COLIEE2023. | [
"cs.IR",
"cs.CL"
] | false |
2305.06817 | 2023-05-11T14:11:48Z | THUIR@COLIEE 2023: More Parameters and Legal Knowledge for Legal Case
Entailment | [
"Haitao Li",
"Changyue Wang",
"Weihang Su",
"Yueyue Wu",
"Qingyao Ai",
"Yiqun Liu"
] | This paper describes the approach of the THUIR team at the COLIEE 2023 Legal
Case Entailment task. This task requires the participant to identify a specific
paragraph from a given supporting case that entails the decision for the query
case. We try traditional lexical matching methods and pre-trained language
models with different sizes. Furthermore, learning-to-rank methods are employed
to further improve performance. However, learning-to-rank is not very robust on
this task, which suggests that answer passages cannot simply be determined with
information retrieval techniques. Experimental results show that more
parameters and legal knowledge contribute to the legal case entailment task.
Finally, we get the third place in COLIEE 2023. The implementation of our
method can be found at https://github.com/CSHaitao/THUIR-COLIEE2023. | [
"cs.CL",
"cs.IR"
] | false |
2305.06993 | 2023-05-11T17:29:47Z | SMATCH++: Standardized and Extended Evaluation of Semantic Graphs | [
"Juri Opitz"
] | The Smatch metric is a popular method for evaluating graph distances, as is
necessary, for instance, to assess the performance of semantic graph parsing
systems. However, we observe some issues in the metric that jeopardize
meaningful evaluation. E.g., opaque pre-processing choices can affect results,
and current graph-alignment solvers do not provide us with upper-bounds.
Without upper-bounds, however, fair evaluation is not guaranteed. Furthermore,
adaptions of Smatch for extended tasks (e.g., fine-grained semantic similarity)
are spread out, and lack a unifying framework.
For better inspection, we divide the metric into three modules:
pre-processing, alignment, and scoring. Examining each module, we specify its
goals and diagnose potential issues, for which we discuss and test mitigation
strategies. For pre-processing, we show how to fully conform to annotation
guidelines that allow structurally deviating but valid graphs. For safer and
enhanced alignment, we show the feasibility of optimal alignment in a standard
evaluation setup, and develop a lossless graph compression method that shrinks
the search space and significantly increases efficiency. For improved scoring,
we propose standardized and extended metric calculation of fine-grained
sub-graph meaning aspects. Our code is available at
https://github.com/flipz357/smatchpp | [
"cs.CL",
"cs.AI"
] | false |
2305.07001 | 2023-05-11T17:39:07Z | Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach | [
"Junjie Zhang",
"Ruobing Xie",
"Yupeng Hou",
"Wayne Xin Zhao",
"Leyu Lin",
"Ji-Rong Wen"
] | In the past decades, recommender systems have attracted much attention in
both research and industry communities, and a large number of studies have been
devoted to developing effective recommendation models. Generally speaking,
these models mainly learn the underlying user preference from historical
behavior data, and then estimate the user-item matching relationships for
recommendations. Inspired by the recent progress on large language models
(LLMs), we take a different approach to developing the recommendation models,
considering recommendation as instruction following by LLMs. The key idea is
that the preferences or needs of a user can be expressed in natural language
descriptions (called instructions), so that LLMs can understand and further
execute the instruction for fulfilling the recommendation task. Instead of
using public APIs of LLMs, we instruction tune an open-source LLM (3B
Flan-T5-XL), in order to better adapt LLMs to recommender systems. For this
purpose, we first design a general instruction format for describing the
preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a
large amount of user-personalized instruction data (252K instructions) with
varying types of preferences and intentions. To demonstrate the effectiveness
of our approach, we instantiate the instruction templates into several
widely-studied recommendation (or search) tasks, and conduct extensive
experiments on these tasks with real-world datasets. Experiment results show
that the proposed approach can outperform several competitive baselines,
including the powerful GPT-3.5, on these evaluation tasks. Our approach sheds
light on developing more user-friendly recommender systems, in which users can
freely communicate with the system and obtain more accurate recommendations via
natural language instructions. | [
"cs.IR",
"cs.CL"
] | false |
2305.07157 | 2023-05-11T22:07:27Z | Exploring Zero and Few-shot Techniques for Intent Classification | [
"Soham Parikh",
"Quaizar Vohra",
"Prashil Tumbade",
"Mitul Tiwari"
] | Conversational NLU providers often need to scale to thousands of
intent-classification models where new customers often face the cold-start
problem. Scaling to so many customers puts a constraint on storage space as
well. In this paper, we explore four different zero and few-shot intent
classification approaches with this low-resource constraint: 1) domain
adaptation, 2) data augmentation, 3) zero-shot intent classification using
descriptions with large language models (LLMs), and 4) parameter-efficient
fine-tuning of instruction-finetuned language models. Our results show that all
these approaches are effective to different degrees in low-resource settings.
Parameter-efficient fine-tuning using the T-Few recipe (Liu et al., 2022) on
Flan-T5 (Chang et al., 2022) yields the best performance even with just one
sample per intent. We also show that the zero-shot method of prompting LLMs
using intent descriptions | [
"cs.CL",
"cs.AI"
] | false |
2305.10433 | 2023-05-11T11:56:42Z | Toxicity Inspector: A Framework to Evaluate Ground Truth in Toxicity
Detection Through Feedback | [
"Huriyyah Althunayan",
"Rahaf Bahlas",
"Manar Alharbi",
"Lena Alsuwailem",
"Abeer Aldayel",
"Rehab ALahmadi"
] | Toxic language is difficult to define, as it is not monolithic and has many
variations in perceptions of toxicity. This challenge of detecting toxic
language is increased by the highly contextual and subjective nature of its
interpretation, which can degrade the reliability of datasets and negatively
affect detection model performance. To fill this void, this paper introduces a
toxicity inspector framework that incorporates a human-in-the-loop pipeline
with the aim of enhancing the reliability of toxicity benchmark datasets by
centering the evaluator's values through an iterative feedback cycle. The
centerpiece of this framework is the iterative feedback process, which is
guided by two metric types (hard and soft) that provide evaluators and dataset
creators with insightful examination to balance the tradeoff between
performance gains and toxicity avoidance. | [
"cs.CL",
"cs.SI"
] | false |
2305.06530 | 2023-05-11T02:29:53Z | How Good are Commercial Large Language Models on African Languages? | [
"Jessica Ojo",
"Kelechi Ogueji"
] | Recent advancements in Natural Language Processing (NLP) have led to the
proliferation of large pretrained language models. These models have been shown
to yield good performance, using in-context learning, even on unseen tasks and
languages. They have also been exposed as commercial APIs as a form of
language-model-as-a-service, with great adoption. However, their performance on
African languages is largely unknown. We present a preliminary analysis of
commercial large language models on two tasks (machine translation and text
classification) across eight African languages, spanning different language
families and geographical areas. Our results suggest that commercial language
models produce below-par performance on African languages. We also find that
they perform better on text classification than machine translation. In
general, our findings present a call-to-action to ensure African languages are
well represented in commercial large language models, given their growing
popularity. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.06555 | 2023-05-11T04:19:08Z | Domain Incremental Lifelong Learning in an Open World | [
"Yi Dai",
"Hao Lang",
"Yinhe Zheng",
"Bowen Yu",
"Fei Huang",
"Yongbin Li"
] | Lifelong learning (LL) is an important ability for NLP models to learn new
tasks continuously. Architecture-based approaches are reported to be effective
implementations for LL models. However, it is non-trivial to extend previous
approaches to domain incremental LL scenarios since they either require access
to task identities in the testing phase or cannot handle samples from unseen
tasks. In this paper, we propose \textbf{Diana}: a
\underline{d}ynam\underline{i}c \underline{a}rchitecture-based
lifelo\underline{n}g le\underline{a}rning model that tries to learn a sequence
of tasks with a prompt-enhanced language model. Four types of hierarchically
organized prompts are used in Diana to capture knowledge from different
granularities. Specifically, we dedicate task-level prompts to capture
task-specific knowledge to retain high LL performance and maintain
instance-level prompts to learn knowledge shared across input samples to
improve the model's generalization performance. Moreover, we dedicate separate
prompts to explicitly model unseen tasks and introduce a set of prompt key
vectors to facilitate knowledge sharing between tasks. Extensive experiments
demonstrate that Diana outperforms state-of-the-art LL models, especially in
handling unseen tasks. We release the code and data at
\url{https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana}. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | true |
2305.06557 | 2023-05-11T04:28:58Z | Long-Tailed Question Answering in an Open World | [
"Yi Dai",
"Hao Lang",
"Yinhe Zheng",
"Fei Huang",
"Yongbin Li"
] | Real-world data often have an open long-tailed distribution, and building a
unified QA model supporting various tasks is vital for practical QA
applications. However, it is non-trivial to extend previous QA approaches since
they either require access to seen tasks with adequate samples or do not
explicitly model samples from unseen tasks. In this paper, we define Open
Long-Tailed QA (OLTQA) as learning from long-tailed distributed data and
optimizing performance over seen and unseen QA tasks. We propose an OLTQA model
that encourages knowledge sharing between head, tail and unseen tasks, and
explicitly mines knowledge from a large pre-trained language model (LM).
Specifically, we organize our model through a pool of fine-grained components
and dynamically combine these components for an input to facilitate knowledge
sharing. A retrieve-then-rerank framework is further introduced to select
in-context examples, which guide the LM to generate text that expresses knowledge
for QA tasks. Moreover, a two-stage training approach is introduced to
pre-train the framework by knowledge distillation (KD) from the LM and then
jointly train the framework and a QA model through an adaptive mutual KD method. On
a large-scale OLTQA dataset we curate from 43 existing QA datasets, our model
consistently outperforms the state-of-the-art. We release the code and data at
\url{https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/oltqa}. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.06897 | 2023-05-11T15:34:53Z | AfriQA: Cross-lingual Open-Retrieval Question Answering for African
Languages | [
"Odunayo Ogundepo",
"Tajuddeen R. Gwadabe",
"Clara E. Rivera",
"Jonathan H. Clark",
"Sebastian Ruder",
"David Ifeoluwa Adelani",
"Bonaventure F. P. Dossou",
"Abdou Aziz DIOP",
"Claytone Sikasote",
"Gilles Hacheme",
"Happy Buzaaba",
"Ignatius Ezeani",
"Rooweither Mabuya",
"Salomey Osei",
"Chris Emezue",
"Albert Njoroge Kahira",
"Shamsuddeen H. Muhammad",
"Akintunde Oladipo",
"Abraham Toluwase Owodunni",
"Atnafu Lambebo Tonja",
"Iyanuoluwa Shode",
"Akari Asai",
"Tunde Oluwaseyi Ajayi",
"Clemencia Siro",
"Steven Arthur",
"Mofetoluwa Adeyemi",
"Orevaoghene Ahia",
"Anuoluwapo Aremu",
"Oyinkansola Awosan",
"Chiamaka Chukwuneke",
"Bernard Opoku",
"Awokoya Ayodele",
"Verrah Otiende",
"Christine Mwase",
"Boyd Sinkala",
"Andre Niyongabo Rubungo",
"Daniel A. Ajisafe",
"Emeka Felix Onwuegbuzia",
"Habib Mbow",
"Emile Niyomutabazi",
"Eunice Mukonde",
"Falalu Ibrahim Lawan",
"Ibrahim Said Ahmad",
"Jesujoba O. Alabi",
"Martin Namukombo",
"Mbonu Chinedu",
"Mofya Phiri",
"Neo Putini",
"Ndumiso Mngoma",
"Priscilla A. Amuok",
"Ruqayya Nasir Iro",
"Sonia Adhiambo"
] | African languages have far less in-language content available digitally,
making it challenging for question answering systems to satisfy the information
needs of users. Cross-lingual open-retrieval question answering (XOR QA)
systems -- those that retrieve answer content from other languages while
serving people in their native language -- offer a means of filling this gap.
To this end, we create AfriQA, the first cross-lingual QA dataset with a focus
on African languages. AfriQA includes 12,000+ XOR QA examples across 10 African
languages. While previous datasets have focused primarily on languages where
cross-lingual QA augments coverage from the target language, AfriQA focuses on
languages where cross-lingual answer content is the only high-coverage source
of answer content. Because of this, we argue that African languages are one of
the most important and realistic use cases for XOR QA. Our experiments
demonstrate the poor performance of automatic translation and multilingual
retrieval methods. Overall, AfriQA proves challenging for state-of-the-art QA
models. We hope that the dataset enables the development of more equitable QA
technology. | [
"cs.CL",
"cs.AI",
"cs.IR"
] | false |
2305.07095 | 2023-05-11T19:01:13Z | Are Machine Rationales (Not) Useful to Humans? Measuring and Improving
Human Utility of Free-Text Rationales | [
"Brihi Joshi",
"Ziyi Liu",
"Sahana Ramnath",
"Aaron Chan",
"Zhewei Tong",
"Shaoliang Nie",
"Qifan Wang",
"Yejin Choi",
"Xiang Ren"
] | Among the remarkable emergent capabilities of large language models (LMs) is
free-text rationalization; beyond a certain scale, large LMs are capable of
generating seemingly useful rationalizations, which, in turn, can dramatically
enhance their performances on leaderboards. This phenomenon raises a question:
can machine generated rationales also be useful for humans, especially when lay
humans try to answer questions based on those machine rationales? We observe
that human utility of existing rationales is far from satisfactory, and
expensive to estimate with human studies. Existing metrics like task
performance of the LM generating the rationales, or similarity between
generated and gold rationales are not good indicators of their human utility.
While we observe that certain properties of rationales like conciseness and
novelty are correlated with their human utility, estimating them without human
involvement is challenging. We show that, by estimating a rationale's
helpfulness in answering similar unseen instances, we can measure its human
utility to a better extent. We also translate this finding into an automated
score, GEN-U, that we propose, which can help improve LMs' ability to generate
rationales with better human utility, while maintaining most of its task
performance. Lastly, we release all code and collected data with this project. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.06523 | 2023-05-11T01:54:45Z | A fast topological approach for predicting anomalies in time-varying
graphs | [
"Umar Islambekov",
"Hasani Pathirana",
"Omid Khormali",
"Cuneyt Akcora",
"Ekaterina Smirnova"
] | Large time-varying graphs are increasingly common in financial, social and
biological settings. Feature extraction that efficiently encodes the complex
structure of sparse, multi-layered, dynamic graphs presents computational and
methodological challenges. In the past decade, a persistence diagram (PD) from
topological data analysis (TDA) has become a popular descriptor of shape of
data with a well-defined distance between points. However, applications of TDA
to graphs, where there is no intrinsic concept of distance between the nodes,
remain largely unexplored. This paper addresses this gap in the literature by
introducing a computationally efficient framework to extract shape information
from graph data. Our framework has two main steps: first, we compute a PD using
the so-called lower-star filtration which utilizes quantitative node
attributes, and then vectorize it by averaging the associated Betti function
over successive scale values on a one-dimensional grid. Our approach avoids
embedding a graph into a metric space and has stability properties against
input noise. In simulation studies, we show that the proposed vector summary
leads to improved change point detection rate in time-varying graphs. In a real
data application, our approach provides up to 22% gain in anomalous price
prediction for the Ethereum cryptocurrency transaction networks. | [
"cs.LG"
] | false |
2305.06624 | 2023-05-11T07:43:40Z | Matrix tri-factorization over the tropical semiring | [
"Amra Omanović",
"Polona Oblak",
"Tomaž Curk"
] | Tropical semiring has proven successful in several research areas, including
optimal control, bioinformatics, discrete event systems, or solving a decision
problem. In previous studies, a matrix two-factorization algorithm based on the
tropical semiring has been applied to investigate bipartite and tripartite
networks. Tri-factorization algorithms based on standard linear algebra are
used for solving tasks such as data fusion, co-clustering, matrix completion,
community detection, and more. However, there is currently no tropical matrix
tri-factorization approach, which would allow for the analysis of multipartite
networks with a high number of parts. To address this, we propose the
triFastSTMF algorithm, which performs tri-factorization over the tropical
semiring. We apply it to analyze a four-partition network structure and recover
the edge lengths of the network. We show that triFastSTMF performs similarly to
Fast-NMTF in terms of approximation and prediction performance when fitted on
the whole network. When trained on a specific subnetwork and used to predict
the whole network, triFastSTMF outperforms Fast-NMTF, with an error several
orders of magnitude smaller. The robustness of triFastSTMF is due to tropical
operations, which are less prone to predict large values compared to standard
operations. | [
"cs.LG"
] | false |
2305.06753 | 2023-05-11T12:19:30Z | Comparison of Clustering Algorithms for Statistical Features of
Vibration Data Sets | [
"Philipp Sepin",
"Jana Kemnitz",
"Safoura Rezapour Lakani",
"Daniel Schall"
] | Vibration-based condition monitoring systems are receiving increasing
attention due to their ability to accurately identify different conditions by
capturing dynamic features over a broad frequency range. However, there is
little research on clustering approaches in vibration data and the resulting
solutions are often optimized for a single data set. In this work, we present
an extensive comparison of the clustering algorithms K-means clustering,
OPTICS, and Gaussian mixture model clustering (GMM) applied to statistical
features extracted from the time and frequency domains of vibration data sets.
Furthermore, we investigate the influence of feature combinations, feature
selection using principal component analysis (PCA), and the specified number of
clusters on the performance of the clustering algorithms. We conducted this
comparison in terms of a grid search using three different benchmark data sets.
Our work showed that averaging (Mean, Median) and variance-based features
(Standard Deviation, Interquartile Range) performed significantly better than
shape-based features (Skewness, Kurtosis). In addition, K-means outperformed
GMM slightly for these data sets, whereas OPTICS performed significantly worse.
We were also able to show that feature combinations as well as PCA feature
selection did not result in any significant performance improvements. With an
increase in the specified number of clusters, clustering algorithms performed
better, although there were some specific algorithmic restrictions. | [
"cs.LG"
] | false |
2305.06939 | 2023-05-11T16:17:43Z | Deep Multi-View Subspace Clustering with Anchor Graph | [
"Chenhang Cui",
"Yazhou Ren",
"Jingyu Pu",
"Xiaorong Pu",
"Lifang He"
] | Deep multi-view subspace clustering (DMVSC) has recently attracted increasing
attention due to its promising performance. However, existing DMVSC methods
still have two issues: (1) they mainly focus on using autoencoders to
nonlinearly embed the data, while the embedding may be suboptimal for
clustering because the clustering objective is rarely considered in
autoencoders, and (2) existing methods typically have a quadratic or even cubic
complexity, which makes it challenging to deal with large-scale data. To
address these issues, in this paper we propose a novel deep multi-view subspace
clustering method with anchor graph (DMCAG). To be specific, DMCAG firstly
learns the embedded features for each view independently, which are used to
obtain the subspace representations. To significantly reduce the complexity, we
construct an anchor graph with small size for each view. Then, spectral
clustering is performed on an integrated anchor graph to obtain pseudo-labels.
To overcome the negative impact caused by suboptimal embedded features, we use
pseudo-labels to refine the embedding process to make it more suitable for the
clustering task. Pseudo-labels and embedded features are updated alternately.
Furthermore, we design a strategy to keep the consistency of the labels based
on contrastive learning to enhance the clustering performance. Empirical
studies on real-world datasets show that our method achieves superior
clustering performance over other state-of-the-art methods. | [
"cs.LG"
] | false |
2305.07037 | 2023-05-11T11:54:36Z | Rethink Depth Separation with Intra-layer Links | [
"Feng-Lei Fan",
"Ze-Yu Li",
"Huan Xiong",
"Tieyong Zeng"
] | The depth separation theory is nowadays widely accepted as an effective
explanation for the power of depth, which consists of two parts: i) there
exists a function representable by a deep network; ii) such a function cannot
be represented by a shallow network whose width is lower than a threshold.
However, this theory is established for feedforward networks. Few studies, if
not none, considered the depth separation theory in the context of shortcuts
which are the most common network types in solving real-world problems. Here,
we find that adding intra-layer links can modify the depth separation theory.
First, we report that adding intra-layer links can greatly improve a network's
representation capability through bound estimation, explicit construction, and
functional space analysis. Then, we modify the depth separation theory by
showing that a shallow network with intra-layer links does not need to go as
wide as before to express some hard functions constructed by a deep network.
Such functions include the renowned "sawtooth" functions. Moreover, the saving
of width is up to linear. Our results supplement the existing depth separation
theory by examining its limit in the shortcut domain. Also, the mechanism we
identify can be translated into analyzing the expressivity of popular shortcut
networks such as ResNet and DenseNet, \textit{e.g.}, residual connections
empower a network to represent a sawtooth function efficiently. | [
"cs.LG"
] | false |
2305.07138 | 2023-05-11T21:03:34Z | Promise and Limitations of Supervised Optimal Transport-Based Graph
Summarization via Information Theoretic Measures | [
"Sepideh Neshatfar",
"Abram Magner",
"Salimeh Yasaei Sekeh"
] | Graph summarization is the problem of producing smaller graph representations
of an input graph dataset, in such a way that the smaller compressed graphs
capture relevant structural information for downstream tasks. There is a recent
graph summarization method that formulates an optimal transport-based framework
that allows prior information about node, edge, and attribute importance (never
defined in that work) to be incorporated into the graph summarization process.
However, very little is known about the statistical properties of this
framework. To elucidate this question, we consider the problem of supervised
graph summarization, wherein by using information theoretic measures we seek to
preserve relevant information about a class label. To gain a theoretical
perspective on the supervised summarization problem itself, we first formulate
it in terms of maximizing the Shannon mutual information between the summarized
graph and the class label. We show an NP-hardness of approximation result for
this problem, thereby constraining what one should expect from proposed
solutions. We then propose a summarization method that incorporates mutual
information estimates between random variables associated with sample graphs
and class labels into the optimal transport compression framework. We
empirically show performance improvements over previous works in terms of
classification accuracy and time on synthetic and certain real datasets. We
also theoretically explore the limitations of the optimal transport approach
for the supervised summarization problem and we show that it fails to satisfy a
certain desirable information monotonicity property. | [
"cs.LG"
] | false |
2305.07170 | 2023-05-11T22:50:41Z | Towards Understanding and Improving GFlowNet Training | [
"Max W. Shen",
"Emmanuel Bengio",
"Ehsan Hajiramezanali",
"Andreas Loukas",
"Kyunghyun Cho",
"Tommaso Biancalani"
] | Generative flow networks (GFlowNets) are a family of algorithms that learn a
generative policy to sample discrete objects $x$ with non-negative reward
$R(x)$. Learning objectives guarantee the GFlowNet samples $x$ from the target
distribution $p^*(x) \propto R(x)$ when loss is globally minimized over all
states or trajectories, but it is unclear how well they perform with practical
limits on training resources. We introduce an efficient evaluation strategy to
compare the learned sampling distribution to the target reward distribution. As
flows can be underdetermined given training data, we clarify the importance of
learned flows to generalization and matching $p^*(x)$ in practice. We
investigate how to learn better flows, and propose (i) prioritized replay
training of high-reward $x$, (ii) relative edge flow policy parametrization,
and (iii) a novel guided trajectory balance objective, and show how it can
solve a substructure credit assignment problem. We substantially improve sample
efficiency on biochemical design tasks. | [
"cs.LG"
] | false |
2305.07670 | 2023-05-11T14:40:39Z | Liver Infection Prediction Analysis using Machine Learning to Evaluate
Analytical Performance in Neural Networks by Optimization Techniques | [
"P. Deivendran",
"S. Selvakanmani",
"S. Jegadeesan",
"V. Vinoth Kumar"
] | Liver infection is a common disease, which poses a great threat to human
health, but an optimal technique that can be used for large-scale screening has
yet to be identified. This paper deals with machine learning (ML) algorithms
using different data sets and predictive analyses; ML can be
utilized across different diseases to integrate patterns for
visualization. This paper deals with various machine learning algorithms on
different liver illness datasets to evaluate the analytical performance using
different types of parameters and optimization techniques. The selected
classification algorithms analyze the difference in results and find out the
most excellent categorization models for liver disease. Machine learning
optimization is the procedure of modifying hyperparameters in order to employ
one of the optimization approaches to minimize the cost function. The
hyperparameters include features such as Phosphatase, Direct Bilirubin,
Proteins, Albumin, and the Albumin/Globulin ratio. Since the cost function
describes the difference between the predicted parameter's true value and the
model's prediction, it is crucial to minimize it.
"cs.LG"
] | false |
2305.06531 | 2023-05-11T02:35:16Z | Semantic Random Walk for Graph Representation Learning in Attributed
Graphs | [
"Meng Qin"
] | In this study, we focus on the graph representation learning (a.k.a. network
embedding) in attributed graphs. Different from existing embedding methods that
treat the incorporation of graph structure and semantic as the simple
combination of two optimization objectives, we propose a novel semantic graph
representation (SGR) method to formulate the joint optimization of the two
heterogeneous sources into a common high-order proximity based framework.
Concretely, we first construct an auxiliary weighted graph, where the complex
homogeneous and heterogeneous relations among nodes and attributes in the
original graph are comprehensively encoded. Conventional embedding methods that
consider high-order topology proximities can then be easily applied to the
newly constructed graph to learn the representations of both node and attribute
while capturing the nonlinear high-order intrinsic correlation inside or among
graph structure and semantic. The learned attribute embeddings can also
effectively support some semantic-oriented inference tasks (e.g., semantic
community detection), helping to reveal the graph's deep semantic. The
effectiveness of SGR is further verified on a series of real graphs, where it
achieves impressive performance over other baselines. | [
"cs.SI",
"cs.LG"
] | false |
2305.06576 | 2023-05-11T05:20:41Z | Clustering of Time-Varying Graphs Based on Temporal Label Smoothness | [
"Katsuki Fukumoto",
"Koki Yamada",
"Yuichi Tanaka",
"Hoi-To Wai"
] | We propose a node clustering method for time-varying graphs based on the
assumption that the cluster labels are changed smoothly over time. Clustering
is one of the fundamental tasks in many science and engineering fields
including signal processing, machine learning, and data mining. Although most
existing studies focus on the clustering of nodes in static graphs, we often
encounter time-varying graphs for time-series data, e.g., social networks,
brain functional connectivity, and point clouds. In this paper, we formulate a
node clustering of time-varying graphs as an optimization problem based on
spectral clustering, with a smoothness constraint of the node labels. We solve
the problem with a primal-dual splitting algorithm. Experiments on synthetic
and real-world time-varying graphs are performed to validate the effectiveness
of the proposed approach. | [
"cs.LG",
"eess.SP"
] | false |
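As an illustrative aside on the record above: the static spectral-clustering baseline that the time-varying formulation extends can be sketched in a few lines. This is a minimal sketch under assumed names and toy data (a two-triangle graph), not the paper's code:

```python
import numpy as np

def spectral_bipartition(A):
    """Two-way spectral clustering of an adjacency matrix A: split nodes by
    the sign of the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the unnormalized graph Laplacian L = D - A)."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)        # eigenvalues returned in ascending order
    return (vecs[:, 1] > 0).astype(int)

# Two triangles joined by a single bridge edge: the natural two-cluster graph.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
print(spectral_bipartition(A))  # nodes {0,1,2} and {3,4,5} receive different labels
```

The paper's method adds a temporal label-smoothness constraint on top of this kind of per-snapshot objective and solves it with a primal-dual splitting algorithm.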
2305.06625 | 2023-05-11T07:54:11Z | Dropout Regularization in Extended Generalized Linear Models based on
Double Exponential Families | [
"Benedikt Lütke Schwienhorst",
"Lucas Kock",
"David J. Nott",
"Nadja Klein"
] | Even though dropout is a popular regularization technique, its theoretical
properties are not fully understood. In this paper we study dropout
regularization in extended generalized linear models based on double
exponential families, for which the dispersion parameter can vary with the
features. A theoretical analysis shows that dropout regularization prefers rare
but important features in both the mean and dispersion, generalizing an earlier
result for conventional generalized linear models. Training is performed using
stochastic gradient descent with adaptive learning rate. To illustrate, we
apply dropout to adaptive smoothing with B-splines, where both the mean and
dispersion parameters are modelled flexibly. The important B-spline basis
functions can be thought of as rare features, and we confirm in experiments
that dropout is an effective form of regularization for mean and dispersion
parameters that improves on a penalized maximum likelihood approach with an
explicit smoothness penalty. | [
"stat.ML",
"cs.LG"
] | false |
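As a concrete reference point for the record above, the dropout mechanism itself (in its standard "inverted" form) can be sketched as follows. This is a generic illustration of feature noising, not the double-exponential-family estimator studied in the paper; the function name and tolerances are assumptions:

```python
import numpy as np

def inverted_dropout(X, p, rng):
    """Zero each entry independently with probability p and rescale the
    survivors by 1/(1-p), so the noised features are unbiased: E[out] = X."""
    mask = rng.random(X.shape) >= p
    return X * mask / (1.0 - p)

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 4))
Xd = inverted_dropout(X, p=0.3, rng=rng)

# Column means agree up to Monte Carlo noise, illustrating the unbiasedness
# that lets dropout act as a regularizer rather than introducing a mean shift.
print(np.abs(Xd.mean(axis=0) - X.mean(axis=0)).max())
```

The paper's analysis concerns what this noising implicitly penalizes when applied during training of extended generalized linear models.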
2305.06745 | 2023-05-11T12:05:40Z | Investigating the generative dynamics of energy-based neural networks | [
"Lorenzo Tausani",
"Alberto Testolin",
"Marco Zorzi"
] | Generative neural networks can produce data samples according to the
statistical properties of their training distribution. This feature can be used
to test modern computational neuroscience hypotheses suggesting that
spontaneous brain activity is partially supported by top-down generative
processing. A widely studied class of generative models is that of Restricted
Boltzmann Machines (RBMs), which can be used as building blocks for
unsupervised deep learning architectures. In this work, we systematically
explore the generative dynamics of RBMs, characterizing the number of states
visited during top-down sampling and investigating whether the heterogeneity of
visited attractors could be increased by starting the generation process from
biased hidden states. By considering an RBM trained on a classic dataset of
handwritten digits, we show that the capacity to produce diverse data
prototypes can be increased by initiating top-down sampling from chimera
states, which encode high-level visual features of multiple digits. We also
found that the model is not capable of transitioning between all possible digit
states within a single generation trajectory, suggesting that the top-down
dynamics is heavily constrained by the shape of the energy function. | [
"cs.NE",
"cs.LG"
] | false |
2305.06894 | 2023-05-11T15:30:54Z | Reinterpreting causal discovery as the task of predicting unobserved
joint statistics | [
"Dominik Janzing",
"Philipp M. Faller",
"Leena Chennuru Vankadara"
] | If $X,Y,Z$ denote sets of random variables, two different data sources may
contain samples from $P_{X,Y}$ and $P_{Y,Z}$, respectively. We argue that
causal discovery can help infer properties of the `unobserved joint
distributions' $P_{X,Y,Z}$ or $P_{X,Z}$. The properties may be conditional
independences (as in `integrative causal inference') or also quantitative
statements about dependences.
More generally, we define a learning scenario where the input is a subset of
variables and the label is some statistical property of that subset. Sets of
jointly observed variables define the training points, while unobserved sets
are possible test points. To solve this learning task, we infer, as an
intermediate step, a causal model from the observations that then entails
properties of unobserved sets. Accordingly, we can define the VC dimension of a
class of causal models and derive generalization bounds for the predictions.
Here, causal discovery becomes more modest and more accessible to empirical
tests than usual: rather than trying to find a causal hypothesis that is `true',
a causal hypothesis is {\it useful} whenever it correctly predicts statistical
properties of unobserved joint distributions. This way, a sparse causal graph
that omits weak influences may be more useful than a dense one (despite being
less accurate) because it is able to reconstruct the full joint distribution
from marginal distributions of smaller subsets.
Within such a `pragmatic' application of causal discovery, some popular
heuristic approaches become justified in retrospect. It is, for instance,
allowed to infer DAGs from partial correlations instead of conditional
independences if the DAGs are only used to predict partial correlations. | [
"stat.ML",
"cs.LG"
] | false |
2305.06994 | 2023-05-11T17:30:12Z | A statistical approach to detect sensitive features in a group fairness
setting | [
"Guilherme Dean Pelegrina",
"Miguel Couceiro",
"Leonardo Tomazeli Duarte"
] | The use of machine learning models in decision support systems with high
societal impact raised concerns about unfair (disparate) results for different
groups of people. When evaluating such unfair decisions, one generally relies
on predefined groups that are determined by a set of features that are
considered sensitive. However, such an approach is subjective and does not
guarantee that these features are the only ones to be considered as sensitive
nor that they entail unfair (disparate) outcomes.
In this paper, we propose a preprocessing step to address the task of
automatically recognizing sensitive features that does not require a trained
model to verify unfair results. Our proposal is based on the Hilbert-Schmidt
independence criterion, which measures the statistical dependence of variable
distributions. We hypothesize that if the dependence between the label vector
and a candidate is high for a sensitive feature, then the information provided
by this feature will entail disparate performance measures between groups. Our
empirical results support our hypothesis and show that several features
considered as sensitive in the literature do not necessarily entail disparate
(unfair) results. | [
"cs.LG",
"cs.CY"
] | false |
2305.07036 | 2023-05-11T01:51:36Z | GFlowNets with Human Feedback | [
"Yinchuan Li",
"Shuang Luo",
"Yunfeng Shao",
"Jianye Hao"
] | We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve
the exploration ability when training AI models. For tasks where the reward is
unknown, we fit the reward function through human evaluations on different
trajectories. The goal of GFlowHF is to learn a policy that is strictly
proportional to human ratings, instead of only focusing on human favorite
ratings like RLHF. Experiments show that GFlowHF can achieve better exploration
ability than RLHF. | [
"cs.LG",
"cs.AI"
] | false |
2305.07038 | 2023-05-11T11:57:00Z | Revealing Patterns of Symptomatology in Parkinson's Disease: A Latent
Space Analysis with 3D Convolutional Autoencoders | [
"E. Delgado de las Heras",
"F. J. Martinez-Murcia",
"I. A. Illán",
"C. Jiménez-Mesa",
"D. Castillo-Barnes",
"J. Ramírez",
"J. M. Górriz"
] | This work proposes the use of 3D convolutional variational autoencoders
(CVAEs) to trace the changes and symptomatology produced by neurodegeneration
in Parkinson's disease (PD). In this work, we present a novel approach to
detect and quantify changes in dopamine transporter (DaT) concentration and its
spatial patterns using 3D CVAEs on Ioflupane (FPCIT) imaging. Our approach
leverages the power of deep learning to learn a low-dimensional representation
of the brain imaging data, which then is linked to different symptom categories
using regression algorithms. We demonstrate the effectiveness of our approach
on a dataset of PD patients and healthy controls, and show that general
symptomatology (UPDRS) is linked to a d-dimensional decomposition via the CVAE
with R^2 > 0.25. Our work shows the potential of representation learning not only
in early diagnosis but in understanding neurodegeneration processes and
symptomatology. | [
"eess.IV",
"cs.LG"
] | false |
2305.07040 | 2023-05-11T13:21:26Z | Sequential Experimental Design for Spectral Measurement: Active Learning
Using a Parametric Model | [
"Tomohiro Nabika",
"Kenji Nagata",
"Shun Katakami",
"Masaichiro Mizumaki",
"Masato Okada"
] | In this study, we demonstrate a sequential experimental design for spectral
measurements by active learning using parametric models as predictors. In
spectral measurements, it is necessary to reduce the measurement time because
of sample fragility and high energy costs. To improve the efficiency of
experiments, sequential experimental designs are proposed, in which the
subsequent measurement is designed by active learning using the data obtained
before the measurement. Conventionally, parametric models are employed in data
analysis; when employed for active learning, they are expected to afford a
sequential experimental design that improves the accuracy of data analysis.
However, due to the complexity of the formulas, a sequential experimental
design using general parametric models has not been realized. Therefore, we
applied Bayesian inference-based data analysis using the exchange Monte Carlo
method to realize a sequential experimental design with general parametric
models. In this study, we evaluated the effectiveness of the proposed method by
applying it to Bayesian spectral deconvolution and Bayesian Hamiltonian
selection in X-ray photoelectron spectroscopy. Using numerical experiments with
artificial data, we demonstrated that the proposed method improves the accuracy
of model selection and parameter estimation while reducing the measurement time
compared with the results achieved without active learning or with active
learning using Gaussian process regression. | [
"cs.LG",
"physics.data-an"
] | false |
2305.07141 | 2023-05-11T21:06:39Z | The ConceptARC Benchmark: Evaluating Understanding and Generalization in
the ARC Domain | [
"Arseny Moskvichev",
"Victor Vikram Odouard",
"Melanie Mitchell"
The ability to form and abstract concepts is key to human intelligence, but
such abilities remain lacking in state-of-the-art AI systems. There has been
substantial research on conceptual abstraction in AI, particularly using
idealized domains such as Raven's Progressive Matrices and Bongard problems,
but even when AI systems succeed on such problems, the systems are rarely
evaluated in depth to see if they have actually grasped the concepts they are
meant to capture.
In this paper we describe an in-depth evaluation benchmark for the
Abstraction and Reasoning Corpus (ARC), a collection of few-shot abstraction
and analogy problems developed by Chollet [2019]. In particular, we describe
ConceptARC, a new, publicly available benchmark in the ARC domain that
systematically assesses abstraction and generalization abilities on a number of
basic spatial and semantic concepts. ConceptARC differs from the original ARC
dataset in that it is specifically organized around "concept groups" -- sets of
problems that focus on specific concepts and that vary in complexity and
level of abstraction. We report results on testing humans on this benchmark as
well as three machine solvers: the top two programs from a 2021 ARC competition
and OpenAI's GPT-4. Our results show that humans substantially outperform the
machine solvers on this benchmark, showing abilities to abstract and generalize
concepts that are not yet captured by AI systems. We believe that this
benchmark will spur improvements in the development of AI systems for
conceptual abstraction and in the effective evaluation of such systems. | [
"cs.LG",
"cs.AI"
] | false |
2305.07145 | 2023-05-11T21:23:37Z | Enhancing Petrophysical Studies with Machine Learning: A Field Case
Study on Permeability Prediction in Heterogeneous Reservoirs | [
"Fethi Ali Cheddad"
] | This field case study aims to address the challenge of accurately predicting
petrophysical properties in heterogeneous reservoir formations, which can
significantly impact reservoir performance predictions. The study employed
three machine learning algorithms, namely Artificial Neural Network (ANN),
Random Forest Classifier (RFC), and Support Vector Machine (SVM), to predict
permeability log from conventional logs and match it with core data. The
primary objective of this study was to compare the effectiveness of the three
machine learning algorithms in predicting permeability and determine the
optimal prediction method. The study utilized the Flow Zone Indicator (FZI)
rock typing technique to understand the factors influencing reservoir quality.
The findings will be used to improve reservoir simulation and locate future
wells more accurately. The study concluded that the FZI approach and machine
learning algorithms are effective in predicting permeability log and improving
reservoir performance predictions. | [
"physics.geo-ph",
"cs.LG"
] | false |
2305.07671 | 2023-05-11T16:54:17Z | LatentPINNs: Generative physics-informed neural networks via a latent
representation learning | [
"Mohammad H. Taufik",
"Tariq Alkhalifah"
] | Physics-informed neural networks (PINNs) are promising to replace
conventional partial differential equation (PDE) solvers by offering more
accurate and flexible PDE solutions. However, they are hampered by the
relatively slow convergence and the need to perform additional, potentially
expensive, training for different PDE parameters. To address this limitation, we
introduce latentPINN, a framework that utilizes latent representations of the
PDE parameters as additional (to the coordinates) inputs into PINNs and allows
for training over the distribution of these parameters. Motivated by the recent
progress on generative models, we promote the use of latent diffusion models to
learn compressed latent representations of the PDE parameters distribution and
act as input parameters to NN functional solutions. We use a two-stage training
scheme: in the first stage, we learn the latent representations for the
distribution of PDE parameters. In the second stage, we train a
physics-informed neural network over inputs given by randomly drawn samples
from the coordinate space within the solution domain and samples from the
learned latent representation of the PDE parameters. We test the approach on a
class of level set equations given by the nonlinear Eikonal equation. We
specifically share results corresponding to three different sets of Eikonal
parameters (velocity models). The proposed method performs well on new phase
velocity models without the need for any additional training. | [
"cs.LG",
"physics.comp-ph"
] | false |
2305.09678 | 2023-05-11T14:52:19Z | Anomaly Detection Dataset for Industrial Control Systems | [
"Alireza Dehlaghi-Ghadim",
"Mahshid Helali Moghadam",
"Ali Balador",
"Hans Hansson"
] | Over the past few decades, Industrial Control Systems (ICSs) have been
targeted by cyberattacks and are becoming increasingly vulnerable as more ICSs
are connected to the internet. Using Machine Learning (ML) for Intrusion
Detection Systems (IDS) is a promising approach for ICS cyber protection, but
the lack of suitable datasets for evaluating ML algorithms is a challenge.
Although there are a few commonly used datasets, they may not reflect realistic
ICS network data, lack necessary features for effective anomaly detection, or
be outdated. This paper presents the 'ICS-Flow' dataset, which offers network
data and process state variables logs for supervised and unsupervised ML-based
IDS assessment. The network data includes normal and anomalous network packets
and flows captured from simulated ICS components and emulated networks. The
anomalies were injected into the system through various attack techniques
commonly used by hackers to modify network traffic and compromise ICSs. We also
propose an open-source tool, `ICSFlowGenerator', for generating network flow
parameters from raw network packets. The final dataset comprises over
25,000,000 raw network packets, network flow records, and process variable
logs. The paper describes the methodology used to collect and label the dataset
and provides a detailed data analysis. Finally, we implement several ML models,
including the decision tree, random forest, and artificial neural network to
detect anomalies and attacks, demonstrating that our dataset can be used
effectively for training intrusion detection ML models. | [
"cs.CR",
"cs.LG"
] | false |
2305.14361 | 2023-05-11T19:02:09Z | Criticality Analysis: Bio-inspired Nonlinear Data Representation | [
"Tjeerd V. olde Scheper"
] | The representation of arbitrary data in a biological system is one of the
most elusive elements of biological information processing. The often
logarithmic nature of information in amplitude and frequency presented to
biosystems prevents simple encapsulation of the information contained in the
input. Criticality Analysis (CA) is a bio-inspired method of information
representation within a controlled self-organised critical system that allows
scale-free representation. This is based on the concept of a reservoir of
dynamic behaviour in which self-similar data will create dynamic nonlinear
representations. This unique projection of data preserves the similarity of
data within a multidimensional neighbourhood. The input can be reduced
dimensionally to a projection output that retains the features of the overall
data, yet has a much simpler dynamic response. The method depends only on the
rate control of chaos applied to the underlying controlled models, that allows
the encoding of arbitrary data, and promises optimal encoding of data given
biological relevant networks of oscillators. The CA method allows for a
biologically relevant encoding mechanism of arbitrary input to biosystems,
creating a suitable model for information processing in varying complexity of
organisms and scale-free data representation for machine learning. | [
"q-bio.NC",
"cs.LG"
] | false |
2305.06541 | 2023-05-11T03:08:49Z | Spectral Clustering on Large Datasets: When Does it Work? Theory from
Continuous Clustering and Density Cheeger-Buser | [
"Timothy Chu",
"Gary Miller",
"Noel Walkington"
] | Spectral clustering is one of the most popular clustering algorithms that has
stood the test of time. It is simple to describe, can be implemented using
standard linear algebra, and often finds better clusters than traditional
clustering algorithms like $k$-means and $k$-centers. The foundational
algorithm for two-way spectral clustering, by Shi and Malik, creates a
geometric graph from data and finds a spectral cut of the graph.
In modern machine learning, many data sets are modeled as a large number of
points drawn from a probability density function. Little is known about when
spectral clustering works in this setting -- and when it doesn't. Past
researchers justified spectral clustering by appealing to the graph Cheeger
inequality (which states that the spectral cut of a graph approximates the
``Normalized Cut''), but this justification is known to break down on large
data sets.
We provide theoretically-informed intuition about spectral clustering on
large data sets drawn from probability densities, by proving when a continuous
form of spectral clustering considered by past researchers (the unweighted
spectral cut of a probability density) finds good clusters of the underlying
density itself. Our work suggests that Shi-Malik spectral clustering works well
on data drawn from mixtures of Laplace distributions, and works poorly on data
drawn from certain other densities, such as a density we call the `square-root
trough'.
Our core theorem proves that weighted spectral cuts have low weighted
isoperimetry for all probability densities. Our key tool is a new Cheeger-Buser
inequality for all probability densities, including discontinuous ones. | [
"cs.LG",
"cs.AI",
"cs.DS",
"math.FA"
] | false |
2305.06584 | 2023-05-11T05:44:36Z | Active Learning in the Predict-then-Optimize Framework: A Margin-Based
Approach | [
"Mo Liu",
"Paul Grigas",
"Heyuan Liu",
"Zuo-Jun Max Shen"
] | We develop the first active learning method in the predict-then-optimize
framework. Specifically, we develop a learning method that sequentially decides
whether to request the "labels" of feature samples from an unlabeled data
stream, where the labels correspond to the parameters of an optimization model
for decision-making. Our active learning method is the first to be directly
informed by the decision error induced by the predicted parameters, which is
referred to as the Smart Predict-then-Optimize (SPO) loss. Motivated by the
structure of the SPO loss, our algorithm adopts a margin-based criterion
utilizing the concept of distance to degeneracy and minimizes a tractable
surrogate of the SPO loss on the collected data. In particular, we develop an
efficient active learning algorithm with both hard and soft rejection variants,
each with theoretical excess risk (i.e., generalization) guarantees. We further
derive bounds on the label complexity, which refers to the number of samples
whose labels are acquired to achieve a desired small level of SPO risk. Under
some natural low-noise conditions, we show that these bounds can be better than
the naive supervised learning approach that labels all samples. Furthermore,
when using the SPO+ loss function, a specialized surrogate of the SPO loss, we
derive a significantly smaller label complexity under separability conditions.
We also present numerical evidence showing the practical value of our proposed
algorithms in the settings of personalized pricing and the shortest path
problem. | [
"cs.LG",
"math.OC",
"stat.ML"
] | false |
2305.06660 | 2023-05-11T08:55:56Z | On the convergence of the MLE as an estimator of the learning rate in
the Exp3 algorithm | [
"Julien Aubert",
"Luc Lehéricy",
"Patricia Reynaud-Bouret"
] | When fitting the learning data of an individual to algorithm-like learning
models, the observations are so dependent and non-stationary that one may
wonder what the classical Maximum Likelihood Estimator (MLE) could do, even if
it is the usual tool applied to experimental cognition. Our objective in this
work is to show that the estimation of the learning rate cannot be efficient if
the learning rate is constant in the classical Exp3 (Exponential weights for
Exploration and Exploitation) algorithm. Secondly, we show that if the learning
rate decreases polynomially with the sample size, then the prediction error and
in some cases the estimation error of the MLE satisfy bounds in probability
that decrease at a polynomial rate. | [
"cs.LG",
"math.ST",
"stat.TH"
] | false |
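For reference alongside the record above, the Exp3 algorithm whose learning rate the paper studies can be sketched as follows. This is a standard textbook variant with uniform-exploration mixing; the bandit instance, seed, and parameter values are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def exp3(rewards, eta, gamma=0.1, seed=0):
    """Exp3: exponential weights with importance-weighted reward estimates.
    rewards: (T, K) array in [0, 1]; only the pulled arm's entry is 'observed'.
    eta is the learning rate, gamma the uniform-exploration mixing weight."""
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    w = np.ones(K)
    pulls = np.empty(T, dtype=int)
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / K   # mixing keeps p bounded away from 0
        a = rng.choice(K, p=p)
        pulls[t] = a
        w[a] *= np.exp(eta * rewards[t, a] / p[a])  # importance-weighted update
        w /= w.max()                                # rescale to avoid overflow
    return pulls

# Bernoulli bandit: arm 1 pays off far more often, so Exp3 should lock onto it.
rng = np.random.default_rng(1)
R = np.stack([rng.random(2000) < 0.1, rng.random(2000) < 0.9], axis=1).astype(float)
pulls = exp3(R, eta=0.05)
print((pulls[-500:] == 1).mean())  # fraction of late pulls on the better arm
```

The estimation question in the paper is the inverse one: given a trajectory of pulls, how well can the MLE recover `eta`.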
2305.06703 | 2023-05-11T10:27:59Z | Neural Fine-Gray: Monotonic neural networks for competing risks | [
"Vincent Jeanselme",
"Chang Ho Yoon",
"Brian Tom",
"Jessica Barrett"
] | Time-to-event modelling, known as survival analysis, differs from standard
regression as it addresses censoring in patients who do not experience the
event of interest. Despite competitive performances in tackling this problem,
machine learning methods often ignore other competing risks that preclude the
event of interest. This practice biases the survival estimation. Extensions to
address this challenge often rely on parametric assumptions or numerical
estimations leading to sub-optimal survival approximations. This paper
leverages constrained monotonic neural networks to model each competing
survival distribution. This modelling choice ensures the exact likelihood
maximisation at a reduced computational cost by using automatic
differentiation. The effectiveness of the solution is demonstrated on one
synthetic and three medical datasets. Finally, we discuss the implications of
considering competing risks when developing risk scores for medical practice. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2305.06707 | 2023-05-11T10:33:36Z | A data-driven rutting depth short-time prediction model with
metaheuristic optimization for asphalt pavements based on RIOHTrack | [
"Zhuoxuan Li",
"Iakov Korovin",
"Xinli Shi",
"Sergey Gorbachev",
"Nadezhda Gorbacheva",
"Wei Huang",
"Jinde Cao"
] | Rutting of asphalt pavements is a crucial design criterion in various
pavement design guides. A good road transportation base can provide security
for the transportation of oil and gas in road transportation. This study
attempts to develop a robust artificial intelligence model to estimate the
rutting depth of different asphalt pavements, using rutting depth clips,
temperature, and load axes as primary characteristics. The experimental data
were obtained from 19 asphalt
pavements with different crude oil sources on a 2.038 km long full-scale field
accelerated pavement test track (RIOHTrack, Road Track Institute) in Tongzhou,
Beijing. In addition, this paper also proposes to build complex networks with
different pavement rutting depths through complex network methods and the
Louvain algorithm for community detection. The most critical structural
elements can be selected from different asphalt pavement rutting data, and
similar structural elements can be found. An extreme learning machine algorithm
with residual correction (RELM) is designed and optimized using an independent
adaptive particle swarm algorithm. The experimental results of the proposed
method are compared with several classical machine learning algorithms, with
predictions of Average Root Mean Squared Error, Average Mean Absolute Error,
and Average Mean Absolute Percentage Error for 19 asphalt pavements reaching
1.742, 1.363, and 1.94\% respectively. The experiments demonstrate that the
RELM algorithm has an advantage over classical machine learning methods in
dealing with non-linear problems in road engineering. Notably, the method
ensures the adaptation of the simulated environment to different levels of
abstraction through the cognitive analysis of the production environment
parameters. | [
"cs.AI",
"cs.LG",
"cs.NE"
] | false |
2305.06709 | 2023-05-11T10:34:27Z | NUBO: A Transparent Python Package for Bayesian Optimisation | [
"Mike Diessner",
"Kevin Wilson",
"Richard D. Whalley"
] | NUBO, short for Newcastle University Bayesian Optimisation, is a Bayesian
optimisation framework for the optimisation of expensive-to-evaluate black-box
functions, such as physical experiments and computer simulators. Bayesian
optimisation is a cost-efficient optimisation strategy that uses surrogate
modelling via Gaussian processes to represent an objective function and
acquisition functions to guide the selection of candidate points to approximate
the global optimum of the objective function. NUBO itself focuses on
transparency and user experience to make Bayesian optimisation easily
accessible to researchers from all disciplines. Clean and understandable code,
precise references, and thorough documentation ensure transparency, while user
experience is ensured by a modular and flexible design, easy-to-write syntax,
and careful selection of Bayesian optimisation algorithms. NUBO allows users to
tailor Bayesian optimisation to their specific problem by writing the
optimisation loop themselves using the provided building blocks. It supports
sequential single-point, parallel multi-point, and asynchronous optimisation of
bounded, constrained, and/or mixed (discrete and continuous) parameter input
spaces. Only algorithms and methods that are extensively tested and validated
to perform well are included in NUBO. This ensures that the package remains
compact and does not overwhelm the user with an unnecessarily large number of
options. The package is written in Python but does not require expert knowledge
of Python to optimise your simulators and experiments. NUBO is distributed as
open-source software under the BSD 3-Clause licence. | [
"cs.LG",
"cs.MS",
"stat.ML"
] | false |
2305.06862 | 2023-05-11T15:01:30Z | A General Framework for Visualizing Embedding Spaces of Neural Survival
Analysis Models Based on Angular Information | [
"George H. Chen"
] | We propose a general framework for visualizing any intermediate embedding
representation used by any neural survival analysis model. Our framework is
based on so-called anchor directions in an embedding space. We show how to
estimate these anchor directions using clustering or, alternatively, using
user-supplied "concepts" defined by collections of raw inputs (e.g., feature
vectors all from female patients could encode the concept "female"). For
tabular data, we present visualization strategies that reveal how anchor
directions relate to raw clinical features and to survival time distributions.
We then show how these visualization ideas extend to handling raw inputs that
are images. Our framework is built on looking at angles between vectors in an
embedding space, where there could be "information loss" by ignoring magnitude
information. We show how this loss results in a "clumping" artifact that
appears in our visualizations, and how to reduce this information loss in
practice. | [
"stat.ML",
"cs.HC",
"cs.LG"
] | false |
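The angle-based view described in the record above (comparing embeddings against anchor directions while discarding magnitude) can be sketched minimally as follows; the toy vectors and function name are illustrative assumptions:

```python
import numpy as np

def anchor_angles(E, anchors):
    """Angle in degrees between each embedding row and each anchor direction.
    Only direction is used: both sets of vectors are normalized first, so
    magnitude information is deliberately discarded."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    cos = np.clip(En @ An.T, -1.0, 1.0)   # clip guards arccos against rounding
    return np.degrees(np.arccos(cos))

E = np.array([[2.0, 0.0], [0.0, 0.5], [1.0, 1.0]])      # three embeddings
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])            # two anchor directions
print(anchor_angles(E, anchors).round(1))  # rows: [0, 90], [90, 0], [45, 45]
```

Note how the first two embeddings get identical angle profiles regardless of their very different norms; this is exactly the "information loss" the paper's clumping discussion concerns.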
2305.06865 | 2023-05-11T15:06:08Z | Multi-Tier Client Selection for Mobile Federated Learning Networks | [
"Yulan Gao",
"Yansong Zhao",
"Han Yu"
] | Federated learning (FL), which addresses data privacy issues by training
models on resource-constrained mobile devices in a distributed manner, has
attracted significant research attention. However, the problem of optimizing FL
client selection in mobile federated learning networks (MFLNs), where devices
move in and out of each others' coverage and no FL server knows all the data
owners, remains open. To bridge this gap, we propose a first-of-its-kind
\underline{Soc}ially-aware \underline{Fed}erated \underline{C}lient
\underline{S}election (SocFedCS) approach to minimize costs and train
high-quality FL models. SocFedCS enriches the candidate FL client pool by
enabling data owners to propagate FL task information through their local
networks of trust, even as devices are moving into and out of each others'
coverage. Based on Lyapunov optimization, we first transform this time-coupled
problem into a step-by-step optimization problem. Then, we design a method
based on alternating minimization and self-adaptive global best harmony search
to solve this mixed-integer optimization problem. Extensive experiments
comparing SocFedCS against five state-of-the-art approaches based on four
real-world multimedia datasets demonstrate that it achieves 2.06\% higher test
accuracy and 12.24\% lower cost on average than the best-performing baseline. | [
"cs.LG",
"cs.DC",
"cs.NI"
] | false |
2305.07132 | 2023-05-11T20:50:51Z | Tackling Interpretability in Audio Classification Networks with
Non-negative Matrix Factorization | [
"Jayneel Parekh",
"Sanjeel Parekh",
"Pavlo Mozharovskyi",
"Gaël Richard",
"Florence d'Alché-Buc"
] | This paper tackles two major problem settings for interpretability of audio
processing networks, post-hoc and by-design interpretation. For post-hoc
interpretation, we aim to interpret decisions of a network in terms of
high-level audio objects that are also listenable for the end-user. This is
extended to present an inherently interpretable model with high performance. To
this end, we propose a novel interpreter design that incorporates non-negative
matrix factorization (NMF). In particular, an interpreter is trained to
generate a regularized intermediate embedding from hidden layers of a target
network, learnt as time-activations of a pre-learnt NMF dictionary. Our
methodology allows us to generate intuitive audio-based interpretations that
explicitly enhance parts of the input signal most relevant for a network's
decision. We demonstrate our method's applicability on a variety of
classification tasks, including multi-label data for real-world audio and
music. | [
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
2305.07508 | 2023-05-11T08:11:19Z | MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule
Diffusion Generation | [
"Xingang Peng",
"Jiaqi Guan",
"Qiang Liu",
"Jianzhu Ma"
] | Deep generative models have recently achieved superior performance in 3D
molecule generation. Most of them first generate atoms and then add chemical
bonds based on the generated atoms in a post-processing manner. However, there
might be no corresponding bond solution for the temporally generated atoms as
their locations are generated without considering potential bonds. We define
this problem as the atom-bond inconsistency problem and claim it is the main
reason for current approaches to generating unrealistic 3D molecules. To
overcome this problem, we propose a new diffusion model called MolDiff which
can generate atoms and bonds simultaneously while still maintaining their
consistency by explicitly modeling the dependence between their relationships.
We evaluated the generation ability of our proposed model and the quality of
the generated molecules using criteria related to both geometry and chemical
properties. The empirical studies showed that our model outperforms previous
approaches, achieving a three-fold improvement in success rate and generating
molecules with significantly better quality. | [
"q-bio.BM",
"cs.LG",
"q-bio.QM"
] | false |
2305.10353 | 2023-05-11T07:28:40Z | An Ensemble Learning Approach for Exercise Detection in Type 1 Diabetes
Patients | [
"Ke Ma",
"Hongkai Chen",
"Shan Lin"
] | Type 1 diabetes is a serious disease in which individuals are unable to
regulate their blood glucose levels, leading to various medical complications.
Artificial pancreas (AP) systems have been developed as a solution for type 1
diabetic patients to mimic the behavior of the pancreas and regulate blood
glucose levels. However, current AP systems lack detection capabilities for
exercise-induced glucose intake, which can last up to 4 to 8 hours. This
incapability can lead to hypoglycemia, which if left untreated, could have
serious consequences, including death. Existing exercise detection methods are
either limited to single sensor data or use inaccurate models for exercise
detection, making them less effective in practice. In this work, we propose an
ensemble learning framework that combines a data-driven physiological model and
a Siamese network to leverage multiple physiological signal streams for
exercise detection with high accuracy. To evaluate the effectiveness of our
proposed approach, we utilized a public dataset with 12 diabetic patients
collected from an 8-week clinical trial. Our approach achieves a true positive
rate for exercise detection of 86.4% and a true negative rate of 99.1%,
outperforming state-of-the-art solutions. | [
"eess.SP",
"cs.LG",
"cs.NI",
"68T07 (Primary) 34A05 (Secondary)",
"J.3"
] | false |
2305.07091 | 2023-05-11T18:54:36Z | Stability and Convergence of Distributed Stochastic Approximations with
large Unbounded Stochastic Information Delays | [
"Adrian Redder",
"Arunselvan Ramaswamy",
"Holger Karl"
] | We generalize the Borkar-Meyn stability Theorem (BMT) to distributed
stochastic approximations (SAs) with information delays that possess an
arbitrary moment bound. To model the delays, we introduce Age of Information
Processes (AoIPs): stochastic processes on the non-negative integers with a
unit growth property. We show that AoIPs with an arbitrary moment bound cannot
exceed any fraction of time infinitely often. In combination with a suitably
chosen stepsize, this property turns out to be sufficient for the stability of
distributed SAs. Compared to the BMT, our analysis requires crucial
modifications and a new line of argument to handle the SA errors caused by AoI.
In our analysis, we show that these SA errors satisfy a recursive inequality.
To evaluate this recursion, we propose a new Gronwall-type inequality for
time-varying lower limits of summations. As applications to our distributed
BMT, we discuss distributed gradient-based optimization and a new approach to
analyzing SAs with momentum. | [
"math.OC",
"cs.DC",
"cs.LG",
"cs.MA",
"math.DS"
] | false |
2305.07308 | 2023-05-12T08:28:58Z | Efficient Search of Comprehensively Robust Neural Architectures via
Multi-fidelity Evaluation | [
"Jialiang Sun",
"Wen Yao",
"Tingsong Jiang",
"Xiaoqian Chen"
] | Neural architecture search (NAS) has emerged as one successful technique to
find robust deep neural network (DNN) architectures. However, most existing
robustness evaluations in NAS only consider $l_{\infty}$ norm-based adversarial
noises. In order to improve the robustness of DNN models against multiple types
of noises, it is necessary to consider a comprehensive evaluation in NAS for
robust architectures. But with the increasing number of types of robustness
evaluations, it also becomes more time-consuming to find comprehensively robust
architectures. To alleviate this problem, we propose a novel efficient search
of comprehensively robust neural architectures via multi-fidelity evaluation
(ES-CRNA-ME). Specifically, we first search for comprehensively robust
architectures under multiple types of evaluations using the
weight-sharing-based NAS method, including different $l_{p}$ norm attacks,
semantic adversarial attacks, and composite adversarial attacks. In addition,
we reduce the number of robustness evaluations by the correlation analysis,
which can incorporate similar evaluations and decrease the evaluation cost.
Finally, we propose a multi-fidelity online surrogate during optimization to
further decrease the search cost. On the basis of the surrogate constructed by
low-fidelity data, the online high-fidelity data is utilized to finetune the
surrogate. Experiments on CIFAR10 and CIFAR100 datasets show the effectiveness
of our proposed method. | [
"cs.CV"
] | false |
2305.07328 | 2023-05-12T09:03:38Z | Configurable Spatial-Temporal Hierarchical Analysis for Flexible Video
Anomaly Detection | [
"Kai Cheng",
"Xinhua Zeng",
"Yang Liu",
"Tian Wang",
"Chengxin Pang",
"Jing Teng",
"Zhaoyang Xia",
"Jing Liu"
] | Video anomaly detection (VAD) is a vital task with great practical
applications in industrial surveillance, security system, and traffic control.
Unlike previous unsupervised VAD methods that adopt a fixed structure to learn
normality without considering different detection demands, we design a
spatial-temporal hierarchical architecture (STHA) as a configurable
architecture to flexibly detect different degrees of anomaly. The comprehensive
structure of the STHA is delineated into a tripartite hierarchy, encompassing
the following tiers: the stream level, the stack level, and the block level.
Specifically, we design several auto-encoder-based blocks that possess varying
capacities for extracting normal patterns. Then, we stack blocks according to
the complexity degrees with both intra-stack and inter-stack residual links to
learn hierarchical normality gradually. Considering the multisource knowledge
of videos, we also model the spatial normality of video frames and temporal
normality of RGB difference by designing two parallel streams consisting of
stacks. Thus, STHA can provide various representation learning abilities by
expanding or contracting hierarchically to detect anomalies of different
degrees. Since the anomaly set is complicated and unbounded, our STHA can
adjust its detection ability to adapt to the human detection demands and the
complexity degree of anomaly that happened in the history of a scene. We
conduct experiments on three benchmarks and perform extensive analysis, and the
results demonstrate that our method performs comparably to the
state-of-the-art methods. In addition, we design a toy dataset to prove that
our model can better balance the learning ability to adapt to different
detection demands. | [
"cs.CV"
] | false |
2305.07342 | 2023-05-12T09:39:08Z | BundleRecon: Ray Bundle-Based 3D Neural Reconstruction | [
"Weikun Zhang",
"Jianke Zhu"
] | With the growing popularity of neural rendering, there has been an increasing
number of neural implicit multi-view reconstruction methods. While many models
have been enhanced in terms of positional encoding, sampling, rendering, and
other aspects to improve the reconstruction quality, current methods do not
fully leverage the information among neighboring pixels during the
reconstruction process. To address this issue, we propose an enhanced model
called BundleRecon. In the existing approaches, sampling is performed by a
single ray that corresponds to a single pixel. In contrast, our model samples a
patch of pixels using a bundle of rays, which incorporates information from
neighboring pixels. Furthermore, we design bundle-based constraints to further
improve the reconstruction quality. Experimental results demonstrate that
BundleRecon is compatible with the existing neural implicit multi-view
reconstruction methods and can improve their reconstruction quality. | [
"cs.CV"
] | false |
2305.07397 | 2023-05-12T11:48:32Z | Learning Monocular Depth in Dynamic Environment via Context-aware
Temporal Attention | [
"Zizhang Wu",
"Zhuozheng Li",
"Zhi-Gang Fan",
"Yunzhe Wu",
"Yuanzhu Gan",
"Jian Pu",
"Xianzhi Li"
] | The monocular depth estimation task has recently revealed encouraging
prospects, especially for the autonomous driving task. To tackle the ill-posed
problem of 3D geometric reasoning from 2D monocular images, multi-frame
monocular methods are developed to leverage the perspective correlation
information from sequential temporal frames. However, moving objects such as
cars and trains usually violate the static scene assumption, leading to feature
inconsistency deviation and misaligned cost values, which would mislead the
optimization algorithm. In this work, we present CTA-Depth, a Context-aware
Temporal Attention guided network for multi-frame monocular Depth estimation.
Specifically, we first apply a multi-level attention enhancement module to
integrate multi-level image features to obtain an initial depth and pose
estimation. Then the proposed CTA-Refiner is adopted to alternatively optimize
the depth and pose. During the refinement process, context-aware temporal
attention (CTA) is developed to capture the global temporal-context
correlations to maintain the feature consistency and estimation integrity of
moving objects. In particular, we propose a long-range geometry embedding (LGE)
module to produce a long-range temporal geometry prior. Our approach achieves
significant improvements over state-of-the-art approaches on three benchmark
datasets. | [
"cs.CV"
] | false |
2305.07540 | 2023-05-12T15:06:17Z | Content-based jewellery item retrieval using the local region-based
histograms | [
"Amin Muhammad Shoib",
"Summaira Jabeen",
"Changbo Wang",
"Tassawar Ali"
] | Jewellery item retrieval is regularly used to find what people want on online
marketplaces using a sample query reference image. Despite recent
developments, content-based jewellery item retrieval (CBJIR) still has
limitations in real-world visual search due to the co-occurrence of multiple
jewellery items, the occlusion of jewellery goods in images or visual streams,
and shape deformation. This
article proposed a content-based jewellery item retrieval method using the
local region-based histograms in HSV color space. Using five local regions, our
novel jewellery classification module extracts the specific feature vectors
from the query image. The jewellery classification module is also applied to
the jewellery database to extract feature vectors. Finally, the similarity
score is computed between the database and query feature vectors to retrieve
the jewellery items from the database. The proposed method's performance is
tested on publicly available jewellery item retrieval datasets, i.e. ringFIR
and Fashion Product Images dataset. The experimental results demonstrate the
dominance of the proposed method over the baseline methods for retrieving
desired jewellery products. | [
"cs.CV"
] | false |
2305.07602 | 2023-05-12T16:51:14Z | ViT Unified: Joint Fingerprint Recognition and Presentation Attack
Detection | [
"Steven A. Grosz",
"Kanishka P. Wijewardena",
"Anil K. Jain"
] | A secure fingerprint recognition system must contain both a presentation
attack (i.e., spoof) detection and recognition module in order to protect users
against unwanted access by malicious users. Traditionally, these tasks would be
carried out by two independent systems; however, recent studies have
demonstrated the potential to have one unified system architecture in order to
reduce the computational burdens on the system, while maintaining high
accuracy. In this work, we leverage a vision transformer architecture for joint
spoof detection and matching and report competitive results with
state-of-the-art (SOTA) models for both a sequential system (two ViT models
operating independently) and a unified architecture (a single ViT model for
both tasks). ViT models are particularly well suited for this task as the ViT's
global embedding encodes features useful for recognition, whereas the
individual, local embeddings are useful for spoof detection. We demonstrate the
capability of our unified model to achieve an average integrated matching (IM)
accuracy of 98.87% across LivDet 2013 and 2015 CrossMatch sensors. This is
comparable to IM accuracy of 98.95% of our sequential dual-ViT system, but with
~50% of the parameters and ~58% of the latency. | [
"cs.CV"
] | false |
2305.07713 | 2023-05-12T18:08:51Z | Multi-Modal 3D Object Detection by Box Matching | [
"Zhe Liu",
"Xiaoqing Ye",
"Zhikang Zou",
"Xinwei He",
"Xiao Tan",
"Errui Ding",
"Jingdong Wang",
"Xiang Bai"
] | Multi-modal 3D object detection has received growing attention as the
information from different sensors like LiDAR and cameras are complementary.
Most fusion methods for 3D detection rely on an accurate alignment and
calibration between 3D point clouds and RGB images. However, such an assumption
is not reliable in a real-world self-driving system, as the alignment between
different modalities is easily affected by asynchronous sensors and disturbed
sensor placement. We propose a novel {F}usion network by {B}ox {M}atching
(FBMNet) for multi-modal 3D detection, which provides an alternative way for
cross-modal feature alignment by learning the correspondence at the bounding
box level to free up the dependency of calibration during inference. With the
learned assignments between 3D and 2D object proposals, the fusion for
detection can be effectively performed by combining their ROI features. Extensive
experiments on the nuScenes dataset demonstrate that our method is much more
stable in dealing with challenging cases such as asynchronous sensors,
misaligned sensor placement, and degenerated camera images than existing fusion
methods. We hope that our FBMNet could provide an available solution to dealing
with these challenging cases for safety in real autonomous driving scenarios.
Codes will be publicly available at https://github.com/happinesslz/FBMNet. | [
"cs.CV"
] | false |
2305.07214 | 2023-05-12T03:05:40Z | MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition | [
"Xinyu Gong",
"Sreyas Mohan",
"Naina Dhingra",
"Jean-Charles Bazin",
"Yilei Li",
"Zhangyang Wang",
"Rakesh Ranjan"
] | In this paper, we study a novel problem in egocentric action recognition,
which we term as "Multimodal Generalization" (MMG). MMG aims to study how
systems can generalize when data from certain modalities is limited or even
completely missing. We thoroughly investigate MMG in the context of standard
supervised action recognition and the more challenging few-shot setting for
learning new action categories. MMG consists of two novel scenarios, designed
to support security, and efficiency considerations in real-world applications:
(1) missing modality generalization where some modalities that were present
during the train time are missing during the inference time, and (2)
cross-modal zero-shot generalization, where the modalities present during the
inference time and the training time are disjoint. To enable this
investigation, we construct a new dataset MMG-Ego4D containing data points with
video, audio, and inertial motion sensor (IMU) modalities. Our dataset is
derived from Ego4D dataset, but processed and thoroughly re-annotated by human
experts to facilitate research in the MMG problem. We evaluate a diverse array
of models on MMG-Ego4D and propose new methods with improved generalization
ability. In particular, we introduce a new fusion module with modality dropout
training, contrastive-based alignment training, and a novel cross-modal
prototypical loss for better few-shot performance. We hope this study will
serve as a benchmark and guide future research in multimodal generalization
problems. The benchmark and code will be available at
https://github.com/facebookresearch/MMG_Ego4D. | [
"cs.CV",
"cs.AI"
] | true |
2305.07257 | 2023-05-12T05:26:55Z | A Central Asian Food Dataset for Personalized Dietary Interventions,
Extended Abstract | [
"Aknur Karabay",
"Arman Bolatov",
"Huseyin Atakan Varol",
"Mei-Yen Chan"
] | Nowadays, it is common for people to take photographs of every beverage,
snack, or meal they eat and then post these photographs on social media
platforms. Leveraging these social trends, real-time food recognition and
reliable classification of these captured food images can potentially help
replace some of the tedious recording and coding of food diaries to enable
personalized dietary interventions. Although Central Asian cuisine is
culturally and historically distinct, there has been little published data on
the food and dietary habits of people in this region. To fill this gap, we aim
to create a reliable dataset of regional foods that is easily accessible to
both public consumers and researchers. To the best of our knowledge, this is
the first work on creating a Central Asian Food Dataset (CAFD). The final
dataset contains 42 food categories and over 16,000 images of national dishes
unique to this region. We achieved a classification accuracy of 88.70\% (42
classes) on the CAFD using the ResNet152 neural network model. The food
recognition models trained on the CAFD demonstrate computer vision's
effectiveness and high accuracy for dietary assessment. | [
"cs.CV",
"cs.LG"
] | false |
2305.07299 | 2023-05-12T08:10:14Z | An Object SLAM Framework for Association, Mapping, and High-Level Tasks | [
"Yanmin Wu",
"Yunzhou Zhang",
"Delong Zhu",
"Zhiqiang Deng",
"Wenkai Sun",
"Xin Chen",
"Jian Zhang"
] | Object SLAM is considered increasingly significant for robot high-level
perception and decision-making. Existing studies fall short in terms of data
association, object representation, and semantic mapping and frequently rely on
additional assumptions, limiting their performance. In this paper, we present a
comprehensive object SLAM framework that focuses on object-based perception and
object-oriented robot tasks. First, we propose an ensemble data association
approach for associating objects in complicated conditions by incorporating
parametric and nonparametric statistic testing. In addition, we suggest an
outlier-robust centroid and scale estimation algorithm for modeling objects
based on the iForest and line alignment. Then a lightweight and object-oriented
map is represented by estimated general object models. Taking into
consideration the semantic invariance of objects, we convert the object map to
a topological map to provide semantic descriptors to enable multi-map matching.
Finally, we suggest an object-driven active exploration strategy to achieve
autonomous mapping in the grasping scenario. A range of public datasets and
real-world results in mapping, augmented reality, scene matching,
relocalization, and robotic manipulation have been used to evaluate the
proposed object SLAM framework for its efficient performance. | [
"cs.RO",
"cs.CV"
] | false |
2305.07514 | 2023-05-12T14:30:07Z | BlendFields: Few-Shot Example-Driven Facial Modeling | [
"Kacper Kania",
"Stephan J. Garbin",
"Andrea Tagliasacchi",
"Virginia Estellers",
"Kwang Moo Yi",
"Julien Valentin",
"Tomasz Trzciński",
"Marek Kowalski"
] | Generating faithful visualizations of human faces requires capturing both
coarse and fine-level details of the face geometry and appearance. Existing
methods are either data-driven, requiring an extensive corpus of data not
publicly accessible to the research community, or fail to capture fine details
because they rely on geometric face models that cannot represent fine-grained
details in texture with a mesh discretization and linear deformation designed
to model only a coarse face geometry. We introduce a method that bridges this
gap by drawing inspiration from traditional computer graphics techniques.
Unseen expressions are modeled by blending appearance from a sparse set of
extreme poses. This blending is performed by measuring local volumetric changes
in those expressions and locally reproducing their appearance whenever a
similar expression is performed at test time. We show that our method
generalizes to unseen expressions, adding fine-grained effects on top of smooth
volumetric deformations of a face, and demonstrate how it generalizes beyond
faces. | [
"cs.CV",
"cs.GR"
] | true |
2305.07528 | 2023-05-12T14:42:47Z | WEDGE: A multi-weather autonomous driving dataset built from generative
vision-language models | [
"Aboli Marathe",
"Deva Ramanan",
"Rahee Walambe",
"Ketan Kotecha"
] | The open road poses many challenges to autonomous perception, including poor
visibility from extreme weather conditions. Models trained on good-weather
datasets frequently fail at detection in these out-of-distribution settings. To
aid adversarial robustness in perception, we introduce WEDGE (WEather images by
DALL-E GEneration): a synthetic dataset generated with a vision-language
generative model via prompting. WEDGE consists of 3360 images in 16 extreme
weather conditions manually annotated with 16513 bounding boxes, supporting
research in the tasks of weather classification and 2D object detection. We
have analyzed WEDGE from research standpoints, verifying its effectiveness for
extreme-weather autonomous perception. We establish baseline performance for
classification and detection with 53.87% test accuracy and 45.41 mAP. Most
importantly, WEDGE can be used to fine-tune state-of-the-art detectors,
improving SOTA performance on real-world weather benchmarks (such as DAWN) by
4.48 AP for well-generated classes like trucks. WEDGE has been collected under
OpenAI's terms of use and is released for public use under the CC BY-NC-SA 4.0
license. The repository for this work and dataset is available at
https://infernolia.github.io/WEDGE. | [
"cs.CV",
"cs.AI"
] | false |
2305.07558 | 2023-05-12T15:34:20Z | Measuring Progress in Fine-grained Vision-and-Language Understanding | [
"Emanuele Bugliarello",
"Laurent Sartran",
"Aishwarya Agrawal",
"Lisa Anne Hendricks",
"Aida Nematzadeh"
] | While pretraining on large-scale image-text data from the Web has facilitated
rapid progress on many vision-and-language (V&L) tasks, recent work has
demonstrated that pretrained models lack "fine-grained" understanding, such as
the ability to recognise relationships, verbs, and numbers in images. This has
resulted in an increased interest in the community to either develop new
benchmarks or models for such capabilities. To better understand and quantify
progress in this direction, we investigate four competitive V&L models on four
fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al.,
2022) consistently outperforms other baselines, and that modelling innovations
can impact performance more than scaling Web data, which even degrades
performance sometimes. Through a deeper investigation of X-VLM, we highlight
the importance of both novel losses and rich data sources for learning
fine-grained skills. Finally, we inspect training dynamics, and discover that
for some tasks, performance peaks early in training or significantly
fluctuates, never converging. | [
"cs.CL",
"cs.CV"
] | true |
2305.07639 | 2023-05-12T17:48:05Z | Efficient Neural Network based Classification and Outlier Detection for
Image Moderation using Compressed Sensing and Group Testing | [
"Sabyasachi Ghosh",
"Sanyam Saxena",
"Ajit Rajwade"
] | Popular social media platforms employ neural network based image moderation
engines to classify images uploaded on them as having potentially objectionable
content. Such moderation engines must answer a large number of queries with
heavy computational cost, even though the actual number of images with
objectionable content is usually a tiny fraction. Inspired by recent work on
Neural Group Testing, we propose an approach which exploits this fact to reduce
the overall computational cost of such engines using the technique of
Compressed Sensing (CS). We present the quantitative matrix-pooled neural
network (QMPNN), which takes as input $n$ images, and a $m \times n$ binary
pooling matrix with $m < n$, whose rows indicate $m$ pools of images i.e.
selections of $r$ images out of $n$. The QMPNN efficiently outputs the product
of this matrix with the unknown sparse binary vector indicating whether each
image is objectionable or not, i.e. it outputs the number of objectionable
images in each pool. For suitable matrices, this is decoded using CS decoding
algorithms to predict which images were objectionable. The computational cost
of running the QMPNN and the CS algorithms is significantly lower than the cost
of using a neural network with the same number of parameters separately on each
image to classify the images, which we demonstrate via extensive experiments.
Our technique is inherently resilient to moderate levels of errors in the
prediction from the QMPNN. Furthermore, we present pooled deep outlier
detection, which brings CS and group testing techniques to deep outlier
detection, to provide for the case when the objectionable images do not belong
to a set of pre-defined classes. This technique enables efficient automated
moderation of off-topic images shared on topical forums dedicated to sharing
images of a certain single class, many of which are currently human-moderated. | [
"cs.CV",
"cs.LG"
] | false |
2305.07783 | 2023-05-12T22:05:44Z | ROI-based Deep Image Compression with Swin Transformers | [
"Binglin Li",
"Jie Liang",
"Haisheng Fu",
"Jingning Han"
] | Encoding the Region Of Interest (ROI) with better quality than the background
has many applications including video conferencing systems, video surveillance
and object-oriented vision tasks. In this paper, we propose a ROI-based image
compression framework with Swin transformers as main building blocks for the
autoencoder network. The binary ROI mask is integrated into different layers of
the network to provide spatial information guidance. Based on the ROI mask, we
can control the relative importance of the ROI and non-ROI by modifying the
corresponding Lagrange multiplier $ \lambda $ for different regions.
Experimental results show our model achieves higher ROI PSNR than other methods
and modest average PSNR for human evaluation. When tested on models pre-trained
with original images, it has superior object detection and instance
segmentation performance on the COCO validation dataset. | [
"cs.CV",
"eess.IV"
] | false |
2305.11891 | 2023-05-12T09:54:21Z | THRawS: A Novel Dataset for Thermal Hotspots Detection in Raw Sentinel-2
Data | [
"Gabriele Meoni",
"Roberto Del Prete",
"Federico Serva",
"Alix De Beussche",
"Olivier Colin",
"Nicolas Longépé"
] | Nowadays, most of the datasets leveraging space-borne Earth Observation (EO)
data are based on high-end levels products, which are ortho-rectified,
coregistered, calibrated, and further processed to mitigate the impact of noise
and distortions. Nevertheless, given the growing interest to apply Artificial
Intelligence (AI) onboard satellites for time-critical applications, such as
natural disaster response, providing raw satellite images could be useful to
foster the research on energy-efficient pre-processing algorithms and AI models
for onboard-satellite applications. In this framework, we present THRawS, the
first dataset composed of Sentinel-2 (S-2) raw data containing warm temperature
hotspots (wildfires and volcanic eruptions). To foster the realisation of
robust AI architectures, the dataset gathers data from all over the globe.
Furthermore, we designed a custom methodology to identify events in raw data
starting from the corresponding Level-1C (L1C) products. Indeed, given the
availability of state-of-the-art algorithms for thermal anomaly detection on
the L1C tiles, we detect such events on the latter and then re-project them
onto the corresponding raw images. Additionally, to deal with unprocessed
data, we devise a lightweight coarse coregistration and georeferencing
strategy. The developed dataset comprises more than 100 samples
containing wildfires, volcanic eruptions, and event-free volcanic areas to
enable both warm-events detection and general classification applications.
Finally, we compare performances between the proposed coarse spatial
coregistration technique and the SuperGlue Deep Neural Network method to
highlight the different constraints in terms of timing and quality of spatial
registration to minimise the spatial displacement error for a specific scene. | [
"cs.CV",
"eess.SP"
] | false |
2306.06084 | 2023-05-12T04:43:51Z | Machine Vision Using Cellphone Camera: A Comparison of deep networks for
classifying three challenging denominations of Indian Coins | [
"Keyur D. Joshi",
"Dhruv Shah",
"Varshil Shah",
"Nilay Gandhi",
"Sanket J. Shah",
"Sanket B. Shah"
] | Indian currency coins come in a variety of denominations. Of all the
varieties, Rs.1, Rs.2, and Rs.5 have similar diameters. The majority of coin
styles in market circulation for the Rs.1 and Rs.2 denominations are
nearly the same except for the numerals on their reverse side. If a coin is resting
on its obverse side, the correct denomination is not distinguishable by humans.
Therefore, it was hypothesized that a digital image of a coin resting on
either side could be classified into its correct denomination by training a
deep neural network model. The digital images were generated by using cheap
cell phone cameras. To find the most suitable deep neural network architecture,
four were selected based on the preliminary analysis carried out for
comparison. The results confirm that two of the four deep neural network models
can classify the correct denomination from either side of a coin with an
accuracy of 97%. | [
"cs.CV",
"cs.LG"
] | false |
2305.07404 | 2023-05-12T12:05:11Z | Color Deconvolution applied to Domain Adaptation in HER2
histopathological images | [
"David Anglada-Rotger",
"Ferran Marqués",
"Montse Pardàs"
] | Breast cancer early detection is crucial for improving patient outcomes. The
Institut Catal\`a de la Salut (ICS) has launched the DigiPatICS project to
develop and implement artificial intelligence algorithms to assist with the
diagnosis of cancer. In this paper, we propose a new approach for facing the
color normalization problem in HER2-stained histopathological images of breast
cancer tissue, posed as a style transfer problem. We combine the Color
Deconvolution technique with the Pix2Pix GAN network to present a novel
approach to correct the color variations between different HER2 stain brands.
Our approach focuses on maintaining the HER2 score of the cells in the
transformed images, which is crucial for the HER2 analysis. Results demonstrate
that our final model outperforms the state-of-the-art image style transfer
methods in maintaining the cell classes in the transformed images and is as
effective as them in generating realistic images. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.07429 | 2023-05-12T12:52:14Z | Unlocking the Potential of Medical Imaging with ChatGPT's Intelligent
Diagnostics | [
"Ayyub Alzahem",
"Shahid Latif",
"Wadii Boulila",
"Anis Koubaa"
] | Medical imaging is an essential tool for diagnosing various healthcare
diseases and conditions. However, analyzing medical images is a complex and
time-consuming task that requires expertise and experience. This article aims
to design a decision support system to assist healthcare providers and patients
in making decisions about diagnosing, treating, and managing health conditions.
The proposed architecture contains three stages: 1) data collection and
labeling, 2) model training, and 3) diagnosis report generation. The key idea
is to train a deep learning model on a medical image dataset to extract four
types of information: the type of image scan, the body part, the test image,
and the results. This information is then fed into ChatGPT to generate
automatic diagnostics. The proposed system has the potential to enhance
decision-making, reduce costs, and improve the capabilities of healthcare
providers. The efficacy of the proposed system is analyzed by conducting
extensive experiments on a large medical image dataset. The experimental
outcomes exhibited promising performance for automatic diagnosis through
medical images. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.07495 | 2023-05-12T14:10:36Z | Gallery Sampling for Robust and Fast Face Identification | [
"Myung-cheol Roh",
"Pyoung-gang Lim",
"Jongju Shin"
] | Deep learning methods have achieved brilliant results in face recognition.
One important way to improve performance is to collect and label as many
images as possible. However, labeling identities and checking the quality of
large-scale image data are difficult tasks, and mistakes cannot be avoided
when processing large data. Previous works have tried to deal with this
problem only in the training domain; however, mistakes can cause much more
serious problems if they are in the gallery data of face identification. We
propose gallery data sampling methods that are robust to outliers, including
wrongly labeled, low-quality, and less-informative images, and that reduce
search time. The proposed sampling-by-pruning and sampling-by-generating
methods significantly improved face identification performance on our 5.4M web
image dataset of celebrities. The proposed method achieved 0.0975 in terms of
FNIR at FPIR=0.01, while the conventional method showed 0.3891. The average
number of feature vectors for each individual gallery was reduced from 115.9
to 17.1, enabling much faster search. We also conducted experiments on public
datasets, where our method achieved FNIRs of 0.1314 and 0.0668 at FPIR=0.01 on
CASIA-WebFace and MS1MV2, while the conventional method achieved 0.5446 and
0.1327, respectively. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.07552 | 2023-05-12T15:25:58Z | Dish detection in food platters: A framework for automated diet logging
and nutrition management | [
"Mansi Goel",
"Shashank Dargar",
"Shounak Ghatak",
"Nidhi Verma",
"Pratik Chauhan",
"Anushka Gupta",
"Nikhila Vishnumolakala",
"Hareesh Amuru",
"Ekta Gambhir",
"Ronak Chhajed",
"Meenal Jain",
"Astha Jain",
"Samiksha Garg",
"Nitesh Narwade",
"Nikhilesh Verhwani",
"Abhuday Tiwari",
"Kirti Vashishtha",
"Ganesh Bagler"
] | Diet is central to the epidemic of lifestyle disorders. Accurate and
effortless diet logging is one of the significant bottlenecks for effective
diet management and calorie restriction. Dish detection from food platters is a
challenging problem due to a visually complex food layout. We present an
end-to-end computational framework for diet management, from data compilation,
annotation, and state-of-the-art model identification to its mobile app
implementation. As a case study, we implement the framework in the context of
Indian food platters known for their complex presentation that poses a
challenge for the automated detection of dishes. Starting with the 61 most
popular Indian dishes, we identify the state-of-the-art model through a
comparative analysis of deep-learning-based object detection architectures.
Rooted in a meticulous compilation of 68,005 platter images with 134,814 manual
dish annotations, we first compare ten architectures for multi-label
classification to identify ResNet152 (mAP=84.51%) as the best model. YOLOv8x
(mAP=87.70%) emerged as the best model architecture for dish detection among
the eight deep-learning models implemented after a thorough performance
evaluation. By comparing with the state-of-the-art model for the IndianFood10
dataset, we demonstrate the superior object detection performance of YOLOv8x
for this subset and establish Resnet152 as the best architecture for
multi-label classification. The models thus trained on richly annotated data
can be extended to include dishes from across global cuisines. The proposed
framework is demonstrated through a proof-of-concept mobile application with
diverse applications for diet logging, food recommendation systems, nutritional
interventions, and mitigation of lifestyle disorders. | [
"cs.CV",
"cs.AI",
"cs.CY",
"I.4.9; I.5.4; J.3"
] | false |
2305.07613 | 2023-05-12T17:03:18Z | Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training | [
"Siddarth Asokan",
"Chandra Sekhar Seelamantula"
] | Training Generative adversarial networks (GANs) stably is a challenging task.
The generator in GANs transforms noise vectors, typically Gaussian distributed,
into realistic data such as images. In this paper, we propose a novel approach
for training GANs with images as inputs, but without enforcing any pairwise
constraints. The intuition is that images are more structured than noise, which
the generator can leverage to learn a more robust transformation. The process
can be made efficient by identifying closely related datasets, or a ``friendly
neighborhood'' of the target distribution, inspiring the moniker, Spider GAN.
To define friendly neighborhoods leveraging proximity between datasets, we
propose a new measure called the signed inception distance (SID), inspired by
the polyharmonic kernel. We show that the Spider GAN formulation results in
faster convergence, as the generator can discover correspondence even between
seemingly unrelated datasets, for instance, between Tiny-ImageNet and CelebA
faces. Further, we demonstrate cascading Spider GAN, where the output
distribution from a pre-trained GAN generator is used as the input to the
subsequent network. Effectively, one distribution is transported to another in
a cascaded fashion until the target is learnt -- a new flavor of transfer
learning. We demonstrate the efficacy of the Spider approach on DCGAN,
conditional GAN, PGGAN, StyleGAN2 and StyleGAN3. The proposed approach achieves
state-of-the-art Frechet inception distance (FID) values, with one-fifth of the
training iterations, in comparison to their baseline counterparts on
high-resolution small datasets such as MetFaces, Ukiyo-E Faces and AFHQ-Cats. | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
2305.07625 | 2023-05-12T17:25:19Z | Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn | [
"Ondrej Bohdal",
"Yinbing Tian",
"Yongshuo Zong",
"Ruchika Chavhan",
"Da Li",
"Henry Gouk",
"Li Guo",
"Timothy Hospedales"
] | Meta-learning and other approaches to few-shot learning are widely studied
for image recognition, and are increasingly applied to other vision tasks such
as pose estimation and dense prediction. This naturally raises the question of
whether any few-shot meta-learning algorithm is capable of generalizing
across these diverse task types. To support the community in answering this
question, we introduce Meta Omnium, a dataset-of-datasets spanning multiple
vision tasks including recognition, keypoint localization, semantic
segmentation and regression. We experiment with popular few-shot meta-learning
baselines and analyze their ability to generalize across tasks and to transfer
knowledge between them. Meta Omnium enables meta-learning researchers to
evaluate model generalization to a much wider array of tasks than previously
possible, and provides a single framework for evaluating meta-learners across a
wide suite of vision applications in a consistent manner. | [
"cs.CV",
"cs.LG",
"stat.ML"
] | false |
2305.07642 | 2023-05-12T17:52:36Z | The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023:
Intracranial Meningioma | [
"Dominic LaBella",
"Maruf Adewole",
"Michelle Alonso-Basanta",
"Talissa Altes",
"Syed Muhammad Anwar",
"Ujjwal Baid",
"Timothy Bergquist",
"Radhika Bhalerao",
"Sully Chen",
"Verena Chung",
"Gian-Marco Conte",
"Farouk Dako",
"James Eddy",
"Ivan Ezhov",
"Devon Godfrey",
"Fathi Hilal",
"Ariana Familiar",
"Keyvan Farahani",
"Juan Eugenio Iglesias",
"Zhifan Jiang",
"Elaine Johanson",
"Anahita Fathi Kazerooni",
"Collin Kent",
"John Kirkpatrick",
"Florian Kofler",
"Koen Van Leemput",
"Hongwei Bran Li",
"Xinyang Liu",
"Aria Mahtabfar",
"Shan McBurney-Lin",
"Ryan McLean",
"Zeke Meier",
"Ahmed W Moawad",
"John Mongan",
"Pierre Nedelec",
"Maxence Pajot",
"Marie Piraud",
"Arif Rashid",
"Zachary Reitman",
"Russell Takeshi Shinohara",
"Yury Velichko",
"Chunhao Wang",
"Pranav Warman",
"Walter Wiggins",
"Mariam Aboian",
"Jake Albrecht",
"Udunna Anazodo",
"Spyridon Bakas",
"Adam Flanders",
"Anastasia Janas",
"Goldey Khanna",
"Marius George Linguraru",
"Bjoern Menze",
"Ayman Nada",
"Andreas M Rauschecker",
"Jeff Rudie",
"Nourel Hoda Tahon",
"Javier Villanueva-Meyer",
"Benedikt Wiestler",
"Evan Calabrese"
] | Meningiomas are the most common primary intracranial tumor in adults and can
be associated with significant morbidity and mortality. Radiologists,
neurosurgeons, neuro-oncologists, and radiation oncologists rely on
multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal
treatment monitoring; yet automated, objective, and quantitative tools for
non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS
meningioma 2023 challenge will provide a community standard and benchmark for
state-of-the-art automated intracranial meningioma segmentation models based on
the largest expert annotated multilabel meningioma mpMRI dataset to date.
Challenge competitors will develop automated segmentation models to predict
three distinct meningioma sub-regions on MRI including enhancing tumor,
non-enhancing tumor core, and surrounding nonenhancing T2/FLAIR hyperintensity.
Models will be evaluated on separate validation and held-out test datasets
using standardized metrics utilized across the BraTS 2023 series of challenges
including the Dice similarity coefficient and Hausdorff distance. The models
developed during the course of this challenge will aid in incorporation of
automated meningioma MRI segmentation into clinical practice, which will
ultimately improve care of patients with meningioma. | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
2305.07790 | 2023-05-12T22:49:36Z | Automated Grain Boundary (GB) Segmentation and Microstructural Analysis
in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy | [
"Shoieb Ahmed Chowdhury",
"M. F. N. Taufique",
"Jing Wang",
"Marissa Masden",
"Madison Wenzlick",
"Ram Devanathan",
"Alan L Schemer-Kohrn",
"Keerti S Kappagantula"
] | Austenitic 347H stainless steel offers superior mechanical properties and
corrosion resistance required for extreme operating conditions such as high
temperature. The change in microstructure due to composition and process
variations is expected to impact material properties. Identifying
microstructural features such as grain boundaries thus becomes an important
task in the process-microstructure-properties loop. Applying convolutional
neural network (CNN) based deep-learning models is a powerful technique to
detect features from material micrographs in an automated manner. Manual
labeling of the images for the segmentation task poses a major bottleneck for
generating training data and labels in a reliable and reproducible way within a
reasonable timeframe. In this study, we attempt to overcome such limitations by
utilizing multi-modal microscopy to generate labels directly instead of manual
labeling. We combine scanning electron microscopy (SEM) images of 347H
stainless steel as training data and electron backscatter diffraction (EBSD)
micrographs as pixel-wise labels for grain boundary detection as a semantic
segmentation task. We demonstrate that despite producing instrumentation drift
during data collection between two modes of microscopy, this method performs
comparably to similar segmentation tasks that used manual labeling.
Additionally, we find that na\"ive pixel-wise segmentation results in small
gaps and missing boundaries in the predicted grain boundary map. By
incorporating topological information during model training, the connectivity
of the grain boundary network and segmentation performance is improved.
Finally, our approach is validated by accurate computation on downstream tasks
of predicting the underlying grain morphology distributions which are the
ultimate quantities of interest for microstructural characterization. | [
"cond-mat.mtrl-sci",
"cs.CV",
"eess.IV"
] | false |
2305.10438 | 2023-05-12T05:34:52Z | IMAGINATOR: Pre-Trained Image+Text Joint Embeddings using Word-Level
Grounding of Images | [
"Varuna Krishna",
"S Suryavardan",
"Shreyash Mishra",
"Sathyanarayanan Ramamoorthy",
"Parth Patwa",
"Megha Chakraborty",
"Aman Chadha",
"Amitava Das",
"Amit Sheth"
] | Word embeddings, i.e., semantically meaningful vector representation of
words, are largely influenced by the distributional hypothesis "You shall know
a word by the company it keeps" (Harris, 1954), whereas modern prediction-based
neural network embeddings rely on design choices and hyperparameter
optimization. Word embeddings like Word2Vec, GloVe etc. well capture the
contextuality and real-world analogies but contemporary convolution-based image
embeddings such as VGGNet, AlexNet, etc. do not capture contextual knowledge.
The popular king-queen analogy does not hold true for most commonly used vision
embeddings.
In this paper, we introduce a pre-trained joint embedding (JE), named
IMAGINATOR, trained on 21K distinct image objects from 1M image+text
pairs. JE is a way to encode multimodal data into a vector space where the text
modality serves as the grounding key with which the complementary modality (in
this case, the image) is anchored. IMAGINATOR encapsulates three
individual representations: (i) object-object co-location, (ii) word-object
co-location, and (iii) word-object correlation. These three ways capture
complementary aspects of the two modalities which are further combined to
obtain the final JEs.
Generated JEs are intrinsically evaluated to assess how well they capture the
contextuality and real-world analogies. We also evaluate pre-trained IMAGINATOR
JEs on three downstream tasks: (i) image captioning, (ii) Image2Tweet, and
(iii) text-based image retrieval. IMAGINATOR establishes a new standard on the
aforementioned downstream tasks by outperforming the current SoTA on all the
selected tasks. IMAGINATOR will be made publicly available. The code is
available at https://github.com/varunakk/IMAGINATOR | [
"cs.CL",
"cs.AI",
"cs.CV",
"cs.MM"
] | false |
2305.07280 | 2023-05-12T06:51:05Z | Harvesting Event Schemas from Large Language Models | [
"Jialong Tang",
"Hongyu Lin",
"Zhuoqun Li",
"Yaojie Lu",
"Xianpei Han",
"Le Sun"
] | Event schema provides a conceptual, structural and formal language to
represent events and model the world event knowledge. Unfortunately, it is
challenging to automatically induce high-quality and high-coverage event
schemas due to the open nature of real-world events, the diversity of event
expressions, and the sparsity of event knowledge. In this paper, we propose a
new paradigm for event schema induction -- knowledge harvesting from
large-scale pre-trained language models, which can effectively resolve the
above challenges by discovering, conceptualizing and structuralizing event
schemas from PLMs. And an Event Schema Harvester (ESHer) is designed to
automatically induce high-quality event schemas via in-context generation-based
conceptualization, confidence-aware schema structuralization and graph-based
schema aggregation. Empirical results show that ESHer can induce high-quality
and high-coverage event schemas on varying domains. | [
"cs.CL"
] | false |
2305.07288 | 2023-05-12T07:24:16Z | Open-WikiTable: Dataset for Open Domain Question Answering with Complex
Reasoning over Table | [
"Sunjun Kweon",
"Yeonsu Kwon",
"Seonhee Cho",
"Yohan Jo",
"Edward Choi"
] | Despite recent interest in open domain question answering (ODQA) over tables,
many studies still rely on datasets that are not truly optimal for the task
with respect to utilizing the structural nature of tables. These datasets
assume answers reside in a single cell value and do not necessitate reasoning
over multiple cells, such as aggregation, comparison, and sorting. Thus, we release
Open-WikiTable, the first ODQA dataset that requires complex reasoning over
tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be
applicable in the open-domain setting. As each question is coupled with both
textual answers and SQL queries, Open-WikiTable opens up a wide range of
possibilities for future research, as both reader and parser methods can be
applied. The dataset and code are publicly available. | [
"cs.CL"
] | false |
2305.07289 | 2023-05-12T07:32:00Z | RepCL: Exploring Effective Representation for Continual Text
Classification | [
"Yifan Song",
"Peiyi Wang",
"Dawei Zhu",
"Tianyu Liu",
"Zhifang Sui",
"Sujian Li"
] | Continual learning (CL) aims to constantly learn new knowledge over time
while avoiding catastrophic forgetting on old tasks. In this work, we focus on
continual text classification under the class-incremental setting. Recent CL
studies find that the representations learned in one task may not be effective
for other tasks, namely the representation bias problem. For the first time, we
formally analyze representation bias from an information bottleneck perspective
and suggest that exploiting representations with more class-relevant
information could alleviate the bias. To this end, we propose a novel
replay-based continual text classification method, RepCL. Our approach utilizes
contrastive and generative representation learning objectives to capture more
class-relevant features. In addition, RepCL introduces an adversarial replay
strategy to alleviate the overfitting problem of replay. Experiments
demonstrate that RepCL effectively alleviates forgetting and achieves
state-of-the-art performance on three text classification tasks. | [
"cs.CL"
] | false |
2305.07340 | 2023-05-12T09:37:13Z | MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large
Language Models in Medicine | [
"Jie Xu",
"Lu Lu",
"Sen Yang",
"Bilin Liang",
"Xinwei Peng",
"Jiali Pang",
"Jinru Ding",
"Xiaoming Shi",
"Lingrui Yang",
"Huan Song",
"Kang Li",
"Xin Sun",
"Shaoting Zhang"
] | METHODS: First, a set of evaluation criteria is designed based on a
comprehensive literature review. Second, existing candidate criteria are
optimized using a Delphi method by five experts in medicine and
engineering. Third, three clinical experts design a set of medical datasets to
interact with LLMs. Finally, benchmarking experiments are conducted on the
datasets. The responses generated by chatbots based on LLMs are recorded for
blind evaluations by five licensed medical experts. RESULTS: The obtained
evaluation criteria cover medical professional capabilities, social
comprehensive capabilities, contextual capabilities, and computational
robustness, with sixteen detailed indicators. The medical datasets include
twenty-seven medical dialogues and seven case reports in Chinese. Three
chatbots are evaluated, ChatGPT by OpenAI, ERNIE Bot by Baidu Inc., and Doctor
PuJiang (Dr. PJ) by Shanghai Artificial Intelligence Laboratory. Experimental
results show that Dr. PJ outperforms ChatGPT and ERNIE Bot in both
multiple-turn medical dialogue and case report scenarios. | [
"cs.CL"
] | false |
2305.07475 | 2023-05-12T13:44:40Z | Comprehensive Solution Program Centric Pretraining for Table-and-Text
Hybrid Numerical Reasoning | [
"Qianying Liu",
"Dongsheng Yang",
"Wenjie Zhong",
"Fei Cheng",
"Sadao Kurohashi"
] | Numerical reasoning over table-and-text hybrid passages, such as financial
reports, poses significant challenges and has numerous potential applications.
Noise and irrelevant variables in the model input have been a hindrance to its
performance. Additionally, coarse-grained supervision of the whole solution
program has impeded the model's ability to learn the underlying numerical
reasoning process. In this paper, we propose three pretraining tasks that
operate at both the whole program and sub-program level: Variable Integrity
Ranking, which guides the model to focus on useful variables; Variable Operator
Prediction, which decomposes the supervision into fine-grained single operator
prediction; and Variable Keyphrase Masking, which encourages the model to
identify key evidence that sub-programs are derived from. Experimental results
demonstrate the effectiveness of our proposed methods, surpassing
transformer-based model baselines. | [
"cs.CL"
] | false |
2305.07491 | 2023-05-12T14:05:45Z | A Comprehensive Analysis of Adapter Efficiency | [
"Nandini Mundra",
"Sumanth Doddapaneni",
"Raj Dabre",
"Anoop Kunchukuttan",
"Ratish Puduppully",
"Mitesh M. Khapra"
] | Adapters have been positioned as a parameter-efficient fine-tuning (PEFT)
approach, whereby a minimal number of parameters are added to the model and
fine-tuned. However, adapters have not been sufficiently analyzed to understand
if PEFT translates to benefits in training/deployment efficiency and
maintainability/extensibility. Through extensive experiments on many adapters,
tasks, and languages in supervised and cross-lingual zero-shot settings, we
clearly show that for Natural Language Understanding (NLU) tasks, the parameter
efficiency in adapters does not translate to efficiency gains compared to full
fine-tuning of models. More precisely, adapters are relatively expensive to
train and have slightly higher deployment latency. Furthermore, the
maintainability/extensibility benefits of adapters can be achieved with simpler
approaches like multi-task training via full fine-tuning, which also provide
relatively faster training times. We, therefore, recommend that for moderately
sized models for NLU tasks, practitioners should rely on full fine-tuning or
multi-task training rather than using adapters. Our code is available at
https://github.com/AI4Bharat/adapter-efficiency. | [
"cs.CL"
] | false |
2305.07615 | 2023-05-12T17:08:47Z | What are the Desired Characteristics of Calibration Sets? Identifying
Correlates on Long Form Scientific Summarization | [
"Griffin Adams",
"Bichlien H Nguyen",
"Jake Smith",
"Yingce Xia",
"Shufang Xie",
"Anna Ostropolets",
"Budhaditya Deb",
"Yuan-Jyue Chen",
"Tristan Naumann",
"Noémie Elhadad"
] | Summarization models often generate text that is poorly calibrated to quality
metrics because they are trained to maximize the likelihood of a single
reference (MLE). To address this, recent work has added a calibration step,
which exposes a model to its own ranked outputs to improve relevance or, in a
separate line of work, contrasts positive and negative sets to improve
faithfulness. While effective, much of this work has focused on how to generate
and optimize these sets. Less is known about why one setup is more effective
than another. In this work, we uncover the underlying characteristics of
effective sets. For each training instance, we form a large, diverse pool of
candidates and systematically vary the subsets used for calibration
fine-tuning. Each selection strategy targets distinct aspects of the sets, such
as lexical diversity or the size of the gap between positive and negatives. On
three diverse scientific long-form summarization datasets (spanning biomedical,
clinical, and chemical domains), we find, among others, that faithfulness
calibration is optimal when the negative sets are extractive and more likely to
be generated, whereas for relevance calibration, the metric margin between
candidates should be maximized and surprise--the disagreement between model and
metric defined candidate rankings--minimized. Code to create, select, and
optimize calibration sets is available at
https://github.com/griff4692/calibrating-summaries | [
"cs.CL"
] | true |
2305.07717 | 2023-05-12T18:16:45Z | Parallel Tree Kernel Computation | [
"Souad Taouti",
"Hadda Cherroun",
"Djelloul Ziadi"
] | Tree kernels are fundamental tools that have been leveraged in many
applications, particularly those based on machine learning for Natural Language
Processing tasks. In this paper, we devise a parallel implementation of the
sequential algorithm for the computation of some tree kernels of two finite
sets of trees (Ouali-Sebti, 2015). Our comparison focuses on a sequential
implementation of SubTree kernel computation, which is mainly reduced to
an intersection of weighted tree automata. Our approach relies on the nature of
the data parallelism source inherent in this computation by deploying the
MapReduce paradigm. One of the key benefits of our approach is its versatility
in being adaptable to a wide range of substructure tree kernel-based learning
methods. To evaluate the efficacy of our parallel approach, we conducted a
series of experiments that compared it against the sequential version using a
diverse set of synthetic tree language datasets that were manually crafted for
our analysis. The obtained results clearly demonstrate that the proposed
parallel algorithm outperforms the sequential one in terms of latency. | [
"cs.CL"
] | false |
2305.07266 | 2023-05-12T05:55:34Z | Gaussian Prior Reinforcement Learning for Nested Named Entity
Recognition | [
"Yawen Yang",
"Xuming Hu",
"Fukun Ma",
"Shu'ang Li",
"Aiwei Liu",
"Lijie Wen",
"Philip S. Yu"
] | Named Entity Recognition (NER) is a well and widely studied task in natural
language processing. Recently, nested NER has attracted more attention due to
its practicality and difficulty. Existing works for nested NER ignore the
recognition order and boundary position relations of nested entities. To address
these issues, we propose a novel seq2seq model named GPRL, which formulates the
nested NER task as an entity triplet sequence generation process. GPRL adopts
the reinforcement learning method to generate entity triplets decoupling the
entity order in gold labels and expects to learn a reasonable recognition order
of entities via trial and error. Based on statistics of boundary distance for
nested entities, GPRL designs a Gaussian prior to represent the boundary
distance distribution between nested entities and adjust the output probability
distribution of nested boundary tokens. Experiments on three nested NER
datasets demonstrate that GPRL outperforms previous nested NER models. | [
"cs.CL",
"cs.AI"
] | false |
2305.07310 | 2023-05-12T08:32:18Z | Improving Zero-shot Multilingual Neural Machine Translation by
Leveraging Cross-lingual Consistency Regularization | [
"Pengzhi Gao",
"Liwen Zhang",
"Zhongjun He",
"Hua Wu",
"Haifeng Wang"
] | The multilingual neural machine translation (NMT) model has a promising
capability of zero-shot translation, where it could directly translate between
language pairs unseen during training. For good transfer performance from
supervised directions to zero-shot directions, the multilingual NMT model is
expected to learn universal representations across different languages. This
paper introduces a cross-lingual consistency regularization, CrossConST, to
bridge the representation gap among different languages and boost zero-shot
translation performance. The theoretical analysis shows that CrossConST
implicitly maximizes the probability distribution for zero-shot translation,
and the experimental results on both low-resource and high-resource benchmarks
show that CrossConST consistently improves the translation performance. The
experimental analysis also proves that CrossConST could close the sentence
representation gap and better align the representation space. Given the
universality and simplicity of CrossConST, we believe it can serve as a strong
baseline for future multilingual NMT research. | [
"cs.CL",
"cs.AI"
] | false |
2305.07360 | 2023-05-12T10:20:13Z | Improving the Quality of Neural Machine Translation Through Proper
Translation of Name Entities | [
"Radhika Sharma",
"Pragya Katyayan",
"Nisheeth Joshi"
] | In this paper, we have shown a method of improving the quality of neural
machine translation by translating/transliterating name entities as a
preprocessing step. Through experiments we have shown the performance gain of
our system. For evaluation, we considered three types of name entities, viz.
person names, location names, and organization names. The system was able to
correctly translate almost all the name entities. For person names the accuracy
was 99.86%, for location names 99.63%, and for organization names 99.05%.
Overall, the accuracy of the system was 99.52%. | [
"cs.CL",
"cs.AI"
] | false |
2305.07365 | 2023-05-12T10:29:37Z | Towards Transliteration between Sindhi Scripts from Devanagari to
Perso-Arabic | [
"Shivani Singh Rathore",
"Bharti Nathani",
"Nisheeth Joshi",
"Pragya Katyayan",
"Chander Prakash Dadlani"
] | In this paper, we have shown a script conversion (transliteration) technique
that converts Sindhi text in the Devanagari script to the Perso-Arabic script.
We achieve this with a hybrid approach in which part of the text is converted
using a rule base and, in case an ambiguity arises, a probabilistic model is
used to resolve it. Using this approach, the
system achieved an overall accuracy of 99.64%. | [
"cs.CL",
"cs.AI"
] | false |
2305.07763 | 2023-05-12T21:08:35Z | Knowledge Authoring for Rules and Actions | [
"Yuheng Wang",
"Paul Fodor",
"Michael Kifer"
] | Knowledge representation and reasoning (KRR) systems describe and reason with
complex concepts and relations in the form of facts and rules. Unfortunately,
wide deployment of KRR systems runs into the problem that domain experts have
great difficulty constructing correct logical representations of their domain
knowledge. Knowledge engineers can help with this construction process, but
there is a deficit of such specialists. The earlier Knowledge Authoring Logic
Machine (KALM) based on Controlled Natural Language (CNL) was shown to have
very high accuracy for authoring facts and questions. More recently, KALMFL, a
successor of KALM, replaced CNL with factual English, which is much less
restrictive and requires very little training from users. However, KALMFL has
limitations in representing certain types of knowledge, such as authoring rules
for multi-step reasoning or understanding actions with timestamps. To address
these limitations, we propose KALMRA to enable authoring of rules and actions.
Our evaluation using the UTI guidelines benchmark shows that KALMRA achieves a
high level of correctness (100%) on rule authoring. When used for authoring and
reasoning with actions, KALMRA achieves more than 99.3% correctness on the bAbI
benchmark, demonstrating its effectiveness in more sophisticated KRR jobs.
Finally, we illustrate the logical reasoning capabilities of KALMRA by drawing
attention to the problems faced by the recently made famous AI, ChatGPT. | [
"cs.CL",
"cs.AI"
] | false |
2305.07341 | 2023-05-12T09:38:11Z | Model-based Programming: Redefining the Atomic Unit of Programming for
the Deep Learning Era | [
"Meng Zheng"
] | This paper introduces and explores a new programming paradigm, Model-based
Programming, designed to address the challenges inherent in applying deep
learning models to real-world applications. Despite recent significant
successes of deep learning models across a range of tasks, their deployment in
real business scenarios remains fraught with difficulties, such as complex
model training, large computational resource requirements, and integration
issues with existing programming languages. To ameliorate these challenges, we
propose the concept of 'Model-based Programming' and present a novel
programming language - M Language, tailored to a prospective model-centered
programming paradigm. M Language treats models as basic computational units,
enabling developers to concentrate more on crucial tasks such as model loading,
fine-tuning, evaluation, and deployment, thereby enhancing the efficiency of
creating deep learning applications. We posit that this innovative programming
paradigm will stimulate the extensive application and advancement of deep
learning technology and provide a robust foundation for a model-driven future. | [
"cs.LG",
"cs.CL",
"cs.SE"
] | false |
2305.07374 | 2023-05-12T10:52:13Z | Implications of Deep Circuits in Improving Quality of Quantum Question
Answering | [
"Pragya Katyayan",
"Nisheeth Joshi"
] | Question Answering (QA) has proved to be an arduous challenge in the area of
natural language processing (NLP) and artificial intelligence (AI). Many
attempts have been made to develop complete solutions for QA as well as
improving significant sub-modules of the QA systems to improve the overall
performance through the course of time. Questions are the most important piece
of QA, because knowing the question is equivalent to knowing what counts as an
answer (Harrah in Philos Sci, 1961 [1]). In this work, we have attempted to
understand questions in a better way by using Quantum Machine Learning (QML).
The properties of Quantum Computing (QC) enable data processing that is
classically intractable. So, in this paper, we perform question classification
on questions from two classes of SelQA (Selection-based Question Answering)
dataset using quantum-based classifier algorithms: quantum support vector
machine (QSVM) and variational quantum classifier (VQC) from Qiskit (Quantum
Information Science toolKIT) for Python. We perform classification with both
classifiers in almost similar environments and study the effects of circuit
depths while comparing the results of both classifiers. We also use these
classification results with our own rule-based QA system and observe
significant performance improvement. Hence, this experiment has helped in
improving the quality of QA in general. | [
"cs.CL",
"cs.AI",
"quant-ph"
] | false |
2305.07378 | 2023-05-12T11:09:49Z | Surfacing Biases in Large Language Models using Contrastive Input
Decoding | [
"Gal Yona",
"Or Honovich",
"Itay Laish",
"Roee Aharoni"
] | Ensuring that large language models (LMs) are fair, robust and useful
requires an understanding of how different modifications to their inputs impact
the model's behaviour. In the context of open-text generation tasks, however,
such an evaluation is not trivial. For example, when presenting a model with
an input text and a perturbed, "contrastive" version of it, meaningful
differences in the next-token predictions may not be revealed with standard
decoding strategies. With this motivation in mind, we propose Contrastive Input
Decoding (CID): a decoding algorithm to generate text given two inputs, where
the generated text is likely given one input but unlikely given the other. In
this way, the contrastive generations can highlight potentially subtle
differences in how the LM output differs for the two inputs in a simple and
interpretable manner. We use CID to highlight context-specific biases that are
hard to detect with standard decoding strategies and quantify the effect of
different input perturbations. | [
"cs.CL",
"cs.CY",
"cs.LG"
] | true |
2305.07389 | 2023-05-12T11:29:13Z | Investigating the Sensitivity of Automatic Speech Recognition Systems to
Phonetic Variation in L2 Englishes | [
"Emma O'Neill",
"Julie Carson-Berndsen"
] | Automatic Speech Recognition (ASR) systems exhibit the best performance on
speech that is similar to that on which they were trained. As such,
underrepresented varieties, including regional dialects, minority speakers, and
low-resource languages, see much higher word error rates (WERs) than varieties
regarded as 'prestigious', 'mainstream', or 'standard'. This can act as a
barrier to incorporating ASR technology into the annotation process for
large-scale linguistic research since the manual correction of the erroneous
automated transcripts can be just as time- and resource-consuming as manual
transcriptions. A deeper understanding of the behaviour of an ASR system is
thus beneficial from a speech technology standpoint, in terms of improving ASR
accuracy, and from an annotation standpoint, where knowing the likely errors
made by an ASR system can aid in this manual correction. This work demonstrates
a method of probing an ASR system to discover how it handles phonetic variation
across a number of L2 Englishes. Specifically, it shows how particular phonetic
realisations that were rare or absent in the system's training data can lead
to phoneme-level misrecognitions and contribute to higher WERs. It is
demonstrated that the behaviour of the ASR is systematic and consistent across
speakers with similar spoken varieties (in this case the same L1) and phoneme
substitution errors are typically in agreement with human annotators. By
identifying problematic productions, specific weaknesses can be addressed by
sourcing such realisations for training and fine-tuning, thus making the system
more robust to pronunciation variation. | [
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.07406 | 2023-05-12T12:13:27Z | Two-in-One: A Model Hijacking Attack Against Text Generation Models | [
"Wai Man Si",
"Michael Backes",
"Yang Zhang",
"Ahmed Salem"
] | Machine learning has progressed significantly in various applications ranging
from face recognition to text generation. However, its success has been
accompanied by different attacks. Recently, a new attack has been proposed that
raises both accountability and parasitic computing risks, namely the model
hijacking attack. Nevertheless, this attack has only focused on image
classification tasks. In this work, we broaden the scope of this attack to
include text generation and classification models, hence showing its broader
applicability. More concretely, we propose a new model hijacking attack, Ditto,
that can hijack different text classification tasks into multiple generation
ones, e.g., language translation, text summarization, and language modeling. We
use a range of text benchmark datasets such as SST-2, TweetEval, AGnews, QNLI,
and IMDB to evaluate the performance of our attacks. Our results show that by
using Ditto, an adversary can successfully hijack text generation models
without jeopardizing their utility. | [
"cs.CR",
"cs.CL",
"cs.LG"
] | false |
2305.07455 | 2023-05-12T13:07:51Z | Improving Cascaded Unsupervised Speech Translation with Denoising
Back-translation | [
"Yu-Kuan Fu",
"Liang-Hsuan Tseng",
"Jiatong Shi",
"Chen-An Li",
"Tsu-Yuan Hsu",
"Shinji Watanabe",
"Hung-yi Lee"
] | Most of the speech translation models heavily rely on parallel data, which is
hard to collect especially for low-resource languages. To tackle this issue, we
propose to build a cascaded speech translation system without leveraging any
kind of paired data. We use fully unpaired data to train our unsupervised
systems and evaluate our results on CoVoST 2 and CVSS. The results show that
our work is comparable with some other early supervised methods in some
language pairs. Since cascaded systems typically suffer from severe error
propagation, we propose denoising back-translation (DBT), a novel
approach to building robust unsupervised neural machine translation (UNMT). DBT
successfully increases the BLEU score by 0.7--0.9 in all three translation
directions. Moreover, we simplified the pipeline of our cascaded system to
reduce inference latency and conducted a comprehensive analysis of every part
of our work. We also demonstrate our unsupervised speech translation results on
the established website. | [
"cs.CL",
"cs.SD",
"eess.AS"
] | false |
2305.07565 | 2023-05-12T15:46:36Z | A Memory Model for Question Answering from Streaming Data Supported by
Rehearsal and Anticipation of Coreference Information | [
"Vladimir Araujo",
"Alvaro Soto",
"Marie-Francine Moens"
] | Existing question answering methods often assume that the input content
(e.g., documents or videos) is always accessible to solve the task.
Alternatively, memory networks were introduced to mimic the human process of
incremental comprehension and compression of the information in a
fixed-capacity memory. However, these models only learn how to maintain memory
by backpropagating errors in the answers through the entire network. Instead,
it has been suggested that humans have effective mechanisms to boost their
memorization capacities, such as rehearsal and anticipation. Drawing
inspiration from these, we propose a memory model that performs rehearsal and
anticipation while processing inputs to memorize important information for
solving question answering tasks from streaming data. The proposed mechanisms
are applied in a self-supervised manner during training through masked modeling tasks
focused on coreference information. We validate our model on a short-sequence
(bAbI) dataset as well as large-sequence textual (NarrativeQA) and video
(ActivityNet-QA) question answering datasets, where it achieves substantial
improvements over previous memory network approaches. Furthermore, our ablation
study confirms the proposed mechanisms' importance for memory models. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.07709 | 2023-05-12T18:07:00Z | Using Language Models to Detect Alarming Student Responses | [
"Christopher M. Ormerod",
"Milan Patel",
"Harry Wang"
] | This article details the advances made to a system that uses artificial
intelligence to identify alarming student responses. This system is built into
our assessment platform to assess whether a student's response indicates they
are a threat to themselves or others. Such responses may include details
concerning threats of violence, severe depression, suicide risks, and
descriptions of abuse. Driven by advances in natural language processing, the
latest model is a fine-tuned language model trained on a large corpus
consisting of student responses and supplementary texts. We demonstrate that
the use of a language model delivers a substantial improvement in accuracy over
the previous iterations of this system. | [
"cs.CL",
"cs.IR",
"cs.LG"
] | false |
2305.18304 | 2023-05-12T09:19:30Z | Semantic-aware Digital Twin for Metaverse: A Comprehensive Review | [
"Senthil Kumar Jagatheesaperumal",
"Zhaohui Yang",
"Qianqian Yang",
"Chongwen Huang",
"Wei Xu",
"Mohammad Shikh-Bahaei",
"Zhaoyang Zhang"
] | To facilitate the deployment of digital twins in Metaverse, the paradigm with
semantic awareness has been proposed as a means for enabling accurate and
task-oriented information extraction with inherent intelligence. However, this
framework requires all devices in the Metaverse environment to be directly
linked with the semantic model to enable faithful interpretation of messages.
In contrast, this article introduces the digital twin framework, considering a
smart industrial application, which enables semantic communication in
conjunction with Metaverse-enabling technologies. The fundamentals of this
framework are demonstrated on an industrial shopfloor management use case with
a digital twin so as to improve its performance through semantic communication.
An overview of semantic communication, the Metaverse, and digital twins is
presented. The integration of these technologies with the basic architecture,
as well as their impact on future industrial applications, is then discussed. In a
nutshell, this article showcases how semantic awareness can be an effective
candidate in the implementation of digital twins for Metaverse applications. | [
"cs.CY",
"cs.CL",
"cs.IR",
"cs.MM",
"A.1; H.5; I.6; J.7; F.4"
] | false |
2305.07213 | 2023-05-12T03:01:41Z | Rethinking k-means from manifold learning perspective | [
"Quanxue Gao",
"Qianqian Wang",
"Han Lu",
"Wei Xia",
"Xinbo Gao"
] | Although numerous clustering algorithms have been developed, many existing
methods still leverage k-means technique to detect clusters of data points.
However, the performance of k-means heavily depends on the estimation of
cluster centers, for which it is very difficult to achieve an optimal solution.
Another major drawback is that it is sensitive to noise and outlier data. In
this paper, from manifold learning perspective, we rethink k-means and present
a new clustering algorithm which directly detects clusters of data without mean
estimation. Specifically, we construct a distance matrix between data points
using a Butterworth filter such that the distance between any two data points
in the same cluster equals a small constant, while increasing the distance
between data pairs from different clusters. To fully exploit the complementary
information embedded in different views, we leverage the tensor Schatten p-norm
regularization on the 3rd-order tensor which consists of indicator matrices of
different views. Finally, an efficient alternating algorithm is derived to
optimize our model. The constructed sequence is proved to converge to a
stationary KKT point. Extensive experimental results indicate the superiority
of our proposed method. | [
"cs.LG"
] | false |
2305.07320 | 2023-05-12T08:49:17Z | ActUp: Analyzing and Consolidating tSNE and UMAP | [
"Andrew Draganov",
"Jakob Rødsgaard Jørgensen",
"Katrine Scheel Nellemann",
"Davide Mottin",
"Ira Assent",
"Tyrus Berry",
"Cigdem Aslay"
] | tSNE and UMAP are popular dimensionality reduction algorithms due to their
speed and interpretable low-dimensional embeddings. Despite their popularity,
however, little work has been done to study their full span of differences. We
theoretically and experimentally evaluate the space of parameters in both tSNE
and UMAP and observe that a single one -- the normalization -- is responsible
for switching between them. This, in turn, implies that a majority of the
algorithmic differences can be toggled without affecting the embeddings. We
discuss the implications this has on several theoretic claims behind UMAP, as
well as how to reconcile them with existing tSNE interpretations.
Based on our analysis, we provide a method (GiDR-DUN) that combines
previously incompatible techniques from tSNE and UMAP and can replicate the
results of either algorithm. This allows our method to incorporate further
improvements, such as an acceleration that obtains either method's outputs
faster than UMAP. We release improved versions of tSNE, UMAP, and GiDR-DUN
that are fully plug-and-play with the traditional libraries at
https://github.com/Andrew-Draganov/GiDR-DUN | [
"cs.LG"
] | false |
2305.07386 | 2023-05-12T11:27:20Z | One-step Bipartite Graph Cut: A Normalized Formulation and Its
Application to Scalable Subspace Clustering | [
"Si-Guo Fang",
"Dong Huang",
"Chang-Dong Wang",
"Jian-Huang Lai"
] | The bipartite graph structure has shown its promising ability in facilitating
the subspace clustering and spectral clustering algorithms for large-scale
datasets. To avoid the post-processing via k-means during the bipartite graph
partitioning, the constrained Laplacian rank (CLR) is often utilized for
constraining the number of connected components (i.e., clusters) in the
bipartite graph, which, however, neglects the distribution (or normalization)
of these connected components and may lead to imbalanced or even ill-formed clusters.
Despite the significant success of normalized cut (Ncut) in general graphs, it
surprisingly remains an open problem how to enforce a one-step normalized cut
for bipartite graphs, especially with linear-time complexity. In this paper, we
first characterize a novel one-step bipartite graph cut (OBCut) criterion with
normalized constraints, and theoretically prove its equivalence to a trace
maximization problem. Then we extend this cut criterion to a scalable subspace
clustering approach, where adaptive anchor learning, bipartite graph learning,
and one-step normalized bipartite graph partitioning are simultaneously modeled
in a unified objective function, and an alternating optimization algorithm is
further designed to solve it in linear time. Experiments on a variety of
general and large-scale datasets demonstrate the effectiveness and scalability
of our approach. | [
"cs.LG"
] | false |
2305.07521 | 2023-05-12T14:35:42Z | AGFormer: Efficient Graph Representation with Anchor-Graph Transformer | [
"Bo Jiang",
"Fei Xu",
"Ziyan Zhang",
"Jin Tang",
"Feiping Nie"
] | To alleviate the limited receptive field of GCNs, Transformers have been
exploited to capture the long-range dependencies of nodes for graph data
representation and learning. However, existing graph Transformers generally
employ a regular self-attention module for all node-to-node message passing,
which needs to learn the affinities/relationships between all pairs of nodes,
leading to a high computational cost. Also, they are usually sensitive to
graph noise. To overcome these issues, we propose a novel graph Transformer
architecture, termed Anchor Graph Transformer (AGFormer), by leveraging an
anchor graph model. To be specific, AGFormer first obtains some representative
anchors and then converts node-to-node message passing into anchor-to-anchor
and anchor-to-node message passing. Thus, AGFormer performs much more
efficiently and robustly than regular node-to-node Transformers. Extensive
experiments on several benchmark datasets demonstrate the effectiveness and
benefits of the proposed AGFormer. | [
"cs.LG"
] | false |
2305.07624 | 2023-05-12T17:24:02Z | Agile gesture recognition for capacitive sensing devices: adapting
on-the-job | [
"Ying Liu",
"Liucheng Guo",
"Valeri A. Makarov",
"Yuxiang Huang",
"Alexander Gorban",
"Evgeny Mirkes",
"Ivan Y. Tyukin"
] | Automated hand gesture recognition has been a focus of the AI community for
decades. Traditionally, work in this domain revolved largely around scenarios
assuming the availability of the flow of images of the user hands. This has
partly been due to the prevalence of camera-based devices and the wide
availability of image data. However, there is growing demand for gesture
recognition technology that can be implemented on low-power devices using
limited sensor data instead of high-dimensional inputs like hand images. In
this work, we demonstrate a hand gesture recognition system and method that
uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five
fingers. We use a machine learning technique to analyse the time series signals
and identify three features that can represent 5 fingers within 500 ms. The
analysis is composed of a two-stage training strategy, including dimension
reduction through principal component analysis and classification with
k-nearest neighbour. Remarkably, we found that this combination showed a level of
performance which was comparable to more advanced methods such as supervised
variational autoencoder. The base system can also be equipped with the
capability to learn from occasional errors by providing it with an additional
adaptive error correction mechanism. The results showed that the error
corrector improves the classification performance in the base system without
compromising its performance. The system requires no more than 1 ms of
computing time per input sample, and is smaller than deep neural networks,
demonstrating the feasibility of agile gesture recognition systems based on
this technology. | [
"cs.LG"
] | false |
2305.07741 | 2023-05-12T19:52:11Z | To transfer or not transfer: Unified transferability metric and analysis | [
"Qianshan Zhan",
"Xiao-Jun Zeng"
] | In transfer learning, transferability is one of the most fundamental
problems; it aims to evaluate the effectiveness of arbitrary transfer tasks.
Existing research focuses on classification tasks and neglects domain or task
differences. More importantly, there is a lack of research to determine whether
to transfer or not. To address these, we propose a new analytical approach and
metric, Wasserstein Distance based Joint Estimation (WDJE), for transferability
estimation and determination in a unified setting: classification and
regression problems with domain and task differences. The WDJE facilitates
decision-making on whether to transfer or not by comparing the target risk with
and without transfer. To enable the comparison, we approximate the target
transfer risk by proposing a non-symmetric, easy-to-understand and
easy-to-calculate target risk bound that is workable even with limited target
labels. The proposed bound relates the target risk to source model performance,
domain and task differences based on Wasserstein distance. We also extend our
bound into unsupervised settings and establish the generalization bound from
finite empirical samples. Our experiments in image classification and remaining
useful life regression prediction illustrate the effectiveness of the WDJE in
determining whether to transfer or not, and the proposed bound in approximating
the target transfer risk. | [
"cs.LG"
] | false |
2305.07778 | 2023-05-12T21:49:51Z | Accelerator-Aware Training for Transducer-Based Speech Recognition | [
"Suhaila M. Shakiah",
"Rupak Vignesh Swaminathan",
"Hieu Duy Nguyen",
"Raviteja Chinta",
"Tariq Afzal",
"Nathan Susanj",
"Athanasios Mouchtaris",
"Grant P. Strimel",
"Ariya Rastrow"
] | Machine learning model weights and activations are represented in
full precision during training. This leads to performance degradation at
runtime when models are deployed on neural network accelerator (NNA) chips,
which leverage highly parallelized fixed-point arithmetic to improve runtime
memory and latency. In this work, we replicate the NNA operators during the training
phase, accounting for the degradation due to low-precision inference on the NNA
in back-propagation. Our proposed method efficiently emulates NNA operations,
thus foregoing the need to transfer quantization error-prone data to the
Central Processing Unit (CPU), ultimately reducing the user perceived latency
(UPL). We apply our approach to Recurrent Neural Network-Transducer (RNN-T), an
attractive architecture for on-device streaming speech recognition tasks. We
train and evaluate models on 270K hours of English data and show a 5-7%
improvement in engine latency while saving up to 10% relative degradation in
WER. | [
"cs.LG"
] | false |