arxiv_id (string) | published (string) | titles (string) | authors (sequence) | abstract (string) | categories (sequence) | selected (bool)
---|---|---|---|---|---|---
2402.13304 | 2024-02-20T15:01:11Z | Harmful algal bloom forecasting. A comparison between stream and batch
learning | [
"Andres Molares-Ulloa",
"Elisabet Rocruz",
"Daniel Rivero",
"Xosé A. Padin",
"Rita Nolasco",
"Jesús Dubert",
"Enrique Fernandez-Blanco"
] | Diarrhetic Shellfish Poisoning (DSP) is a global health threat arising from
shellfish contaminated with toxins produced by dinoflagellates. The condition,
with its widespread incidence, high morbidity rate, and persistent shellfish
toxicity, poses risks to public health and the shellfish industry. High-biomass
events of toxin-producing algae, such as those causing DSP, are known as Harmful Algal Blooms (HABs).
Monitoring and forecasting systems are crucial for mitigating HABs impact.
Predicting harmful algal blooms involves a time-series-based problem with a
strong historical seasonal component; however, recent anomalies due to changes
in meteorological and oceanographic events have been observed. Stream Learning
stands out as one of the most promising approaches for addressing
time-series-based problems with concept drifts. However, its efficacy in
predicting HABs remains unproven and needs to be tested in comparison with
Batch Learning. Historical data availability is a critical point in developing
predictive systems. In oceanography, the available data collection can have
some constraints and limitations, which has led to exploring new tools to obtain
more exhaustive time series. In this study, a machine learning workflow for
predicting the number of cells of a toxic dinoflagellate, Dinophysis acuminata,
was developed with several key advancements. Seven machine learning algorithms
were compared within two learning paradigms. Notably, the output data from
CROCO, the ocean hydrodynamic model, was employed as the primary dataset,
mitigating the scarcity of time-continuous historical data. This study
highlights the value of model interpretability, a fair model-comparison
methodology, and the incorporation of Stream Learning models. The DoME model,
with an average $R^2$ of 0.77 for the 3-day-ahead prediction, emerged as the most
effective and interpretable predictor, outperforming the other algorithms. | [
"cs.LG",
"cs.AI"
] | false |
2402.13321 | 2024-02-20T19:00:59Z | Rigor with Machine Learning from Field Theory to the Poincaré
Conjecture | [
"Sergei Gukov",
"James Halverson",
"Fabian Ruehle"
] | Machine learning techniques are increasingly powerful, leading to many
breakthroughs in the natural sciences, but they are often stochastic,
error-prone, and black-box. How, then, should they be utilized in fields such as
theoretical physics and pure mathematics that place a premium on rigor and
understanding? In this Perspective we discuss techniques for obtaining rigor in
the natural sciences with machine learning. Non-rigorous methods may lead to
rigorous results via conjecture generation or verification by reinforcement
learning. We survey applications of these techniques-for-rigor ranging from
string theory to the smooth $4$d Poincar\'e conjecture in low-dimensional
topology. One can also imagine building direct bridges between machine learning
theory and either mathematics or theoretical physics. As examples, we describe
a new approach to field theory motivated by neural network theory, and a theory
of Riemannian metric flows induced by neural network gradient descent, which
encompasses Perelman's formulation of the Ricci flow that was utilized to
resolve the $3$d Poincar\'e conjecture. | [
"hep-th",
"cs.LG"
] | false |
2402.13338 | 2024-02-20T19:30:55Z | Incentivized Exploration via Filtered Posterior Sampling | [
"Anand Kalvit",
"Aleksandrs Slivkins",
"Yonatan Gur"
] | We study "incentivized exploration" (IE) in social learning problems where
the principal (a recommendation algorithm) can leverage information asymmetry
to incentivize sequentially-arriving agents to take exploratory actions. We
identify posterior sampling, an algorithmic approach that is well known in the
multi-armed bandits literature, as a general-purpose solution for IE. In
particular, we expand the existing scope of IE in several practically-relevant
dimensions, from private agent types to informative recommendations to
correlated Bayesian priors. We obtain a general analysis of posterior sampling
in IE which allows us to subsume these extended settings as corollaries, while
also recovering existing results as special cases. | [
"cs.LG",
"econ.TH"
] | false |
2402.13366 | 2024-02-20T20:44:40Z | Statistical curriculum learning: An elimination algorithm achieving an
oracle risk | [
"Omer Cohen",
"Ron Meir",
"Nir Weinberger"
] | We consider a statistical version of curriculum learning (CL) in a parametric
prediction setting. The learner is required to estimate a target parameter
vector, and can adaptively collect samples from either the target model, or
other source models that are similar to the target model, but less noisy. We
consider three types of learners, depending on the level of side-information
they receive. The first two, referred to as strong/weak-oracle learners,
receive high/low degrees of information about the models, and use these to
learn. The third, a fully adaptive learner, estimates the target parameter
vector without any prior information. In the single source case, we propose an
elimination learning method, whose risk matches that of a strong-oracle
learner. In the multiple source case, we advocate that the risk of the
weak-oracle learner is a realistic benchmark for the risk of adaptive learners.
We develop an adaptive multiple elimination-rounds CL algorithm, and
characterize instance-dependent conditions for its risk to match that of the
weak-oracle learner. We consider instance-dependent minimax lower bounds, and
discuss the challenges associated with defining the class of instances for the
bound. We derive two minimax lower bounds, and determine the conditions under
which the performance of the weak-oracle learner is minimax optimal. | [
"cs.LG",
"stat.ML"
] | false |
2402.13379 | 2024-02-20T21:09:04Z | Referee-Meta-Learning for Fast Adaptation of Locational Fairness | [
"Weiye Chen",
"Yiqun Xie",
"Xiaowei Jia",
"Erhu He",
"Han Bao",
"Bang An",
"Xun Zhou"
] | When dealing with data from distinct locations, machine learning algorithms
tend to demonstrate an implicit preference for some locations over others,
which constitutes a bias that undermines the spatial fairness of the algorithm.
This unfairness can easily introduce biases in subsequent decision-making given
broad adoptions of learning-based solutions in practice. However, locational
biases in AI are largely understudied. To mitigate biases over locations, we
propose a locational meta-referee (Meta-Ref) to oversee the few-shot
meta-training and meta-testing of a deep neural network. Meta-Ref dynamically
adjusts the learning rates for training samples of given locations to promote
fair performance across locations, through an explicit consideration of
locational biases and the characteristics of input data. We present a
three-phase training framework to learn both a meta-learning-based predictor
and an integrated Meta-Ref that governs the fairness of the model. Once trained
with a distribution of spatial tasks, Meta-Ref is applied to samples from new
spatial tasks (i.e., regions outside the training area) to promote fairness
during the fine-tuning step. We carried out experiments with two case studies on
crop monitoring and transportation safety, which show Meta-Ref can improve
locational fairness while keeping the overall prediction quality at a similar
level. | [
"cs.LG",
"cs.CY"
] | false |
2402.13393 | 2024-02-20T21:49:36Z | Fairness Risks for Group-conditionally Missing Demographics | [
"Kaiqi Jiang",
"Wenzhe Fan",
"Mao Li",
"Xinhua Zhang"
] | Fairness-aware classification models have gained increasing attention in
recent years as concerns grow over discrimination against some demographic
groups. Most existing models require full knowledge of the sensitive features,
which can be impractical due to privacy, legal issues, and an individual's fear
of discrimination. The key challenge we address is that this unavailability can
be group-dependent; e.g., people in some age ranges may be more reluctant to
reveal their age. Our solution augments general fairness risks with
probabilistic imputations of the sensitive features, while jointly learning the
group-conditionally missing probabilities in a variational auto-encoder. Our
model is demonstrated effective on both image and tabular datasets, achieving
an improved balance between accuracy and fairness. | [
"cs.LG",
"cs.CY"
] | false |
2402.13400 | 2024-02-20T21:59:41Z | The Dimension of Self-Directed Learning | [
"Pramith Devulapalli",
"Steve Hanneke"
] | Understanding the self-directed learning complexity has been an important
problem that has captured the attention of the online learning theory community
since the early 1990s. Within this framework, the learner is allowed to
adaptively choose its next data point in making predictions unlike the setting
in adversarial online learning.
In this paper, we study the self-directed learning complexity in both the
binary and multi-class settings, and we develop a dimension, namely $SDdim$,
that exactly characterizes the self-directed learning mistake-bound for any
concept class. The intuition behind $SDdim$ can be understood as a two-player
game called the "labelling game". Armed with this two-player game, we calculate
$SDdim$ on a whole host of examples with notable results on axis-aligned
rectangles, VC dimension $1$ classes, and linear separators. We demonstrate
several learnability gaps with a central focus on self-directed learning and
offline sequence learning models that include either the best or worst
ordering. Finally, we extend our analysis to the self-directed binary agnostic
setting where we derive upper and lower bounds. | [
"stat.ML",
"cs.LG"
] | false |
2402.13410 | 2024-02-20T22:34:53Z | Bayesian Neural Networks with Domain Knowledge Priors | [
"Dylan Sam",
"Rattana Pukdee",
"Daniel P. Jeong",
"Yewon Byun",
"J. Zico Kolter"
] | Bayesian neural networks (BNNs) have recently gained popularity due to their
ability to quantify model uncertainty. However, specifying a prior for BNNs
that captures relevant domain knowledge is often extremely challenging. In this
work, we propose a framework for integrating general forms of domain knowledge
(i.e., any knowledge that can be represented by a loss function) into a BNN
prior through variational inference, while enabling computationally efficient
posterior inference and sampling. Specifically, our approach results in a prior
over neural network weights that assigns high probability mass to models that
better align with our domain knowledge, leading to posterior samples that also
exhibit this behavior. We show that BNNs using our proposed domain knowledge
priors outperform those with standard priors (e.g., isotropic Gaussian,
Gaussian process), successfully incorporating diverse types of prior
information such as fairness, physics rules, and healthcare knowledge and
achieving better predictive performance. We also present techniques for
transferring the learned priors across different model architectures,
demonstrating their broad utility across various settings. | [
"cs.LG",
"stat.ML"
] | false |
2402.13418 | 2024-02-20T23:06:21Z | EvolMPNN: Predicting Mutational Effect on Homologous Proteins by
Evolution Encoding | [
"Zhiqiang Zhong",
"Davide Mottin"
] | Predicting protein properties is paramount for biological and medical
advancements. Current protein engineering mutates a typical protein, called
the wild-type, to construct a family of homologous proteins and study their
properties. Yet, existing methods easily neglect subtle mutations, failing to
capture the effect on the protein properties. To this end, we propose EvolMPNN,
Evolution-aware Message Passing Neural Network, to learn evolution-aware
protein embeddings. EvolMPNN samples sets of anchor proteins, computes
evolutionary information by means of residues and employs a differentiable
evolution-aware aggregation scheme over these sampled anchors. This way
EvolMPNN can capture the mutation effect on proteins with respect to the anchor
proteins. Afterwards, the aggregated evolution-aware embeddings are integrated
with sequence embeddings to generate final comprehensive protein embeddings.
Our model performs up to 6.4% better than state-of-the-art methods and attains a 36x
inference speedup in comparison with large pre-trained models. | [
"cs.LG",
"q-bio.BM"
] | false |
2402.13421 | 2024-02-20T23:20:36Z | Context-Aware Quantitative Risk Assessment Machine Learning Model for
Drivers Distraction | [
"Adebamigbe Fasanmade",
"Ali H. Al-Bayatti",
"Jarrad Neil Morden",
"Fabio Caraffini"
] | Risk mitigation techniques are critical to avoiding accidents associated with
driving behaviour. We provide a novel Multi-Class Driver Distraction Risk
Assessment (MDDRA) model that considers the vehicle, driver, and environmental
data during a journey. MDDRA categorises the driver on a risk matrix as safe,
careless, or dangerous. It offers flexibility in adjusting the parameters and
weights to consider each event on a specific severity level. We collect
real-world data using the Field Operation Test (TeleFOT), covering drivers
using the same routes in the East Midlands, United Kingdom (UK). The results
show that reducing road accidents caused by driver distraction is possible. We
also study the correlation between distraction (driver, vehicle, and
environment) and the classification severity based on a continuous distraction
severity score. Furthermore, we apply machine learning techniques to classify
and predict driver distraction according to severity levels to aid the
transition of control from the driver to the vehicle (vehicle takeover) when a
situation is deemed risky. The Ensemble Bagged Trees algorithm performed best,
with an accuracy of 96.2%. | [
"cs.LG",
"cs.CY"
] | false |
2402.15526 | 2024-02-20T08:03:05Z | Chain-of-Specificity: An Iteratively Refining Method for Eliciting
Knowledge from Large Language Models | [
"Kaiwen Wei",
"Jingyuan Zhang",
"Hongzhi Zhang",
"Fuzheng Zhang",
"Di Zhang",
"Li Jin",
"Yue Yu"
] | Large Language Models (LLMs) exhibit remarkable generative capabilities,
enabling the generation of valuable information. Despite these advancements,
previous research found that LLMs sometimes struggle with adhering to specific
constraints (e.g., in a specific place or at a specific time), at times even
overlooking them, which leads to responses that are either too generic or not
fully satisfactory. Existing approaches attempted to address this issue by
decomposing or rewriting input instructions, yet they fall short in adequately
emphasizing specific constraints and in unlocking the underlying knowledge
(e.g., programming within the context of software development). In response,
this paper proposes a simple yet effective method named Chain-of-Specificity
(CoS). Specifically, CoS iteratively emphasizes the specific constraints in the
input instructions, unlocks knowledge within LLMs, and refines responses.
Experiments conducted on publicly available and self-built complex datasets
demonstrate that CoS outperforms existing methods in enhancing generated
content, especially in terms of specificity. Moreover, as the number of specific
constraints increases, other baselines falter, while CoS still performs well.
Moreover, we show that distilling responses generated by CoS effectively
enhances the ability of smaller models to follow the constrained instructions.
Resources of this paper will be released for further research. | [
"cs.AI",
"cs.LG"
] | false |
2403.14638 | 2024-02-20T10:38:38Z | Personalized Programming Guidance based on Deep Programming Learning
Style Capturing | [
"Yingfan Liu",
"Renyu Zhu",
"Ming Gao"
] | With the rapid development of big data and AI technology, programming is in
high demand and has become an essential skill for students. Meanwhile,
researchers also focus on boosting the online judging system's guidance ability
to reduce students' dropout rates. Previous studies mainly aimed at
enhancing learner engagement on online platforms by providing personalized
recommendations. However, two significant challenges still need to be addressed
in programming: C1) how to recognize complex programming behaviors; C2) how to
capture intrinsic learning patterns that align with the actual learning
process. To fill these gaps, in this paper, we propose a novel model called
Programming Exercise Recommender with Learning Style (PERS), which simulates
learners' intricate programming behaviors. Specifically, since programming is
an iterative and trial-and-error process, we first introduce a positional
encoding and a differentiating module to capture the changes of consecutive
code submissions (which addresses C1). To better profile programming behaviors,
we extend the Felder-Silverman learning style model, a classical pedagogical
theory, to perceive intrinsic programming patterns. Based on this, we align
three latent vectors to record and update programming ability, processing
style, and understanding style, respectively (which addresses C2). We perform
extensive experiments on two real-world datasets to verify the rationality of
modeling programming learning styles and the effectiveness of PERS for
personalized programming guidance. | [
"cs.CY",
"cs.LG"
] | false |
2402.12710 | 2024-02-20T04:13:59Z | Integrating Active Learning in Causal Inference with Interference: A
Novel Approach in Online Experiments | [
"Hongtao Zhu",
"Sizhe Zhang",
"Yang Su",
"Zhenyu Zhao",
"Nan Chen"
] | In the domain of causal inference research, the prevalent potential outcomes
framework, notably the Rubin Causal Model (RCM), often overlooks individual
interference and assumes independent treatment effects. This assumption,
however, is frequently misaligned with the intricate realities of real-world
scenarios, where interference is not merely a possibility but a common
occurrence. Our research endeavors to address this discrepancy by focusing on
the estimation of direct and spillover treatment effects under two assumptions:
(1) network-based interference, where treatments on neighbors within connected
networks affect one's outcomes, and (2) non-random treatment assignments
influenced by confounders. To improve the efficiency of estimating potentially
complex effect functions, we introduce a novel active learning approach:
Active Learning in Causal Inference with Interference (ACI). This approach uses
a Gaussian process to flexibly model the direct and spillover treatment effects
as a function of a continuous measure of neighbors' treatment assignment. The
ACI framework sequentially identifies the experimental settings that demand
further data. It further optimizes the treatment assignments under the network
interference structure using genetic algorithms to achieve efficient learning
outcomes. By applying our method to simulated data and a Tencent game dataset,
we demonstrate its feasibility in achieving accurate effect estimation with
reduced data requirements. This ACI approach marks a significant advancement in
the realm of data efficiency for causal inference, offering a robust and
efficient alternative to traditional methodologies, particularly in scenarios
characterized by complex interference patterns. | [
"stat.ME",
"cs.LG",
"stat.ML"
] | false |
2402.12794 | 2024-02-20T08:08:07Z | Autonomous Reality Modelling for Cultural Heritage Sites employing
cooperative quadrupedal robots and unmanned aerial vehicles | [
"Nikolaos Giakoumidis",
"Christos-Nikolaos Anagnostopoulos"
] | Nowadays, the use of advanced sensors, such as terrestrial 3D laser scanners,
mobile LiDARs and Unmanned Aerial Vehicles (UAV) photogrammetric imaging, has
become the prevalent practice for 3D Reality Modeling and digitization of
large-scale monuments of Cultural Heritage (CH). In practice, this process is
heavily related to the expertise of the surveying team, handling the laborious
planning and time-consuming execution of the 3D mapping process that is
tailored to the specific requirements and constraints of each site. To minimize
human intervention, this paper introduces a novel methodology for autonomous 3D
Reality Modeling for CH monuments by employing autonomous biomimetic
quadrupedal robotic agents and UAVs equipped with the appropriate sensors.
These autonomous robotic agents carry out the 3D RM process in a systematic and
repeatable approach. The outcomes of this automated process may find
applications in digital twin platforms, facilitating secure monitoring and
management of cultural heritage sites and spaces, in both indoor and outdoor
environments. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2402.12828 | 2024-02-20T08:54:07Z | SGD with Clipping is Secretly Estimating the Median Gradient | [
"Fabian Schaipp",
"Guillaume Garrigos",
"Umut Simsekli",
"Robert Gower"
] | There are several applications of stochastic optimization where one can
benefit from a robust estimate of the gradient. Examples include distributed
learning with corrupted nodes, large outliers in the training data, learning
under privacy constraints, and heavy-tailed noise arising from the dynamics of
the algorithm itself. Here we study SGD with
robust gradient estimators based on estimating the median. We first consider
computing the median gradient across samples, and show that the resulting
method can converge even under heavy-tailed, state-dependent noise. We then
derive iterative methods based on the stochastic proximal point method for
computing the geometric median and generalizations thereof. Finally we propose
an algorithm estimating the median gradient across iterations, and find that
several well-known methods, in particular different forms of clipping, are
particular cases of this framework. | [
"stat.ML",
"cs.LG",
"math.OC",
"90C26, 68T07, 62-08"
] | false |
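A minimal sketch of the sample-median idea from the abstract above: the coordinate-wise median of per-sample gradients as a drop-in replacement for the mean in SGD. The quadratic loss and all constants below are illustrative assumptions, not the paper's estimators:

```python
import numpy as np

def median_sgd(grad_fn, w0, data, lr=0.05, batch=32, steps=300, seed=0):
    """SGD that aggregates per-sample gradients by coordinate-wise median.

    grad_fn(w, x) returns the gradient for a single sample x. A toy
    illustration of median-based robust aggregation; the paper also
    studies geometric medians and clipping as related cases.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        idx = rng.choice(len(data), size=batch, replace=False)
        grads = np.stack([grad_fn(w, data[i]) for i in idx])  # (batch, dim)
        w -= lr * np.median(grads, axis=0)  # robust to heavy-tailed samples
    return w

# Hypothetical use: estimate a location parameter under heavy-tailed noise.
samples = 3.0 + np.random.default_rng(1).standard_cauchy(1000)
grad = lambda w, x: 2.0 * (w - x)  # gradient of (w - x)^2
print(median_sgd(grad, w0=[0.0], data=samples))  # converges near 3.0
```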
2402.12854 | 2024-02-20T09:33:22Z | Differentiable Mapper For Topological Optimization Of Data
Representation | [
"Ziyad Oulhaj",
"Mathieu Carrière",
"Bertrand Michel"
] | Unsupervised data representation and visualization using tools from topology
is an active and growing field of Topological Data Analysis (TDA) and data
science. Its most prominent line of work is based on the so-called Mapper
graph, which is a combinatorial graph whose topological structures (connected
components, branches, loops) are in correspondence with those of the data
itself. While highly generic and applicable, its use has been hampered so far
by the manual tuning of its many parameters; among these, a crucial one is the
so-called filter: it is a continuous function whose variations on the data set
are the main ingredient for both building the Mapper representation and
assessing the presence and sizes of its topological structures. However, while
a few parameter tuning methods have already been investigated for the other
Mapper parameters (i.e., resolution, gain, clustering), there is currently no
method for tuning the filter itself. In this work, we build on a recently
proposed optimization framework incorporating topology to provide the first
filter optimization scheme for Mapper graphs. In order to achieve this, we
propose a relaxed and more general version of the Mapper graph, whose
convergence properties are investigated. Finally, we demonstrate the usefulness
of our approach by optimizing Mapper graph representations on several datasets,
and showcasing the superiority of the optimized representation over arbitrary
ones. | [
"cs.LG",
"cs.CG",
"math.AT"
] | false |
2402.12875 | 2024-02-20T10:11:03Z | Chain of Thought Empowers Transformers to Solve Inherently Serial
Problems | [
"Zhiyuan Li",
"Hong Liu",
"Denny Zhou",
"Tengyu Ma"
] | Instructing the model to generate a sequence of intermediate steps, a.k.a., a
chain of thought (CoT), is a highly effective method to improve the accuracy of
large language models (LLMs) on arithmetic and symbolic reasoning tasks.
However, the mechanism behind CoT remains unclear. This work provides a
theoretical understanding of the power of CoT for decoder-only transformers
through the lens of expressiveness. Conceptually, CoT empowers the model with
the ability to perform inherently serial computation, which is otherwise
lacking in transformers, especially when depth is low. Given input length $n$,
previous works have shown that constant-depth transformers with finite
precision $\mathsf{poly}(n)$ embedding size can only solve problems in
$\mathsf{TC}^0$ without CoT. We first show an even tighter expressiveness upper
bound for constant-depth transformers with constant-bit precision, which can
only solve problems in $\mathsf{AC}^0$, a proper subset of $ \mathsf{TC}^0$.
However, with $T$ steps of CoT, constant-depth transformers using constant-bit
precision and $O(\log n)$ embedding size can solve any problem solvable by
boolean circuits of size $T$. Empirically, enabling CoT dramatically improves
the accuracy for tasks that are hard for parallel computation, including the
composition of permutation groups, iterated squaring, and circuit value
problems, especially for low-depth transformers. | [
"cs.LG",
"cs.CC",
"stat.ML"
] | false |
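As a worked illustration of the "inherently serial" tasks mentioned above (my own toy example, not the paper's experimental setup), iterated squaring computes x, x^2, x^4, ... mod p, where each value requires the previous one, much as each chain-of-thought token conditions on the tokens generated before it:

```python
def iterated_squaring(x: int, p: int, steps: int) -> list[int]:
    """Return the trace x, x^2, x^4, ..., x^(2^steps) modulo p.

    The trace plays the role of a chain of thought: without computing
    (or emitting) step t, step t+1 cannot be obtained, which is what
    resists constant-depth parallel computation.
    """
    trace = [x % p]
    for _ in range(steps):
        trace.append(trace[-1] * trace[-1] % p)
    return trace

print(iterated_squaring(x=7, p=1_000_003, steps=5))  # 6-element trace
```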
2402.12954 | 2024-02-20T12:17:01Z | Conditional Logical Message Passing Transformer for Complex Query
Answering | [
"Chongzhi Zhang",
"Zhiping Peng",
"Junhao Zheng",
"Qianli Ma"
] | Complex Query Answering (CQA) over Knowledge Graphs (KGs) is a challenging
task. Given that KGs are usually incomplete, neural models are proposed to
solve CQA by performing multi-hop logical reasoning. However, most of them
cannot perform well on both one-hop and multi-hop queries simultaneously.
Recent work proposes a logical message passing mechanism based on the
pre-trained neural link predictors. While effective on both one-hop and
multi-hop queries, it ignores the difference between the constant and variable
nodes in a query graph. In addition, during the node embedding update stage,
this mechanism cannot dynamically measure the importance of different messages,
and whether it can capture the implicit logical dependencies related to a node
and received messages remains unclear. In this paper, we propose Conditional
Logical Message Passing Transformer (CLMPT), which considers the difference
between constants and variables in the case of using pre-trained neural link
predictors and performs message passing conditionally on the node type. We
empirically verified that this approach can reduce computational costs without
affecting performance. Furthermore, CLMPT uses the transformer to aggregate
received messages and update the corresponding node embedding. Through the
self-attention mechanism, CLMPT can assign adaptive weights to elements in an
input set consisting of received messages and the corresponding node and
explicitly model logical dependencies between various elements. Experimental
results show that CLMPT is a new state-of-the-art neural CQA model. | [
"cs.LG",
"cs.AI",
"cs.LO"
] | false |
2402.12993 | 2024-02-20T13:21:46Z | An Autonomous Large Language Model Agent for Chemical Literature Data
Mining | [
"Kexin Chen",
"Hanqun Cao",
"Junyou Li",
"Yuyang Du",
"Menghao Guo",
"Xin Zeng",
"Lanqing Li",
"Jiezhong Qiu",
"Pheng Ann Heng",
"Guangyong Chen"
] | Chemical synthesis, which is crucial for advancing material synthesis and
drug discovery, impacts various sectors including environmental science and
healthcare. The rise of technology in chemistry has generated extensive
chemical data, challenging researchers to discern patterns and refine synthesis
processes. Artificial intelligence (AI) helps by analyzing data to optimize
synthesis and increase yields. However, AI faces challenges in processing
literature data due to the unstructured format and diverse writing style of
chemical literature. To overcome these difficulties, we introduce an end-to-end
AI agent framework capable of high-fidelity extraction from extensive chemical
literature. This AI agent employs large language models (LLMs) for prompt
generation and iterative optimization. It functions as a chemistry assistant,
automating data collection and analysis, thereby saving manpower and enhancing
performance. Our framework's efficacy is evaluated using accuracy, recall, and
F1 score of reaction condition data, and we compared our method with human
experts in terms of content correctness and time efficiency. The proposed
approach marks a significant advancement in automating chemical literature
extraction and demonstrates the potential for AI to revolutionize data
management and utilization in chemistry. | [
"cs.IR",
"cs.AI",
"cs.LG",
"q-bio.QM"
] | false |
2402.13019 | 2024-02-20T14:01:26Z | Improving Neural-based Classification with Logical Background Knowledge | [
"Arthur Ledaguenel",
"Céline Hudelot",
"Mostepha Khouadjia"
] | Neurosymbolic AI is a growing field of research aiming to combine neural
networks' learning capabilities with the reasoning abilities of symbolic
systems. This hybridization can take many shapes. In this paper, we propose a
new formalism for supervised multi-label classification with propositional
background knowledge. We introduce a new neurosymbolic technique called
semantic conditioning at inference, which only constrains the system during
inference while leaving the training unaffected. We discuss its theoretical and
practical advantages over two other popular neurosymbolic techniques: semantic
conditioning and semantic regularization. We develop a new multi-scale
methodology to evaluate how the benefits of a neurosymbolic technique evolve
with the scale of the network. We then evaluate experimentally and compare the
benefits of all three techniques across model scales on several datasets. Our
results demonstrate that semantic conditioning at inference can be used to
build more accurate neural-based systems with fewer resources while
guaranteeing the semantic consistency of outputs. | [
"cs.AI",
"cs.LG",
"cs.SC"
] | false |
2402.13033 | 2024-02-20T14:18:43Z | Enhancing Real-World Complex Network Representations with Hyperedge
Augmentation | [
"Xiangyu Zhao",
"Zehui Li",
"Mingzhu Shen",
"Guy-Bart Stan",
"Pietro Liò",
"Yiren Zhao"
] | Graph augmentation methods play a crucial role in improving the performance
and enhancing generalisation capabilities in Graph Neural Networks (GNNs).
Existing graph augmentation methods mainly perturb the graph structures and are
usually limited to pairwise node relations. These methods cannot fully address
the complexities of real-world large-scale networks that often involve
higher-order node relations beyond only being pairwise. Meanwhile, real-world
graph datasets are predominantly modelled as simple graphs, due to the scarcity
of data that can be used to form higher-order edges. Therefore, integrating
reconstructed higher-order edges into graph augmentation strategies
opens a promising research path to address the aforementioned issues. In
this paper, we present Hyperedge Augmentation (HyperAug), a novel graph
augmentation method that constructs virtual hyperedges directly from the raw
data, and produces auxiliary node features by extracting from the virtual
hyperedge information, which are used to enhance GNN performance on
downstream tasks. We design three diverse virtual hyperedge construction
strategies to accompany the augmentation scheme: (1) via graph statistics, (2)
from multiple data perspectives, and (3) utilising multi-modality. Furthermore,
to facilitate HyperAug evaluation, we provide 23 novel real-world graph
datasets across various domains including social media, biology, and
e-commerce. Our empirical study shows that HyperAug consistently and
significantly outperforms GNN baselines and other graph augmentation methods,
across a variety of application contexts, which clearly indicates that it can
effectively incorporate higher-order node relations into graph augmentation
methods for real-world complex networks. | [
"cs.LG",
"cs.IR",
"cs.SI"
] | false |
2402.13076 | 2024-02-20T15:22:25Z | Not All Weights Are Created Equal: Enhancing Energy Efficiency in
On-Device Streaming Speech Recognition | [
"Yang Li",
"Yuan Shangguan",
"Yuhao Wang",
"Liangzhen Lai",
"Ernie Chang",
"Changsheng Zhao",
"Yangyang Shi",
"Vikas Chandra"
] | Power consumption plays an important role in on-device streaming speech
recognition, as it has a direct impact on the user experience. This study
delves into how weight parameters in speech recognition models influence the
overall power consumption of these models. We discovered that the impact of
weight parameters on power consumption varies, influenced by factors including
how often they are invoked and their placement in memory. Armed with this
insight, we developed design guidelines aimed at optimizing on-device speech
recognition models. These guidelines focus on minimizing power use without
substantially affecting accuracy. Our method, which employs targeted
compression based on the varying sensitivities of weight parameters,
demonstrates superior performance compared to state-of-the-art compression
methods. It achieves a reduction in energy usage of up to 47% while maintaining
similar model accuracy and improving the real-time factor. | [
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
2402.13077 | 2024-02-20T15:23:24Z | Mechanistic Neural Networks for Scientific Machine Learning | [
"Adeel Pervez",
"Francesco Locatello",
"Efstratios Gavves"
] | This paper presents Mechanistic Neural Networks, a neural network design for
machine learning applications in the sciences. It incorporates a new
Mechanistic Block in standard architectures to explicitly learn governing
differential equations as representations, revealing the underlying dynamics of
data and enhancing interpretability and efficiency in data modeling. Central to
our approach is a novel Relaxed Linear Programming Solver (NeuRLP) inspired by
a technique that reduces solving linear ODEs to solving linear programs. This
integrates well with neural networks and surpasses the limitations of
traditional ODE solvers, enabling scalable GPU parallel processing. Overall,
Mechanistic Neural Networks demonstrate their versatility for scientific
machine learning applications, adeptly managing tasks from equation discovery
to dynamic systems modeling. We demonstrate their comprehensive capabilities in
analyzing and interpreting complex scientific data across various applications,
showing significant performance against specialized state-of-the-art methods. | [
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2402.13101 | 2024-02-20T15:54:24Z | A Microstructure-based Graph Neural Network for Accelerating Multiscale
Simulations | [
"J. Storm",
"I. B. C. M. Rocha",
"F. P. van der Meer"
] | Simulating the mechanical response of advanced materials can be done more
accurately using concurrent multiscale models than with single-scale
simulations. However, the computational costs stand in the way of the practical
application of this approach. The costs originate from microscale Finite
Element (FE) models that must be solved at every macroscopic integration point.
A plethora of surrogate modeling strategies attempt to alleviate this cost by
learning to predict macroscopic stresses from macroscopic strains, completely
replacing the microscale models. In this work, we introduce an alternative
surrogate modeling strategy that allows for keeping the multiscale nature of
the problem, allowing it to be used interchangeably with an FE solver for any
time step. Our surrogate provides all microscopic quantities, which are then
homogenized to obtain macroscopic quantities of interest. We achieve this for
an elasto-plastic material by predicting full-field microscopic strains using a
graph neural network (GNN) while retaining the microscopic constitutive
material model to obtain the stresses. This hybrid data-physics graph-based
approach avoids the high dimensionality originating from predicting full-field
responses while allowing non-locality to arise. By training the GNN on a
variety of meshes, it learns to generalize to unseen meshes, allowing a single
model to be used for a range of microstructures. The embedded microscopic
constitutive model in the GNN implicitly tracks history-dependent variables and
leads to improved accuracy. We demonstrate for several challenging scenarios
that the surrogate can predict complex macroscopic stress-strain paths. As the
computation time of our method scales favorably with the number of elements in
the microstructure compared to the FE method, our method can significantly
accelerate FE2 simulations. | [
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2402.13103 | 2024-02-20T15:58:45Z | Multivariate Functional Linear Discriminant Analysis for the
Classification of Short Time Series with Missing Data | [
"Rahul Bordoloi",
"Clémence Réda",
"Orell Trautmann",
"Saptarshi Bej",
"Olaf Wolkenhauer"
] | Functional linear discriminant analysis (FLDA) is a powerful tool that
extends LDA-mediated multiclass classification and dimension reduction to
univariate time-series functions. However, in the age of large multivariate and
incomplete data, there is a need for a computationally tractable approach that
accounts for statistical dependencies between features while also handling
missing values. We here develop a
multivariate version of FLDA (MUDRA) to tackle this issue and describe an
efficient expectation/conditional-maximization (ECM) algorithm to infer its
parameters. We assess its predictive power on the "Articulary Word Recognition"
data set and show its improvement over the state-of-the-art, especially in the
case of missing data. MUDRA allows interpretable classification of data sets
with large proportions of missing data, which will be particularly useful for
medical or psychological data sets. | [
"cs.LG",
"math.ST",
"stat.TH",
"62R10 (Primary), 62R07 (Secondary)"
] | false |
2402.13106 | 2024-02-20T16:01:39Z | On Generalization Bounds for Deep Compound Gaussian Neural Networks | [
"Carter Lyons",
"Raghu G. Raj",
"Margaret Cheney"
] | Algorithm unfolding or unrolling is the technique of constructing a deep
neural network (DNN) from an iterative algorithm. Unrolled DNNs often provide
better interpretability and superior empirical performance over standard DNNs
in signal estimation tasks. An important theoretical question, which has only
recently received attention, is the development of generalization error bounds
for unrolled DNNs. These bounds deliver theoretical and practical insights into
the performance of a DNN on empirical datasets that are distinct from, but
sampled from, the probability density generating the DNN training data. In this
paper, we develop novel generalization error bounds for a class of unrolled
DNNs that are informed by a compound Gaussian prior. These compound Gaussian
networks have been shown to outperform comparative standard and unfolded deep
neural networks in compressive sensing and tomographic imaging problems. The
generalization error bound is formulated by bounding the Rademacher complexity
of the class of compound Gaussian network estimates with Dudley's integral.
Under realistic conditions, we show that, at worst, the generalization error
scales as $\mathcal{O}(n\sqrt{\ln(n)})$ in the signal dimension $n$ and as
$\mathcal{O}((\text{network size})^{3/2})$ in the network size. | [
"stat.ML",
"cs.LG",
"eess.SP"
] | false |
2402.13182 | 2024-02-20T17:49:10Z | Order-Optimal Regret in Distributed Kernel Bandits using Uniform
Sampling with Shared Randomness | [
"Nikola Pavlovic",
"Sudeep Salgia",
"Qing Zhao"
] | We consider distributed kernel bandits where $N$ agents aim to
collaboratively maximize an unknown reward function that lies in a reproducing
kernel Hilbert space. Each agent sequentially queries the function to obtain
noisy observations at the query points. Agents can share information through a
central server, with the objective of minimizing regret accumulated
over time $T$ and aggregated across agents. We develop the first algorithm that
achieves the optimal regret order (as defined by centralized learning) with a
communication cost that is sublinear in both $N$ and $T$. The key features of
the proposed algorithm are the uniform exploration at the local agents and
shared randomness with the central server. Working together with the sparse
approximation of the GP model, these two key components make it possible to
preserve the learning rate of the centralized setting at a diminishing rate of
communication. | [
"cs.LG",
"cs.DC",
"stat.ML"
] | false |
2402.13187 | 2024-02-20T17:53:24Z | Testing Calibration in Subquadratic Time | [
"Lunjia Hu",
"Kevin Tian",
"Chutong Yang"
] | In the recent literature on machine learning and decision making, calibration
has emerged as a desirable and widely-studied statistical property of the
outputs of binary prediction models. However, the algorithmic aspects of
measuring model calibration have remained relatively less well-explored.
Motivated by [BGHN23], which proposed a rigorous framework for measuring
distances to calibration, we initiate the algorithmic study of calibration
through the lens of property testing. We define the problem of calibration
testing from samples where given $n$ draws from a distribution $\mathcal{D}$ on
(predictions, binary outcomes), our goal is to distinguish between the case
where $\mathcal{D}$ is perfectly calibrated, and the case where $\mathcal{D}$
is $\varepsilon$-far from calibration.
We design an algorithm based on approximate linear programming, which solves
calibration testing information-theoretically optimally (up to constant
factors) in time $O(n^{1.5} \log(n))$. This improves upon state-of-the-art
black-box linear program solvers requiring $\Omega(n^\omega)$ time, where
$\omega > 2$ is the exponent of matrix multiplication. We also develop
algorithms for tolerant variants of our testing problem, and give sample
complexity lower bounds for alternative calibration distances to the one
considered in this work. Finally, we present preliminary experiments showing
that the testing problem we define faithfully captures standard notions of
calibration, and that our algorithms scale to accommodate moderate sample
sizes. | [
"cs.LG",
"cs.DS",
"stat.CO",
"stat.ML"
] | false |
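For intuition about what a calibration tester measures (a hedged sketch; the paper's algorithm is an approximate linear program for distance to calibration, not this binning heuristic), here is a plug-in binned calibration error computed from n draws of (prediction, outcome) pairs:

```python
import numpy as np

def binned_ece(preds, outcomes, n_bins=10):
    """Plug-in binned expected calibration error from samples.

    preds: predicted probabilities in [0, 1]; outcomes: binary labels.
    A simple surrogate calibration distance, for intuition only.
    """
    preds, outcomes = np.asarray(preds, float), np.asarray(outcomes, float)
    bins = np.clip((preds * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # |mean prediction - empirical frequency|, weighted by bin mass
            ece += mask.mean() * abs(preds[mask].mean() - outcomes[mask].mean())
    return ece

rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y_good = rng.uniform(size=10_000) < p                       # calibrated outcomes
y_bad = rng.uniform(size=10_000) < np.clip(p + 0.2, 0, 1)   # miscalibrated
print(binned_ece(p, y_good), binned_ece(p, y_bad))          # small vs. roughly 0.2
```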
2402.13201 | 2024-02-20T18:10:39Z | Tiny Reinforcement Learning for Quadruped Locomotion using Decision
Transformers | [
"Orhan Eren Akgün",
"Néstor Cuevas",
"Matheus Farias",
"Daniel Garces"
] | Resource-constrained robotic platforms are particularly useful for tasks that
require low-cost hardware alternatives due to the risk of losing the robot,
like in search-and-rescue applications, or the need for a large number of
devices, like in swarm robotics. For this reason, it is crucial to find
mechanisms for adapting reinforcement learning techniques to the constraints
imposed by lower computational power and smaller memory capacities of these
ultra low-cost robotic platforms. We try to address this need by proposing a
method for making imitation learning deployable onto resource-constrained
robotic platforms. Here we cast the imitation learning problem as a conditional
sequence modeling task and we train a decision transformer using expert
demonstrations augmented with a custom reward. Then, we compress the resulting
generative model using software optimization schemes, including quantization
and pruning. We test our method in simulation using Isaac Gym, a realistic
physics simulation environment designed for reinforcement learning. We
empirically demonstrate that our method achieves natural-looking gaits for
Bittle, a resource-constrained quadruped robot. We also run multiple
simulations to show the effects of pruning and quantization on the performance
of the model. Our results show that quantization (down to 4 bits) and pruning
reduce model size by around 30\% while maintaining a competitive reward, making
the model deployable in a resource-constrained system. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2402.13425 | 2024-02-20T23:29:41Z | Investigating the Histogram Loss in Regression | [
"Ehsan Imani",
"Kai Luedemann",
"Sam Scholnick-Hughes",
"Esraa Elelimy",
"Martha White"
] | It is becoming increasingly common in regression to train neural networks
that model the entire distribution even if only the mean is required for
prediction. This additional modeling often comes with performance gain and the
reasons behind the improvement are not fully known. This paper investigates a
recent approach to regression, the Histogram Loss, which involves learning the
conditional distribution of the target variable by minimizing the cross-entropy
between a target distribution and a flexible histogram prediction. We design
theoretical and empirical analyses to determine why and when this performance
gain appears, and how different components of the loss contribute to it. Our
results suggest that the benefits of learning distributions in this setup come
from improvements in optimization rather than learning a better representation.
We then demonstrate the viability of the Histogram Loss in common deep learning
applications without a need for costly hyperparameter tuning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
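A sketch of the loss construction described above, under illustrative choices of bin edges and target smoothing: the Histogram Loss takes a cross-entropy between a smoothed target distribution and a histogram prediction, and the Gaussian target below is one common instantiation (my assumptions, not necessarily the paper's exact setup):

```python
import numpy as np
from scipy.stats import norm

def gaussian_target_probs(y, bin_edges, sigma=0.05):
    """Project a Gaussian centered at the scalar target y onto histogram bins.

    Returns per-bin target probabilities from Gaussian CDF differences,
    renormalized for mass falling outside the support. Bin edges and
    sigma are illustrative hyperparameters.
    """
    cdf = norm.cdf(bin_edges, loc=y, scale=sigma)
    p = np.diff(cdf)
    return p / p.sum()

def histogram_loss(logits, target_probs):
    """Cross-entropy between target bin probabilities and a softmax histogram."""
    log_q = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -np.sum(target_probs * log_q)

edges = np.linspace(0.0, 1.0, 51)                    # 50 bins on [0, 1]
target = gaussian_target_probs(y=0.37, bin_edges=edges)
print(histogram_loss(np.zeros(50), target))          # uniform prediction: ln(50)
```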
2402.13429 | 2024-02-20T23:45:37Z | Everything You Always Wanted to Know About Storage Compressibility of
Pre-Trained ML Models but Were Afraid to Ask | [
"Zhaoyuan Su",
"Ammar Ahmed",
"Zirui Wang",
"Ali Anwar",
"Yue Cheng"
] | As the number of pre-trained machine learning (ML) models is growing
exponentially, data reduction tools are not catching up. Existing data
reduction techniques are not specifically designed for pre-trained model (PTM)
dataset files. This is largely due to a lack of understanding of the patterns
and characteristics of these datasets, especially those relevant to data
reduction and compressibility.
This paper presents the first, exhaustive analysis to date of PTM datasets on
storage compressibility. Our analysis spans different types of data reduction
and compression techniques, from hash-based data deduplication, data similarity
detection, to dictionary-coding compression. Our analysis explores these
techniques at three data granularity levels, from model layers, model chunks,
to model parameters. We draw new observations that indicate that modern data
reduction tools are not effective when handling PTM datasets. There is a
pressing need for new compression methods that take into account PTMs' data
characteristics for effective storage reduction.
Motivated by our findings, we design ELF, a simple yet effective,
error-bounded, lossy floating-point compression method. ELF transforms
floating-point parameters in such a way that the common exponent field of the
transformed parameters can be completely eliminated to save storage space. We
develop Elves, a compression framework that integrates ELF along with several
other data reduction methods. Elves uses the most effective method to compress
PTMs that exhibit different patterns. Evaluation shows that Elves achieves an
overall compression ratio of $1.52\times$, which is $1.31\times$, $1.32\times$
and $1.29\times$ higher than a general-purpose compressor (zstd), an
error-bounded lossy compressor (SZ3), and the uniform model quantization,
respectively, with negligible model accuracy loss. | [
"cs.DB",
"cs.LG",
"cs.OS",
"H.2.7"
] | false |
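For intuition about the exponent field that ELF eliminates (a toy bit-level illustration of IEEE-754 float32 redundancy; the actual ELF transformation in the paper is different and error-bounded), note how few distinct exponent values occur among parameters drawn from a narrow range:

```python
import numpy as np

# IEEE-754 float32 layout: 1 sign bit | 8 exponent bits | 23 mantissa bits.
params = np.random.default_rng(0).normal(0.0, 0.02, 10_000).astype(np.float32)
bits = params.view(np.uint32)
exponents = (bits >> 23) & 0xFF  # extract the 8-bit exponent field

# Trained-model parameters typically occupy a narrow value range, so only a
# handful of distinct exponents appear -- redundancy a transform can remove.
values, counts = np.unique(exponents, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))
```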
2402.13430 | 2024-02-20T23:49:25Z | LinkSAGE: Optimizing Job Matching Using Graph Neural Networks | [
"Ping Liu",
"Haichao Wei",
"Xiaochen Hou",
"Jianqiang Shen",
"Shihai He",
"Kay Qianqi Shen",
"Zhujun Chen",
"Fedor Borisyuk",
"Daniel Hewlett",
"Liang Wu",
"Srikant Veeraraghavan",
"Alex Tsun",
"Chengming Jiang",
"Wenjing Zhang"
] | We present LinkSAGE, an innovative framework that integrates Graph Neural
Networks (GNNs) into large-scale personalized job matching systems, designed to
address the complex dynamics of LinkedIn's extensive professional network. Our
approach capitalizes on a novel job marketplace graph, the largest and most
intricate of its kind in industry, with billions of nodes and edges. This graph
is not merely extensive but also richly detailed, encompassing member and job
nodes along with key attributes, thus creating an expansive and interwoven
network. A key innovation in LinkSAGE is its training and serving methodology,
which effectively combines inductive graph learning on a heterogeneous,
evolving graph with an encoder-decoder GNN model. This methodology decouples
the training of the GNN model from that of existing Deep Neural Nets (DNN)
models, eliminating the need for frequent GNN retraining while maintaining
up-to-date graph signals in near real time, allowing for the effective
integration of GNN insights through transfer learning. The subsequent nearline
inference system serves the GNN encoder within a real-world setting,
significantly reducing online latency and obviating the need for costly
real-time GNN infrastructure. Validated across multiple online A/B tests in
diverse product scenarios, LinkSAGE demonstrates marked improvements in member
engagement, relevance matching, and member retention, confirming its
generalizability and practical impact. | [
"cs.LG",
"cs.AI",
"cs.SI"
] | false |
2402.14029 | 2024-02-20T03:14:45Z | Partial Search in a Frozen Network is Enough to Find a Strong Lottery
Ticket | [
"Hikari Otsuka",
"Daiki Chijiwa",
"Ángel López García-Arias",
"Yasuyuki Okoshi",
"Kazushi Kawamura",
"Thiem Van Chu",
"Daichi Fujiki",
"Susumu Takeuchi",
"Masato Motomura"
] | Randomly initialized dense networks contain subnetworks that achieve high
accuracy without weight learning -- strong lottery tickets (SLTs). Recently,
Gadhikar et al. (2023) demonstrated theoretically and experimentally that SLTs
can also be found within a randomly pruned source network, thus reducing the
SLT search space. However, this limits the search to SLTs that are even sparser
than the source, leading to worse accuracy due to unintentionally high
sparsity. This paper proposes a method that reduces the SLT search space by an
arbitrary ratio that is independent of the desired SLT sparsity. A random
subset of the initial weights is excluded from the search space by freezing it
-- i.e., by either permanently pruning them or locking them as a fixed part of
the SLT. Indeed, the SLT existence in such a reduced search space is
theoretically guaranteed by our subset-sum approximation with randomly frozen
variables. In addition to reducing search space, the random freezing pattern
can also be exploited to reduce model size in inference. Furthermore,
experimental results show that the proposed method finds SLTs with better
accuracy and model size trade-off than the SLTs obtained from dense or randomly
pruned source networks. In particular, the SLT found in a frozen graph neural
network achieves higher accuracy than its weight trained counterpart while
reducing model size by $40.3\times$. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2402.14031 | 2024-02-20T11:34:19Z | Autoencoder with Ordered Variance for Nonlinear Model Identification | [
"Midhun T. Augustine",
"Parag Patil",
"Mani Bhushan",
"Sharad Bhartiya"
] | This paper presents a novel autoencoder with ordered variance (AEO) in which
the loss function is modified with a variance regularization term to enforce
order in the latent space. Further, the autoencoder is modified using ResNets,
which results in a ResNet AEO (RAEO). The paper also illustrates the
effectiveness of AEO and RAEO in extracting nonlinear relationships among input
variables in an unsupervised setting. | [
"eess.SY",
"cs.LG",
"cs.SY"
] | false |
2402.14859 | 2024-02-20T23:08:21Z | The Wolf Within: Covert Injection of Malice into MLLM Societies via an
MLLM Operative | [
"Zhen Tan",
"Chengshuai Zhao",
"Raha Moraffah",
"Yifan Li",
"Yu Kong",
"Tianlong Chen",
"Huan Liu"
] | Due to their unprecedented ability to process and respond to various types of
data, Multimodal Large Language Models (MLLMs) are constantly defining the new
boundary of Artificial General Intelligence (AGI). As these advanced generative
models increasingly form collaborative networks for complex tasks, the
integrity and security of these systems are crucial. Our paper, ``The Wolf
Within'', explores a novel vulnerability in MLLM societies: the indirect
propagation of malicious content. Unlike direct harmful output generation for
MLLMs, our research demonstrates how a single MLLM agent can be subtly
influenced to generate prompts that, in turn, induce other MLLM agents in the
society to output malicious content. This subtle, yet potent method of indirect
influence marks a significant escalation in the security risks associated with
MLLMs. Our findings reveal that, with minimal or even no access to MLLMs'
parameters, an MLLM agent, when manipulated to produce specific prompts or
instructions, can effectively ``infect'' other agents within a society of
MLLMs. This infection leads to the generation and circulation of harmful
outputs, such as dangerous instructions or misinformation, across the society.
We also show the transferability of these indirectly generated prompts,
highlighting their possibility in propagating malice through inter-agent
communication. This research provides a critical insight into a new dimension
of threat posed by MLLMs, where a single agent can act as a catalyst for
widespread malevolent influence. Our work underscores the urgent need for
developing robust mechanisms to detect and mitigate such covert manipulations
within MLLM societies, ensuring their safe and ethical utilization in societal
applications. Our implementation is released at
\url{https://github.com/ChengshuaiZhao0/The-Wolf-Within.git}. | [
"cs.CR",
"cs.AI",
"cs.CY",
"cs.LG"
] | false |
2403.14639 | 2024-02-20T18:34:24Z | On Defining Smart Cities using Transformer Neural Networks | [
"Andrei Khurshudov"
] | Cities worldwide are rapidly adopting smart technologies, transforming urban
life. Despite this trend, a universally accepted definition of 'smart city'
remains elusive. Past efforts to define it have not yielded a consensus, as
evidenced by the numerous definitions in use. In this paper, we endeavored to
create a new 'compromise' definition that should resonate with most experts
previously involved in defining this concept and aimed to validate one of the
existing definitions. We reviewed 60 definitions of smart cities from industry,
academia, and various relevant organizations, employing transformer
architecture-based generative AI and semantic text analysis to reach this
compromise. We proposed a semantic similarity measure as an evaluation
technique, which could generally be used to compare different smart city
definitions, assessing their uniqueness or resemblance. Our methodology
employed generative AI to analyze various existing definitions of smart cities,
generating a list of potential new composite definitions. Each of these new
definitions was then tested against the pre-existing individual definitions we
have gathered, using cosine similarity as our metric. This process identified
smart city definitions with the highest average cosine similarity, semantically
positioning them as the closest on average to all the 60 individual definitions
selected. | [
"cs.CY",
"cs.AI",
"cs.LG"
] | false |
2402.12727 | 2024-02-20T05:28:13Z | Diffusion Posterior Sampling is Computationally Intractable | [
"Shivam Gupta",
"Ajil Jalal",
"Aditya Parulekar",
"Eric Price",
"Zhiyang Xun"
] | Diffusion models are a remarkably effective way of learning and sampling from
a distribution $p(x)$. In posterior sampling, one is also given a measurement
model $p(y \mid x)$ and a measurement $y$, and would like to sample from $p(x
\mid y)$. Posterior sampling is useful for tasks such as inpainting,
super-resolution, and MRI reconstruction, so a number of recent works have
given algorithms to heuristically approximate it; but none are known to
converge to the correct distribution in polynomial time.
In this paper we show that posterior sampling is \emph{computationally
intractable}: under the most basic assumption in cryptography -- that one-way
functions exist -- there are instances for which \emph{every} algorithm takes
superpolynomial time, even though \emph{unconditional} sampling is provably
fast. We also show that the exponential-time rejection sampling algorithm is
essentially optimal under the stronger plausible assumption that there are
one-way functions that take exponential time to invert. | [
"cs.LG",
"cs.AI",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
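A hedged sketch of the rejection-sampling baseline the abstract refers to: draw unconditional samples x ~ p(x) and accept each with probability proportional to the likelihood p(y | x). It is correct but can take exponentially many tries, which is the abstract's point; the Gaussian toy model below is my own choice:

```python
import numpy as np

def rejection_posterior_sample(sample_prior, log_lik, log_lik_max, rng,
                               max_tries=10**6):
    """Sample from p(x | y) given only an unconditional sampler for p(x).

    Accepts x ~ p(x) with probability exp(log_lik(x) - log_lik_max).
    Exact, but the expected number of tries blows up when the
    measurement is unlikely under the prior.
    """
    for _ in range(max_tries):
        x = sample_prior(rng)
        if np.log(rng.uniform()) < log_lik(x) - log_lik_max:
            return x
    raise RuntimeError("no acceptance; likelihood too concentrated")

# Toy model: x ~ N(0, 1), y = x + N(0, 0.5^2), observed y = 2.5.
rng = np.random.default_rng(0)
y, noise = 2.5, 0.5
draw = rejection_posterior_sample(
    sample_prior=lambda r: r.normal(),
    log_lik=lambda x: -0.5 * ((y - x) / noise) ** 2,  # up to a constant
    log_lik_max=0.0,
    rng=rng,
)
print(draw)  # one draw from the posterior, which is N(2.0, 0.2)
```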
2402.13079 | 2024-02-20T15:24:21Z | Mode Estimation with Partial Feedback | [
"Charles Arnal",
"Vivien Cabannes",
"Vianney Perchet"
] | The combination of lightly supervised pre-training and online fine-tuning has
played a key role in recent AI developments. These new learning pipelines call
for new theoretical frameworks. In this paper, we formalize core aspects of
weakly supervised and active learning with a simple problem: the estimation of
the mode of a distribution using partial feedback. We show how entropy coding
allows for optimal information acquisition from partial feedback, develop
coarse sufficient statistics for mode identification, and adapt bandit
algorithms to our new setting. Finally, we combine those contributions into a
statistically and computationally efficient solution to our problem. | [
"stat.ML",
"cs.IR",
"cs.IT",
"cs.LG",
"math.IT",
"62L05, 62B86, 62D10, 62B10"
] | false |
2402.13380 | 2024-02-20T21:13:38Z | Toward TransfORmers: Revolutionizing the Solution of Mixed Integer
Programs with Transformers | [
"Joshua F. Cooper",
"Seung Jin Choi",
"I. Esra Buyuktahtakin"
] | In this study, we introduce an innovative deep learning framework that
employs a transformer model to address the challenges of mixed-integer
programs, specifically focusing on the Capacitated Lot Sizing Problem (CLSP).
Our approach, to our knowledge, is the first to utilize transformers to predict
the binary variables of a mixed-integer programming (MIP) problem.
Specifically, our approach harnesses the encoder-decoder transformer's ability
to process sequential data, making it well-suited for predicting binary
variables indicating production setup decisions in each period of the CLSP.
This problem is inherently dynamic, and we need to handle sequential decision
making under constraints. We present an efficient algorithm in which CLSP
solutions are learned through a transformer neural network. The proposed
post-processed transformer algorithm surpasses the state-of-the-art solver
CPLEX and Long Short-Term Memory (LSTM) networks in solution time, optimality gap, and
percent infeasibility over 240K benchmark CLSP instances tested. After the ML
model is trained, conducting inference on the model, including post-processing,
reduces the MIP into a linear program (LP). This transforms the ML-based
algorithm, combined with an LP solver, into a polynomial-time approximation
algorithm to solve a well-known NP-Hard problem, with almost perfect solution
quality. | [
"cs.AI",
"cs.LG",
"math.CO",
"math.OC",
"stat.ML"
] | false |
2402.13412 | 2024-02-20T22:45:00Z | Scaling physics-informed hard constraints with mixture-of-experts | [
"Nithin Chalapathi",
"Yiheng Du",
"Aditi Krishnapriyan"
] | Imposing known physical constraints, such as conservation laws, during neural
network training introduces an inductive bias that can improve accuracy,
reliability, convergence, and data efficiency for modeling physical dynamics.
While such constraints can be softly imposed via loss function penalties,
recent advancements in differentiable physics and optimization improve
performance by incorporating PDE-constrained optimization as individual layers
in neural networks. This enables a stricter adherence to physical constraints.
However, imposing hard constraints significantly increases computational and
memory costs, especially for complex dynamical systems. This is because it
requires solving an optimization problem over a large number of points in a
mesh, representing spatial and temporal discretizations, which greatly
increases the complexity of the constraint. To address this challenge, we
develop a scalable approach to enforce hard physical constraints using
Mixture-of-Experts (MoE), which can be used with any neural network
architecture. Our approach imposes the constraint over smaller decomposed
domains, each of which is solved by an "expert" through differentiable
optimization. During training, each expert independently performs a localized
backpropagation step by leveraging the implicit function theorem; the
independence of each expert allows for parallelization across multiple GPUs.
Compared to standard differentiable optimization, our scalable approach
achieves greater accuracy in the neural PDE solver setting for predicting the
dynamics of challenging non-linear systems. We also improve training stability
and require significantly less computation time during both training and
inference stages. | [
"cs.LG",
"cs.AI",
"cs.NA",
"math.NA",
"math.OC"
] | false |
2402.13219 | 2024-02-20T18:31:27Z | Analyzing Operator States and the Impact of AI-Enhanced Decision Support
in Control Rooms: A Human-in-the-Loop Specialized Reinforcement Learning
Framework for Intervention Strategies | [
"Ammar N. Abbas",
"Chidera W. Amazu",
"Joseph Mietkiewicz",
"Houda Briwa",
"Andres Alonzo Perez",
"Gabriele Baldissone",
"Micaela Demichela",
"Georgios G. Chasparis",
"John D. Kelleher",
"Maria Chiara Leva"
] | In complex industrial and chemical process control rooms, effective
decision-making is crucial for safety and efficiency. The experiments in this
paper evaluate the impact and applications of an AI-based decision support
system integrated into an improved human-machine interface, using dynamic
influence diagrams, a hidden Markov model, and deep reinforcement learning. The
enhanced support system aims to reduce operator workload, improve situational
awareness, and provide different intervention strategies to the operator
adapted to the current state of both the system and human performance. Such a
system can be particularly useful in cases of information overload when many
alarms and inputs are presented all within the same time window, or for junior
operators during training. A comprehensive cross-data analysis was conducted,
involving 47 participants and a diverse range of data sources such as
smartwatch metrics, eye-tracking data, process logs, and responses from
questionnaires. The results offer interesting insights regarding the
effectiveness of the approach in aiding decision-making, decreasing perceived
workload, and increasing situational awareness for the scenarios considered.
Additionally, the results provide valuable insights into differences in how
individual participants gather information when using the system.
These findings are particularly relevant when predicting the
overall performance of the individual participant and their capacity to
successfully handle a plant upset and the alarms connected to it using process
and human-machine interaction logs in real-time. These predictions enable the
development of more effective intervention strategies. | [
"cs.AI",
"cs.HC",
"cs.LG",
"cs.MA",
"cs.SY",
"eess.SY"
] | false |