Schema (column: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
categories: null
doi: null
id: 2407.09276
year: null
venue: null
link: http://arxiv.org/pdf/2407.09276v1
updated: 2024-07-12T14:09:40Z
published: 2024-07-12T14:09:40Z
title: H2O-Danube3 Technical Report
We present H2O-Danube3, a series of small language models consisting of H2O-Danube3-4B, trained on 6T tokens, and H2O-Danube3-500M, trained on 4T tokens. Our models are pre-trained on high-quality web data consisting of primarily English tokens in three stages with different data mixes before final supervised tuning for the chat version. The models exhibit highly competitive metrics across a multitude of academic, chat, and fine-tuning benchmarks. Thanks to its compact architecture, H2O-Danube3 can be run efficiently on a modern smartphone, enabling local inference and rapid processing capabilities even on mobile devices. We make all models openly available under the Apache 2.0 license, further democratizing LLMs to a wider audience economically.
authors: Pascal Pfeiffer, Philipp Singer, Yauhen Babakhin, Gabor Fodor, Nischay Dhankhar, Sri Satish Ambati
categories: null
doi: null
id: 2407.09297
year: null
venue: null
link: http://arxiv.org/pdf/2407.09297v1
updated: 2024-07-12T14:30:41Z
published: 2024-07-12T14:30:41Z
title: Learning Distances from Data with Normalizing Flows and Score Matching
Density-based distances (DBDs) offer an elegant solution to the problem of metric learning. By defining a Riemannian metric which increases with decreasing probability density, shortest paths naturally follow the data manifold and points are clustered according to the modes of the data. We show that existing methods to estimate Fermat distances, a particular choice of DBD, suffer from poor convergence in both low and high dimensions due to i) inaccurate density estimates and ii) reliance on graph-based paths which are increasingly rough in high dimensions. To address these issues, we propose learning the densities using a normalizing flow, a generative model with tractable density estimation, and employing a smooth relaxation method using a score model initialized from a graph-based proposal. Additionally, we introduce a dimension-adapted Fermat distance that exhibits more intuitive behavior when scaled to high dimensions and offers better numerical properties. Our work paves the way for practical use of density-based distances, especially in high-dimensional spaces.
authors: Peter Sorrenson, Daniel Behrend-Uriarte, Christoph Schnörr, Ullrich Köthe
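The graph-based estimator this abstract critiques is easy to sketch. Below is a hedged illustration, not the authors' code: a kNN graph whose edge costs are scaled by an inverse power of an estimated density, so shortest paths approximate a Fermat-style density-based distance. The kernel density estimate stands in for the paper's normalizing flow, and every parameter value is illustrative.

import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree
from sklearn.neighbors import KernelDensity

def fermat_distances(X, k=10, beta=1.0, bandwidth=0.5):
    """All-pairs graph-based Fermat-style distances over a kNN graph."""
    n = len(X)
    # density model; the paper learns this with a normalizing flow instead
    dens = np.exp(KernelDensity(bandwidth=bandwidth).fit(X).score_samples(X))
    dist, idx = cKDTree(X).query(X, k=k + 1)  # k nearest neighbours (self included)
    W = np.full((n, n), np.inf)
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            # edge cost inflates where density is low, so shortest paths
            # prefer to run through high-density regions of the data
            W[i, j] = W[j, i] = d * (dens[i] * dens[j]) ** (-beta / 2)
    return shortest_path(W, method="D", directed=False)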
categories: null
doi: null
id: 2407.09324
year: null
venue: null
link: http://arxiv.org/pdf/2407.09324v1
updated: 2024-07-12T15:01:09Z
published: 2024-07-12T15:01:09Z
title: Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization
Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source, thus embedding privacy as a core consideration in FL architectures, whether centralized or decentralized. Contrasting with recent findings by Pasquini et al., which suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models, our study provides compelling evidence to the contrary. We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection - both theoretically and empirically - compared to centralized approaches. The challenge of quantifying privacy loss through iterative processes has traditionally constrained the theoretical exploration of FL protocols. We overcome this by conducting a pioneering in-depth information-theoretical privacy analysis for both frameworks. Our analysis, considering both eavesdropping and passive adversary models, successfully establishes bounds on privacy leakage. We show information theoretically that the privacy loss in decentralized FL is upper bounded by the loss in centralized FL. Compared to the centralized case where local gradients of individual participants are directly revealed, a key distinction of optimization-based decentralized FL is that the relevant information includes differences of local gradients over successive iterations and the aggregated sum of different nodes' gradients over the network. This information complicates the adversary's attempt to infer private data. To bridge our theoretical insights with practical applications, we present detailed case studies involving logistic regression and deep neural networks. These examples demonstrate that while privacy leakage remains comparable in simpler models, complex models like deep neural networks exhibit lower privacy risks under decentralized FL.
authors: Wenrui Yu, Qiongxiu Li, Milan Lopuhaä-Zwakenberg, Mads Græsbøll Christensen, Richard Heusdens
categories: null
doi: null
id: 2407.09336
year: null
venue: null
link: http://arxiv.org/pdf/2407.09336v1
updated: 2024-07-12T15:13:16Z
published: 2024-07-12T15:13:16Z
title: Guidelines for Augmentation Selection in Contrastive Learning for Time Series Classification
Self-supervised contrastive learning has become a key technique in deep learning, particularly in time series analysis, due to its ability to learn meaningful representations without explicit supervision. Augmentation is a critical component in contrastive learning, where different augmentations can dramatically impact performance, sometimes influencing accuracy by over 30%. However, augmentations are predominantly selected empirically, which can be suboptimal, or by grid search, which is time-consuming. In this paper, we establish a principled framework for selecting augmentations based on dataset characteristics such as trend and seasonality. Specifically, we construct 12 synthetic datasets incorporating trend, seasonality, and integration weights. We then evaluate the effectiveness of 8 different augmentations across these synthetic datasets, thereby inducing generalizable associations between time series characteristics and augmentation efficiency. Additionally, we evaluate the induced associations across 6 real-world datasets encompassing domains such as activity recognition, disease diagnosis, traffic monitoring, electricity usage, mechanical fault prognosis, and finance. These real-world datasets are diverse, covering a range from 1 to 12 channels, 2 to 10 classes, sequence lengths of 14 to 1280, and data frequencies from 250 Hz to daily intervals. The experimental results show that our proposed trend-seasonality-based augmentation recommendation algorithm can accurately identify the effective augmentations for a given time series dataset, achieving an average Recall@3 of 0.667, outperforming baselines. Our work provides guidance for studies employing contrastive learning in time series analysis, with wide-ranging applications. All the code, datasets, and analysis results will be released at https://github.com/DL4mHealth/TS-Contrastive-Augmentation-Recommendation.
authors: Ziyu Liu, Azadeh Alavi, Minyi Li, Xiang Zhang
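For reference, the Recall@3 reported above is the standard top-k recall. A minimal sketch, assuming the conventional definition (the paper may weight or average differently):

def recall_at_k(recommended, relevant, k=3):
    """Fraction of the truly effective augmentations found in the top-k recommendations."""
    return len(set(recommended[:k]) & set(relevant)) / len(relevant)

# e.g. 2 of 3 effective augmentations appear in the top 3 -> 0.667
print(recall_at_k(["jitter", "scaling", "permutation"], {"jitter", "scaling", "warping"}))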
categories: null
doi: null
id: 2407.09357
year: null
venue: null
link: http://arxiv.org/pdf/2407.09357v2
updated: 2024-07-15T14:10:13Z
published: 2024-07-12T15:32:44Z
title: Any-Property-Conditional Molecule Generation with Self-Criticism using Spanning Trees
Generating novel molecules is challenging, with most representations leading to generative models producing many invalid molecules. Spanning Tree-based Graph Generation (STGG) is a promising approach to ensure the generation of valid molecules, outperforming state-of-the-art SMILES and graph diffusion models for unconditional generation. In the real world, we want to be able to generate molecules conditional on one or multiple desired properties rather than unconditionally. Thus, in this work, we extend STGG to multi-property-conditional generation. Our approach, STGG+, incorporates a modern Transformer architecture, random masking of properties during training (enabling conditioning on any subset of properties and classifier-free guidance), an auxiliary property-prediction loss (allowing the model to self-criticize molecules and select the best ones), and other improvements. We show that STGG+ achieves state-of-the-art performance on in-distribution and out-of-distribution conditional generation, and reward maximization.
authors: Alexia Jolicoeur-Martineau, Aristide Baratin, Kisoo Kwon, Boris Knyazev, Yan Zhang
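The random property masking described above is simple to illustrate. A minimal sketch, assuming continuous property vectors and a sentinel mask value (both assumptions; STGG+ operates on tokenized molecule sequences): each property is dropped independently during training, so the model learns every conditional subset and supports classifier-free guidance.

import torch

def mask_properties(props, p_drop=0.5, mask_value=0.0):
    """Independently drop each conditioning property during training."""
    keep = torch.rand_like(props) > p_drop
    masked = torch.where(keep, props, torch.full_like(props, mask_value))
    return masked, keep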
categories: null
doi: null
id: 2407.09360
year: null
venue: null
link: http://arxiv.org/pdf/2407.09360v1
updated: 2024-07-12T15:37:05Z
published: 2024-07-12T15:37:05Z
title: Novel clustered federated learning based on local loss
This paper proposes LCFL, a novel clustering metric for evaluating clients' data distributions in federated learning. LCFL aligns with federated learning requirements, accurately assessing client-to-client variations in data distribution. It offers advantages over existing clustered federated learning methods, addressing privacy concerns, improving applicability to non-convex models, and providing more accurate classification results. LCFL does not require prior knowledge of clients' data distributions. We provide a rigorous mathematical analysis, demonstrating the correctness and feasibility of our framework. Numerical experiments with neural network instances highlight the superior performance of LCFL over baselines on several clustered federated learning benchmarks.
authors: Endong Gu, Yongxin Chen, Hao Wen, Xingju Cai, Deren Han
categories: null
doi: null
id: 2407.09370
year: null
venue: null
link: http://arxiv.org/pdf/2407.09370v1
updated: 2024-07-12T15:51:53Z
published: 2024-07-12T15:51:53Z
title: Learning High-Frequency Functions Made Easy with Sinusoidal Positional Encoding
Fourier features based positional encoding (PE) is commonly used in machine learning tasks that involve learning high-frequency features from low-dimensional inputs, such as 3D view synthesis and time series regression with neural tangent kernels. Despite their effectiveness, existing PEs require manual, empirical adjustment of crucial hyperparameters, specifically the Fourier features, tailored to each unique task. Further, PEs face challenges in efficiently learning high-frequency functions, particularly in tasks with limited data. In this paper, we introduce sinusoidal PE (SPE), designed to efficiently learn adaptive frequency features closely aligned with the true underlying function. Our experiments demonstrate that SPE, without hyperparameter tuning, consistently achieves enhanced fidelity and faster training across various tasks, including 3D view synthesis, Text-to-Speech generation, and 1D regression. SPE is implemented as a direct replacement for existing PEs. Its plug-and-play nature lets numerous tasks easily adopt and benefit from SPE.
authors: Chuanhao Sun, Zhihang Yuan, Kai Xu, Luo Mai, Siddharth N, Shuo Chen, Mahesh K. Marina
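A hedged sketch of the idea behind SPE: make the Fourier frequencies of the positional encoding trainable so they adapt to the target function, rather than fixing them by hand. The module name and initialization are assumptions, not the authors' implementation.

import math
import torch
import torch.nn as nn

class LearnableSinusoidalPE(nn.Module):
    def __init__(self, in_dim, n_freqs=64):
        super().__init__()
        # trainable frequency matrix, adapted jointly with the downstream model
        self.freqs = nn.Parameter(torch.randn(in_dim, n_freqs))

    def forward(self, x):                          # x: (batch, in_dim)
        phase = 2 * math.pi * x @ self.freqs       # (batch, n_freqs)
        return torch.cat([phase.sin(), phase.cos()], dim=-1)

pe = LearnableSinusoidalPE(in_dim=3)
features = pe(torch.rand(8, 3))                    # (8, 128) encoded coordinates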
categories: null
doi: null
id: 2407.09373
year: null
venue: null
link: http://arxiv.org/pdf/2407.09373v1
updated: 2024-07-12T15:53:26Z
published: 2024-07-12T15:53:26Z
title: Towards Personalised Patient Risk Prediction Using Temporal Hospital Data Trajectories
Quantifying a patient's health status provides clinicians with insight into patient risk, and the ability to better triage and manage resources. Early Warning Scores (EWS) are widely deployed to measure overall health status, and risk of adverse outcomes, in hospital patients. However, current EWS are limited both by their lack of personalisation and use of static observations. We propose a pipeline that groups intensive care unit patients by the trajectories of observations data throughout their stay as a basis for the development of personalised risk predictions. Feature importance is considered to provide model explainability. Using the MIMIC-IV dataset, six clusters were identified, capturing differences in disease codes, observations, lengths of admissions and outcomes. Applying the pipeline to data from just the first four hours of each ICU stay assigns the majority of patients to the same cluster as when the entire stay duration is considered. In-hospital mortality prediction models trained on individual clusters had higher F1 score performance in five of the six clusters when compared against the unclustered patient cohort. The pipeline could form the basis of a clinical decision support tool, working to improve the clinical characterisation of risk groups and the early detection of patient deterioration.
authors: Thea Barnes, Enrico Werner, Jeffrey N. Clark, Raul Santos-Rodriguez
categories: null
doi: null
id: 2407.09375
year: null
venue: null
link: http://arxiv.org/pdf/2407.09375v1
updated: 2024-07-12T15:56:11Z
published: 2024-07-12T15:56:11Z
title: HiPPO-Prophecy: State-Space Models can Provably Learn Dynamical Systems in Context
This work explores the in-context learning capabilities of State Space Models (SSMs) and presents, to the best of our knowledge, the first theoretical explanation of a possible underlying mechanism. We introduce a novel weight construction for SSMs, enabling them to predict the next state of any dynamical system after observing previous states without parameter fine-tuning. This is accomplished by extending the HiPPO framework to demonstrate that continuous SSMs can approximate the derivative of any input signal. Specifically, we find an explicit weight construction for continuous SSMs and provide an asymptotic error bound on the derivative approximation. The discretization of this continuous SSM subsequently yields a discrete SSM that predicts the next state. Finally, we demonstrate the effectiveness of our parameterization empirically. This work should be an initial step toward understanding how sequence models based on SSMs learn in context.
authors: Federico Arangath Joseph, Kilian Haefeli, Noah Liniger, Caglar Gulcehre
categories: null
doi: null
id: 2407.09378
year: null
venue: null
link: http://arxiv.org/pdf/2407.09378v1
updated: 2024-07-12T15:56:33Z
published: 2024-07-12T15:56:33Z
title: Graph Neural Network Causal Explanation via Neural Causal Models
Graph neural network (GNN) explainers identify the important subgraph that ensures the prediction for a given graph. Until now, almost all GNN explainers are based on association, which is prone to spurious correlations. We propose {name}, a GNN causal explainer via causal inference. Our explainer is based on the observation that a graph often consists of a causal underlying subgraph. {name} includes three main steps: 1) It builds the causal structure and the corresponding structural causal model (SCM) for a graph, which enables cause-effect calculation among nodes. 2) Directly calculating the cause-effect in real-world graphs is computationally challenging; we therefore draw on the recent neural causal model (NCM), a special type of SCM that is trainable, and design customized NCMs for GNNs. By training these GNN NCMs, the cause-effect can be easily calculated. 3) It uncovers the subgraph that causally explains the GNN predictions via the optimized GNN-NCMs. Evaluation results on multiple synthetic and real-world graphs validate that {name} significantly outperforms existing GNN explainers in exact ground-truth explanation identification.
authors: Arman Behnam, Binghui Wang
categories: null
doi: null
id: 2407.09381
year: null
venue: null
link: http://arxiv.org/pdf/2407.09381v1
updated: 2024-07-12T16:03:58Z
published: 2024-07-12T16:03:58Z
title: The Effectiveness of Curvature-Based Rewiring and the Role of Hyperparameters in GNNs Revisited
Message passing is the dominant paradigm in Graph Neural Networks (GNNs). The efficiency of message passing, however, can be limited by the topology of the graph. This happens when information is lost during propagation due to being oversquashed when travelling through bottlenecks. To remedy this, recent efforts have focused on graph rewiring techniques, which disconnect the input graph originating from the data and the computational graph, on which message passing is performed. A prominent approach for this is to use discrete graph curvature measures, of which several variants have been proposed, to identify and rewire around bottlenecks, facilitating information propagation. While oversquashing has been demonstrated in synthetic datasets, in this work we reevaluate the performance gains that curvature-based rewiring brings to real-world datasets. We show that in these datasets, edges selected during the rewiring process are not in line with theoretical criteria identifying bottlenecks. This implies they do not necessarily oversquash information during message passing. Subsequently, we demonstrate that SOTA accuracies on these datasets are outliers originating from sweeps of hyperparameters -- both the ones for training and dedicated ones related to the rewiring algorithm -- instead of consistent performance gains. In conclusion, our analysis nuances the effectiveness of curvature-based rewiring in real-world datasets and brings a new perspective on the methods to evaluate GNN accuracy improvements.
authors: Floriano Tori, Vincent Holst, Vincent Ginis
categories: null
doi: null
id: 2407.09387
year: null
venue: null
link: http://arxiv.org/pdf/2407.09387v1
updated: 2024-07-12T16:07:53Z
published: 2024-07-12T16:07:53Z
title: Meta-Analysis with Untrusted Data
[See paper for full abstract] Meta-analysis is a crucial tool for answering scientific questions. It is usually conducted on a relatively small amount of "trusted" data -- ideally from randomized, controlled trials -- which allow causal effects to be reliably estimated with minimal assumptions. We show how to answer causal questions much more precisely by making two changes. First, we incorporate untrusted data drawn from large observational databases, related scientific literature and practical experience -- without sacrificing rigor or introducing strong assumptions. Second, we train richer models capable of handling heterogeneous trials, addressing a long-standing challenge in meta-analysis. Our approach is based on conformal prediction, which fundamentally produces rigorous prediction intervals, but doesn't handle indirect observations: in meta-analysis, we observe only noisy effects due to the limited number of participants in each trial. To handle noise, we develop a simple, efficient version of fully-conformal kernel ridge regression, based on a novel condition called idiocentricity. We introduce noise-correcting terms in the residuals and analyze their interaction with a "variance shaving" technique. In multiple experiments on healthcare datasets, our algorithms deliver tighter, sounder intervals than traditional ones. This paper charts a new course for meta-analysis and evidence-based medicine, where heterogeneity and untrusted data are embraced for more nuanced and precise predictions.
authors: Shiva Kaul, Geoffrey J. Gordon
categories: null
doi: null
id: 2407.09415
year: null
venue: null
link: http://arxiv.org/pdf/2407.09415v1
updated: 2024-07-12T16:44:03Z
published: 2024-07-12T16:44:03Z
title: A Benchmark Environment for Offline Reinforcement Learning in Racing Games
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL) by eliminating the need for continuous environmental interactions. ORL exploits a dataset of pre-collected transitions and thus expands the range of application of RL to tasks in which excessive environment queries increase training time and decrease efficiency, such as in modern AAA games. This paper introduces OfflineMania, a novel environment for ORL research. It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine. The environment simulates a single-agent racing game in which the objective is to complete the track through optimal navigation. We provide a variety of datasets to assess ORL performance. These datasets, created from policies of varying ability and in different sizes, aim to offer a challenging testbed for algorithm development and evaluation. We further establish a set of baselines for a range of Online RL, ORL, and hybrid Offline to Online RL approaches using our environment.
authors: Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov
categories: null
doi: null
id: 2407.09427
year: null
venue: null
link: http://arxiv.org/pdf/2407.09427v1
updated: 2024-07-12T16:54:17Z
published: 2024-07-12T16:54:17Z
title: Flow-Based Generative Emulation of Grids of Stellar Evolutionary Models
We present a flow-based generative approach to emulate grids of stellar evolutionary models. By interpreting the input parameters and output properties of these models as multi-dimensional probability distributions, we train conditional normalizing flows to learn and predict the complex relationships between grid inputs and outputs in the form of conditional joint distributions. Leveraging the expressive power and versatility of these flows, we showcase their ability to emulate a variety of evolutionary tracks and isochrones across a continuous range of input parameters. In addition, we describe a simple Bayesian approach for estimating stellar parameters using these flows and demonstrate its application to asteroseismic datasets of red giants observed by the Kepler mission. By applying this approach to red giants in open clusters NGC 6791 and NGC 6819, we illustrate how large age uncertainties can arise when fitting only to global asteroseismic and spectroscopic parameters without prior information on initial helium abundances and mixing length parameter values. We also conduct inference using the flow at a large scale by determining revised estimates of masses and radii for 15,388 field red giants. These estimates show improved agreement with results from existing grid-based modelling, reveal distinct population-level features in the red clump, and suggest that the masses of Kepler red giants previously determined using the corrected asteroseismic scaling relations have been overestimated by 5-10%.
authors: Marc Hon, Yaguang Li, Joel Ong
categories: null
doi: null
id: 2407.09434
year: null
venue: null
link: http://arxiv.org/pdf/2407.09434v1
updated: 2024-07-12T17:09:47Z
published: 2024-07-12T17:09:47Z
title: A Perspective on Foundation Models for the Electric Power Grid
Foundation models (FMs) currently dominate news headlines. They employ advanced deep learning architectures to extract structural information autonomously from vast datasets through self-supervision. The resulting rich representations of complex systems and dynamics can be applied to many downstream applications. Therefore, FMs can find uses in electric power grids, challenged by the energy transition and climate change. In this paper, we call for the development of, and state why we believe in, the potential of FMs for electric grids. We highlight their strengths and weaknesses amidst the challenges of a changing grid. We argue that an FM learning from diverse grid data and topologies could unlock transformative capabilities, pioneering a new approach in leveraging AI to redefine how we manage complexity and uncertainty in the electric grid. Finally, we discuss a power grid FM concept, namely GridFM, based on graph neural networks and show how different downstream tasks benefit.
authors: Hendrik F. Hamann, Thomas Brunschwiler, Blazhe Gjorgiev, Leonardo S. A. Martins, Alban Puech, Anna Varbella, Jonas Weiss, Juan Bernabe-Moreno, Alexandre Blondin Massé, Seong Choi, Ian Foster, Bri-Mathias Hodge, Rishabh Jain, Kibaek Kim, Vincent Mai, François Mirallès, Martin De Montigny, Octavio Ramos-Leaños, Hussein Suprême, Le Xie, El-Nasser S. Youssef, Arnaud Zinflou, Alexander J. Belvi, Ricardo J. Bessa, Bishnu Prasad Bhattari, Johannes Schmude, Stanislav Sobolevsky
categories: null
doi: null
id: 2407.09441
year: null
venue: null
link: http://arxiv.org/pdf/2407.09441v1
updated: 2024-07-12T17:27:43Z
published: 2024-07-12T17:27:43Z
title: The $\mu\mathcal{G}$ Language for Programming Graph Neural Networks
Graph neural networks form a class of deep learning architectures specifically designed to work with graph-structured data. As such, they share the inherent limitations and problems of deep learning, especially regarding the issues of explainability and trustworthiness. We propose $\mu\mathcal{G}$, an original domain-specific language for the specification of graph neural networks that aims to overcome these issues. The language's syntax is introduced, and its meaning is rigorously defined by a denotational semantics. An equivalent characterization in the form of an operational semantics is also provided and, together with a type system, is used to prove the type soundness of $\mu\mathcal{G}$. We show how $\mu\mathcal{G}$ programs can be represented in a more user-friendly graphical visualization, and provide examples of its generality by showing how it can be used to define some of the most popular graph neural network models, or to develop any custom graph processing application.
authors: Matteo Belenchia, Flavio Corradini, Michela Quadrini, Michele Loreti
categories: null
doi: null
id: 2407.09450
year: null
venue: null
link: http://arxiv.org/pdf/2407.09450v1
updated: 2024-07-12T17:34:03Z
published: 2024-07-12T17:34:03Z
title: Human-like Episodic Memory for Infinite Context LLMs
Large language models (LLMs) have shown remarkable capabilities, but still struggle with processing extensive contexts, limiting their ability to maintain coherence and accuracy over long sequences. In contrast, the human brain excels at organising and retrieving episodic experiences across vast temporal scales, spanning a lifetime. In this work, we introduce EM-LLM, a novel approach that integrates key aspects of human episodic memory and event cognition into LLMs, enabling them to effectively handle practically infinite context lengths while maintaining computational efficiency. EM-LLM organises sequences of tokens into coherent episodic events using a combination of Bayesian surprise and graph-theoretic boundary refinement in an on-line fashion. When needed, these events are retrieved through a two-stage memory process, combining similarity-based and temporally contiguous retrieval for efficient and human-like access to relevant information. Experiments on the LongBench dataset demonstrate EM-LLM's superior performance, outperforming the state-of-the-art InfLLM model with an overall relative improvement of 4.3% across various tasks, including a 33% improvement on the PassageRetrieval task. Furthermore, our analysis reveals strong correlations between EM-LLM's event segmentation and human-perceived events, suggesting a bridge between this artificial system and its biological counterpart. This work not only advances LLM capabilities in processing extended contexts but also provides a computational framework for exploring human memory mechanisms, opening new avenues for interdisciplinary research in AI and cognitive science.
authors: Zafeirios Fountas, Martin A Benfeghoul, Adnan Oomerjee, Fenia Christopoulou, Gerasimos Lampouras, Haitham Bou-Ammar, Jun Wang
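The surprise-based segmentation step above can be sketched in a few lines: a token opens a new event when its negative log-likelihood under the model exceeds a running threshold. The thresholding rule here is an assumption, and the paper's graph-theoretic boundary refinement is omitted.

import numpy as np

def segment_by_surprise(token_nll, window=32, gamma=1.0):
    """Return indices where a new episodic event begins."""
    boundaries = [0]
    for t in range(1, len(token_nll)):
        recent = token_nll[max(0, t - window):t]
        # Bayesian-surprise-style test: an unusually unlikely token marks a boundary
        if token_nll[t] > np.mean(recent) + gamma * np.std(recent):
            boundaries.append(t)
    return boundaries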
categories: null
doi: null
id: 2407.09453
year: null
venue: null
link: http://arxiv.org/pdf/2407.09453v1
updated: 2024-07-12T17:37:49Z
published: 2024-07-12T17:37:49Z
title: Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators
Nowadays, increasingly larger Deep Neural Networks (DNNs) are being developed, trained, and utilized. These networks require significant computational resources, putting a strain on both advanced and limited devices. Our solution is to implement \emph{weight block sparsity}, which is a structured sparsity that is friendly to hardware. By zeroing certain sections of the convolution and fully connected layers' parameters of pre-trained DNN models, we can efficiently speed up the DNN's inference process. This results in a smaller memory footprint, faster communication, and fewer operations. Our work presents a vertical system that allows for the training of convolution and matrix multiplication weights to exploit 8x8 block sparsity on a single GPU within a reasonable amount of time. Compilers recognize this sparsity and use it for both data compaction and computation splitting into threads. Blocks like these take full advantage of both spatial and temporal locality, paving the way for fast vector operations and memory reuse. By using this system on a Resnet50 model, we were able to reduce the weights by half with minimal accuracy loss, resulting in a two-times faster inference speed. We will present performance estimates using accurate and complete code generation for AIE2 configuration sets (AMD Versal FPGAs) with Resnet50, Inception V3, and VGG16 to demonstrate the necessary synergy between hardware overlay designs and software stacks for compiling and executing machine learning applications.
authors: Paolo D'Alberto, Taehee Jeong, Akshai Jain, Shreyas Manjunath, Mrinal Sarmah, Samuel Hsu, Yaswanth Raparti, Nitesh Pipralia
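The core pruning operation behind this abstract fits in a short sketch: rank the 8x8 tiles of a weight matrix by magnitude and zero the weakest ones. The block-L1 criterion and the 50% keep ratio are assumptions (the paper reports halving Resnet50's weights but does not specify this exact rule).

import numpy as np

def block_sparsify(W, block=8, keep_ratio=0.5):
    """Zero the lowest-magnitude (block x block) tiles of a weight matrix."""
    h, w = W.shape
    assert h % block == 0 and w % block == 0
    tiles = W.reshape(h // block, block, w // block, block)
    norms = np.abs(tiles).sum(axis=(1, 3))       # L1 norm of every tile
    thresh = np.quantile(norms, 1 - keep_ratio)
    mask = (norms >= thresh)[:, None, :, None]   # keep only the strong tiles
    return (tiles * mask).reshape(h, w)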
categories: null
doi: null
id: 2407.09468
year: null
venue: null
link: http://arxiv.org/pdf/2407.09468v1
updated: 2024-07-12T17:48:36Z
published: 2024-07-12T17:48:36Z
title: Beyond Euclid: An Illustrated Guide to Modern Machine Learning with Geometric, Topological, and Algebraic Structures
The enduring legacy of Euclidean geometry underpins classical machine learning, which, for decades, has been primarily developed for data lying in Euclidean space. Yet, modern machine learning increasingly encounters richly structured data that is inherently non-Euclidean. This data can exhibit intricate geometric, topological and algebraic structure: from the geometry of the curvature of space-time, to topologically complex interactions between neurons in the brain, to the algebraic transformations describing symmetries of physical systems. Extracting knowledge from such non-Euclidean data necessitates a broader mathematical perspective. Echoing the 19th-century revolutions that gave rise to non-Euclidean geometry, an emerging line of research is redefining modern machine learning with non-Euclidean structures. Its goal: generalizing classical methods to unconventional data types with geometry, topology, and algebra. In this review, we provide an accessible gateway to this fast-growing field and propose a graphical taxonomy that integrates recent advances into an intuitive unified framework. We subsequently extract insights into current challenges and highlight exciting opportunities for future development in this field.
authors: Sophia Sanborn, Johan Mathe, Mathilde Papillon, Domas Buracas, Hansen J Lillemark, Christian Shewmake, Abby Bertics, Xavier Pennec, Nina Miolane
categories: null
doi: null
id: 2407.09475
year: null
venue: null
link: http://arxiv.org/pdf/2407.09475v1
updated: 2024-07-12T17:57:00Z
published: 2024-07-12T17:57:00Z
title: Adaptive Prediction Ensemble: Improving Out-of-Distribution Generalization of Motion Forecasting
Deep learning-based trajectory prediction models for autonomous driving often struggle with generalization to out-of-distribution (OOD) scenarios, sometimes performing worse than simple rule-based models. To address this limitation, we propose a novel framework, Adaptive Prediction Ensemble (APE), which integrates deep learning and rule-based prediction experts. A learned routing function, trained concurrently with the deep learning model, dynamically selects the most reliable prediction based on the input scenario. Our experiments on large-scale datasets, including Waymo Open Motion Dataset (WOMD) and Argoverse, demonstrate improvement in zero-shot generalization across datasets. We show that our method outperforms individual prediction models and other variants, particularly in long-horizon prediction and scenarios with a high proportion of OOD data. This work highlights the potential of hybrid approaches for robust and generalizable motion prediction in autonomous driving.
authors: Jinning Li, Jiachen Li, Sangjae Bae, David Isele
categories: null
doi: null
id: 2407.09488
year: null
venue: null
link: http://arxiv.org/pdf/2407.09488v1
updated: 2024-05-17T17:06:19Z
published: 2024-05-17T17:06:19Z
title: Manifold Learning via Memory and Context
Given a memory with infinite capacity, can we solve the learning problem? Apparently, nature has solved this problem as evidenced by the evolution of mammalian brains. Inspired by the organizational principles underlying hippocampal-neocortical systems, we present a navigation-based approach to manifold learning using memory and context. The key insight is to navigate on the manifold and memorize the positions of each route as inductive/design bias of direct-fit-to-nature. We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning via memory (local maps) and context (global indexing). The indexing to the library of local maps within global coordinates is collected by an associative memory serving as the librarian, which mimics the coupling between the hippocampus and the neocortex. In addition to breaking from the notorious bias-variance dilemma and the curse of dimensionality, we discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems. The energy efficiency of navigation-based learning makes it suitable for hardware implementation on non-von Neumann architectures, such as the emerging in-memory computing paradigm, including spiking neural networks and memristor neural networks.
authors: Xin Li
categories: null
doi: null
id: 2407.09497
year: null
venue: null
link: http://arxiv.org/abs/2407.09497v1
updated: 2024-06-09T18:57:23Z
published: 2024-06-09T18:57:23Z
title: Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation
The proliferation of 3D representations, from explicit meshes to implicit neural fields and more, motivates the need for simulators agnostic to representation. We present a data-, mesh-, and grid-free solution for elastic simulation for any object in any geometric representation undergoing large, nonlinear deformations. We note that every standard geometric representation can be reduced to an occupancy function queried at any point in space, and we define a simulator atop this common interface. For each object, we fit a small implicit neural network encoding spatially varying weights that act as a reduced deformation basis. These weights are trained to learn physically significant motions in the object via random perturbations. Our loss ensures we find a weight-space basis that best minimizes deformation energy by stochastically evaluating elastic energies through Monte Carlo sampling of the deformation volume. At runtime, we simulate in the reduced basis and sample the deformations back to the original domain. Our experiments demonstrate the versatility, accuracy, and speed of this approach on data including signed distance functions, point clouds, neural primitives, tomography scans, radiance fields, Gaussian splats, surface meshes, and volume meshes, as well as showing a variety of material energies, contact models, and time integration schemes.
authors: Vismay Modi, Nicholas Sharp, Or Perel, Shinjiro Sueda, David I. W. Levin
categories: null
doi: null
id: 2407.09498
year: null
venue: null
link: http://arxiv.org/pdf/2407.09498v1
updated: 2024-06-12T18:30:03Z
published: 2024-06-12T18:30:03Z
title: OT-VP: Optimal Transport-guided Visual Prompting for Test-Time Adaptation
While Vision Transformers (ViTs) have demonstrated remarkable capabilities in learning representations, their performance is compromised when applied to unseen domains. Previous methods either engage in prompt learning during the training phase or modify model parameters at test time through entropy minimization. The former often overlooks unlabeled target data, while the latter doesn't fully address domain shifts. In this work, our approach, Optimal Transport-guided Test-Time Visual Prompting (OT-VP), handles these problems by leveraging prompt learning at test time to align the target and source domains without accessing the training process or altering pre-trained model parameters. This method involves learning a universal visual prompt for the target domain by optimizing the Optimal Transport distance. With just four prompt tokens learned, OT-VP achieves a $5.0\%$ and $1.5\%$ increase in averaged accuracy across single-source and multi-source settings on three benchmark datasets, which is $1.2\times$ and $1.5\times$ the improvement of the state-of-the-art method, respectively.
authors: Yunbei Zhang, Akshay Mehra, Jihun Hamm
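The test-time objective described above reduces to minimizing an optimal transport cost between prompted target features and source features. A hedged fragment assuming the POT library's torch backend (POT >= 0.8); shapes and uniform weighting are illustrative, and the prompt-update loop is omitted:

import torch
import ot  # POT: Python Optimal Transport

def ot_distance(feats_tgt, feats_src):
    """Exact OT cost between two uniform empirical feature distributions."""
    a = torch.full((len(feats_tgt),), 1.0 / len(feats_tgt))
    b = torch.full((len(feats_src),), 1.0 / len(feats_src))
    M = torch.cdist(feats_tgt, feats_src) ** 2  # squared-Euclidean cost matrix
    return ot.emd2(a, b, M)                     # differentiable w.r.t. the features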
categories: null
doi: null
id: 2407.09499
year: null
venue: null
link: http://arxiv.org/pdf/2407.09499v1
updated: 2024-06-12T21:28:28Z
published: 2024-06-12T21:28:28Z
title: Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences
The rapid progress in generative models has resulted in impressive leaps in generation quality, blurring the lines between synthetic and real data. Web-scale datasets are now prone to the inevitable contamination by synthetic data, directly impacting the training of future generative models. Already, some theoretical results on self-consuming generative models (a.k.a., iterative retraining) have emerged in the literature, showcasing that either model collapse or stability could be possible depending on the fraction of generated data used at each retraining step. However, in practice, synthetic data is often subject to human feedback and curated by users before being used and uploaded online. For instance, many interfaces of popular text-to-image generative models, such as Stable Diffusion or Midjourney, produce several variations of an image for a given query which can eventually be curated by the users. In this paper, we theoretically study the impact of data curation on iterated retraining of generative models and show that it can be seen as an \emph{implicit preference optimization mechanism}. However, unlike standard preference optimization, the generative model does not have access to the reward function or negative samples needed for pairwise comparisons. Moreover, our study doesn't require access to the density function, only to samples. We prove that, if the data is curated according to a reward model, then the expected reward of the iterative retraining procedure is maximized. We further provide theoretical results on the stability of the retraining loop when using a positive fraction of real data at each step. Finally, we conduct illustrative experiments on both synthetic datasets and on CIFAR10 showing that such a procedure amplifies biases of the reward model.
authors: Damien Ferbach, Quentin Bertrand, Avishek Joey Bose, Gauthier Gidel
categories: null
doi: null
id: 2407.09504
year: null
venue: null
link: http://arxiv.org/pdf/2407.09504v1
updated: 2024-06-14T09:47:07Z
published: 2024-06-14T09:47:07Z
title: AI-Based Copyright Detection Of An Image In a Video Using Degree Of Similarity And Image Hashing
The expanse of information available over the internet makes it difficult to identify whether a specific work is a replica or a duplication of a protected work, especially for visual representations. Strategies exist to identify the use of a copyrighted image in a document. We address the remaining issue of a copyrighted image being used in a video: an algorithm that can recognize the degree of similarity of the copyrighted picture used in the video, even for the parts of the video that are not featured prominently, and ultimately perform classification tasks on those frames. Machine learning (ML) and artificial intelligence (AI) are vital to addressing this problem. Numerous organizations have been developing algorithms to screen for the use of copyrighted work. This work concentrates on those algorithms, recognizing patterns within the data, and building a more suitable model for copyrighted image classification and detection. We have used algorithms such as image processing, convolutional neural networks (CNN), and image hashing. Keywords: Copyright, Artificial Intelligence (AI), Copyrighted Image, Convolutional Neural Network (CNN), Image processing, Degree of similarity, Image Hashing.
authors: Ashutosh, Rahul Jashvantbhai Pandya
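As a concrete instance of the image hashing this abstract lists, here is a hedged average-hash ("aHash") sketch; the 8x8 resolution and Hamming-based similarity are conventional choices, not taken from the paper.

import numpy as np
from PIL import Image

def average_hash(path, size=8):
    """64-bit perceptual hash: brightness of each cell vs. the image mean."""
    img = Image.open(path).convert("L").resize((size, size))
    px = np.asarray(img, dtype=np.float32)
    return (px > px.mean()).flatten()

def similarity(h1, h2):
    """Degree of similarity as 1 - normalized Hamming distance."""
    return 1.0 - float(np.mean(h1 != h2))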
categories: null
doi: null
id: 2407.09508
year: null
venue: null
link: http://arxiv.org/pdf/2407.09508v1
updated: 2024-06-15T14:06:00Z
published: 2024-06-15T14:06:00Z
title: Focused State Recognition Using EEG with Eye Movement-Assisted Annotation
With the rapid advancement in machine learning, the recognition and analysis of brain activity based on EEG and eye movement signals have attained a high level of sophistication. Utilizing deep learning models for learning EEG and eye movement features proves effective in classifying brain activities. A focused state indicates intense concentration on a task or thought. Distinguishing focused and unfocused states can be achieved through eye movement behaviors, reflecting variations in brain activities. By calculating binocular focusing point disparity in eye movement signals and integrating relevant EEG features, we propose an annotation method for focused states. The resulting comprehensive dataset, derived from raw data processed through a bio-acquisition device, includes both EEG features and focused labels annotated by eye movements. Extensive training and testing on several deep learning models, particularly the Transformer, yielded a 90.16% accuracy on the subject-dependent experiments. The validity of this approach was demonstrated, with cross-subject experiments, key frequency band and brain region analyses confirming its generalizability and providing physiological explanations.
authors: Tian-Hua Li, Tian-Fang Ma, Dan Peng, Wei-Long Zheng, Bao-Liang Lu
categories: null
doi: null
id: 2407.09514
year: null
venue: null
link: http://arxiv.org/pdf/2407.09514v1
updated: 2024-06-18T07:02:40Z
published: 2024-06-18T07:02:40Z
title: Machine Learning Based Prediction of Proton Conductivity in Metal-Organic Frameworks
Recently, metal-organic frameworks (MOFs) have demonstrated their potential as solid-state electrolytes in proton exchange membrane fuel cells. However, the number of MOFs reported to exhibit proton conductivity remains limited, and the mechanisms underlying this phenomenon are not fully elucidated, complicating the design of proton-conductive MOFs. In response, we developed a comprehensive database of proton-conductive MOFs and applied machine learning techniques to predict their proton conductivity. Our approach included the construction of both descriptor-based and transformer-based models. Notably, the transformer-based transfer learning (Freeze) model performed the best with a mean absolute error (MAE) of 0.91, suggesting that the proton conductivity of MOFs can be estimated within one order of magnitude using this model. Additionally, we employed feature importance and principal component analysis to explore the factors influencing proton conductivity. The insights gained from our database and machine learning model are expected to facilitate the targeted design of proton-conductive MOFs.
authors: Seunghee Han, Byeong Gwan Lee, Dae Woon Lim, Jihan Kim
categories: null
doi: null
id: 2407.09515
year: null
venue: null
link: http://arxiv.org/pdf/2407.09515v1
updated: 2024-06-18T19:36:18Z
published: 2024-06-18T19:36:18Z
title: Rethinking Knee Osteoarthritis Severity Grading: A Few Shot Self-Supervised Contrastive Learning Approach
Knee Osteoarthritis (OA) is a debilitating disease affecting over 250 million people worldwide. Currently, radiologists grade the severity of OA on an ordinal scale from zero to four using the Kellgren-Lawrence (KL) system. Recent studies have raised concern in relation to the subjectivity of the KL grading system, highlighting the requirement for an automated system, while also indicating that five ordinal classes may not be the most appropriate approach for assessing OA severity. This work presents preliminary results of an automated system with a continuous grading scale. This system, namely SS-FewSOME, uses self-supervised pre-training to learn robust representations of the features of healthy knee X-rays. It then assesses the OA severity by the X-rays' distance to the normal representation space. SS-FewSOME initially trains on only 'few' examples of healthy knee X-rays, thus reducing the barriers to clinical implementation by eliminating the need for large training sets and costly expert annotations that existing automated systems require. The work reports promising initial results, obtaining a positive Spearman Rank Correlation Coefficient of 0.43, having had access to only 30 ground truth labels at training time.
authors: Niamh Belton, Misgina Tsighe Hagos, Aonghus Lawlor, Kathleen M. Curran
categories: null
doi: null
id: 2407.09522
year: null
venue: null
link: http://arxiv.org/pdf/2407.09522v1
updated: 2024-06-23T06:58:55Z
published: 2024-06-23T06:58:55Z
title: UQE: A Query Engine for Unstructured Databases
Analytics on structured data is a mature field with many successful methods. However, most real world data exists in unstructured form, such as images and conversations. We investigate the potential of Large Language Models (LLMs) to enable unstructured data analytics. In particular, we propose a new Universal Query Engine (UQE) that directly interrogates and draws insights from unstructured data collections. This engine accepts queries in a Universal Query Language (UQL), a dialect of SQL that provides full natural language flexibility in specifying conditions and operators. The new engine leverages the ability of LLMs to conduct analysis of unstructured data, while also allowing us to exploit advances in sampling and optimization techniques to achieve efficient and accurate query execution. In addition, we borrow techniques from classical compiler theory to better orchestrate the workflow between sampling methods and foundation model calls. We demonstrate the efficiency of UQE on data analytics across different modalities, including images, dialogs and reviews, across a range of useful query types, including conditional aggregation, semantic retrieval and abstraction aggregation.
authors: Hanjun Dai, Bethany Yixin Wang, Xingchen Wan, Bo Dai, Sherry Yang, Azade Nova, Pengcheng Yin, Phitchaya Mangpo Phothilimthana, Charles Sutton, Dale Schuurmans
categories: null
doi: null
id: 2407.09523
year: null
venue: null
link: http://arxiv.org/pdf/2407.09523v1
updated: 2024-06-23T09:49:41Z
published: 2024-06-23T09:49:41Z
title: MuseCL: Predicting Urban Socioeconomic Indicators via Multi-Semantic Contrastive Learning
Predicting socioeconomic indicators within urban regions is crucial for fostering inclusivity, resilience, and sustainability in cities and human settlements. While pioneering studies have attempted to leverage multi-modal data for socioeconomic prediction, jointly exploring their underlying semantics remains a significant challenge. To address the gap, this paper introduces a Multi-Semantic Contrastive Learning (MuseCL) framework for fine-grained urban region profiling and socioeconomic prediction. Within this framework, we initiate the process by constructing contrastive sample pairs for street view and remote sensing images, capitalizing on the similarities in human mobility and Point of Interest (POI) distribution to derive semantic features from the visual modality. Additionally, we extract semantic insights from POI texts embedded within these regions, employing a pre-trained text encoder. To merge the acquired visual and textual features, we devise an innovative cross-modality-based attentional fusion module, which leverages a contrastive mechanism for integration. Experimental results across multiple cities and indicators consistently highlight the superiority of MuseCL, demonstrating an average improvement of 10% in $R^2$ compared to various competitive baseline models. The code of this work is publicly available at https://github.com/XixianYong/MuseCL.
authors: Xixian Yong, Xiao Zhou
categories: null
doi: null
id: 2407.09524
year: null
venue: null
link: http://arxiv.org/abs/2407.09524v1
updated: 2024-06-24T13:31:08Z
published: 2024-06-24T13:31:08Z
title: Geometric Understanding of Discriminability and Transferability for Visual Domain Adaptation
To overcome the restriction of the identical distribution assumption, invariant representation learning for unsupervised domain adaptation (UDA) has made significant advances in the computer vision and pattern recognition communities. In the UDA scenario, the training and test data belong to different domains while the task model is learned to be invariant. Recently, empirical connections between transferability and discriminability have received increasing attention, which is the key to understanding the invariant representations. However, a theoretical study of these abilities and an in-depth analysis of the learned feature structures remain unexplored. In this work, we systematically analyze the essentials of transferability and discriminability from the geometric perspective. Our theoretical results provide insights into understanding the co-regularization relation and prove the possibility of learning these abilities. From the methodological aspect, the abilities are formulated as geometric properties between domain/cluster subspaces (i.e., orthogonality and equivalence) and characterized as the relation between the norms/ranks of multiple matrices. Two optimization-friendly learning principles are derived, which also ensure some intuitive explanations. Moreover, a feasible range for the co-regularization parameters is deduced to balance the learning of geometric structures. Based on the theoretical results, a geometry-oriented model is proposed for enhancing the transferability and discriminability via nuclear norm optimization. Extensive experiment results validate the effectiveness of the proposed model in empirical applications, and verify that the geometric abilities can be sufficiently learned in the derived feasible range.
authors: You-Wei Luo, Chuan-Xian Ren, Xiao-Lin Xu, Qingshan Liu
categories: null
doi: null
id: 2407.09525
year: null
venue: null
link: http://arxiv.org/pdf/2407.09525v1
updated: 2024-06-24T14:58:49Z
published: 2024-06-24T14:58:49Z
title: A Deep Learning Framework for Three Dimensional Shape Reconstruction from Phaseless Acoustic Scattering Far-field Data
The inverse scattering problem is of critical importance in a number of fields, including medical imaging, sonar, sensing, non-destructive evaluation, and several others. The problem of interest can vary from detecting the shape to the constitutive properties of the obstacle. The challenge in both is that this problem is ill-posed, more so when there is limited information. That said, significant effort has been expended over the years in developing solutions to this problem. Here, we use a different approach, one that is founded on data. Specifically, we develop a deep learning framework for shape reconstruction using limited information with single incident wave, single frequency, and phase-less far-field data. This is done by (a) using a compact probabilistic shape latent space, learned by a 3D variational auto-encoder, and (b) a convolutional neural network trained to map the acoustic scattering information to this shape representation. The proposed framework is evaluated on a synthetic 3D particle dataset, as well as ShapeNet, a popular 3D shape recognition dataset. As demonstrated via a number of results, the proposed method is able to produce accurate reconstructions for large batches of complex scatterer shapes (such as airplanes and automobiles), despite the significant variation present within the data.
authors: Doga Dikbayir, Abdel Alsnayyan, Vishnu Naresh Boddeti, Balasubramaniam Shanker, Hasan Metin Aktulga
categories: null
doi: null
id: 2407.09527
year: null
venue: null
link: http://arxiv.org/pdf/2407.09527v1
updated: 2024-06-24T20:55:36Z
published: 2024-06-24T20:55:36Z
title: BitNet b1.58 Reloaded: State-of-the-art Performance Also on Smaller Networks
Recently proposed methods for 1-bit and 1.58-bit quantization aware training investigate the performance and behavior of these methods in the context of large language models, finding state-of-the-art performance for models with more than 3B parameters. In this work, we investigate 1.58-bit quantization for small language and vision models ranging from 100K to 48M parameters. We introduce a variant of BitNet b1.58, which relies on the median rather than the mean in the quantization process. Through extensive experiments we investigate the performance of 1.58-bit models obtained through quantization aware training. We further investigate the robustness of 1.58-bit quantization-aware training to changes in the learning rate and regularization through weight decay, finding different patterns for small language and vision models than previously reported for large language models. Our results showcase that 1.58-bit quantization-aware training provides state-of-the-art performance for small language models when doubling hidden layer sizes and reaches or even surpasses state-of-the-art performance for small vision models of identical size. Ultimately, we demonstrate that 1.58-bit quantization-aware training is a viable and promising approach also for training smaller deep learning networks, facilitating deployment of such models in low-resource use-cases and encouraging future research.
authors: Jacob Nielsen, Peter Schneider-Kamp
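The median-based variant described above is essentially a one-line change to absmean ternary quantization. A minimal sketch, assuming per-tensor scaling; the exact rounding and scaling details are assumptions:

import torch

def quantize_ternary(W, eps=1e-8):
    """Quantize weights to {-1, 0, 1} with a median-based scale."""
    scale = W.abs().median().clamp(min=eps)  # the paper's change: median, not mean
    Wq = (W / scale).round().clamp(-1, 1)
    return Wq, scale                          # dequantize as Wq * scale

Wq, s = quantize_ternary(torch.randn(128, 128))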
categories: null
doi: null
id: 2407.09537
year: null
venue: null
link: http://arxiv.org/pdf/2407.09537v1
updated: 2024-06-26T09:01:35Z
published: 2024-06-26T09:01:35Z
title: ViPro: Enabling and Controlling Video Prediction for Complex Dynamical Scenarios using Procedural Knowledge
We propose a novel architecture design for video prediction in order to utilize procedural domain knowledge directly as part of the computational graph of data-driven models. On the basis of new challenging scenarios we show that state-of-the-art video predictors struggle in complex dynamical settings, and highlight that the introduction of prior process knowledge makes their learning problem feasible. Our approach results in the learning of a symbolically addressable interface between data-driven aspects in the model and our dedicated procedural knowledge module, which we utilize in downstream control tasks.
authors: Patrick Takenaka, Johannes Maucher, Marco F. Huber
categories: null
doi: null
id: 2407.09539
year: null
venue: null
link: http://arxiv.org/pdf/2407.09539v1
updated: 2024-06-26T10:20:01Z
published: 2024-06-26T10:20:01Z
title: Classification of Inkjet Printers based on Droplet Statistics
Knowing the printer model used to print a given document may provide a crucial lead towards identifying counterfeits or conversely verifying the validity of a real document. Inkjet printers produce probabilistic droplet patterns that appear to be distinct for each printer model and as such we investigate the utilization of droplet characteristics including frequency domain features extracted from printed document scans for the classification of the underlying printer model. We collect and publish a dataset of high resolution document scans and show that our extracted features are informative enough to enable a neural network to distinguish not only the printer manufacturer, but also individual printer models.
authors: Patrick Takenaka, Manuel Eberhardinger, Daniel Grießhaber, Johannes Maucher
categories: null
doi: null
id: 2407.09540
year: null
venue: null
link: http://arxiv.org/pdf/2407.09540v1
updated: 2024-06-26T11:05:46Z
published: 2024-06-26T11:05:46Z
title: Prompting Whole Slide Image Based Genetic Biomarker Prediction
Prediction of genetic biomarkers, e.g., microsatellite instability and BRAF in colorectal cancer, is crucial for clinical decision making. In this paper, we propose a whole slide image (WSI) based genetic biomarker prediction method via prompting techniques. Our work aims at addressing the following challenges: (1) extracting foreground instances related to genetic biomarkers from gigapixel WSIs, and (2) modeling the interaction among the fine-grained pathological components in WSIs. Specifically, we leverage large language models to generate medical prompts that serve as prior knowledge in extracting instances associated with genetic biomarkers. We adopt a coarse-to-fine approach to mine biomarker information within the tumor microenvironment. This involves extracting instances related to genetic biomarkers using coarse medical prior knowledge, grouping pathology instances into fine-grained pathological components and mining their interactions. Experimental results on two colorectal cancer datasets show the superiority of our method, achieving 91.49% in AUC for MSI classification. The analysis further shows the clinical interpretability of our method. Code is publicly available at https://github.com/DeepMed-Lab-ECNU/PromptBio.
authors: Ling Zhang, Boxiang Yun, Xingran Xie, Qingli Li, Xinxing Li, Yan Wang
categories: null
doi: null
id: 2407.09545
year: null
venue: null
link: http://arxiv.org/pdf/2407.09545v1
updated: 2024-06-27T08:27:08Z
published: 2024-06-27T08:27:08Z
title: Designing Chaotic Attractors: A Semi-supervised Approach
Chaotic dynamics are ubiquitous in nature and useful in engineering, but their geometric design can be challenging. Here, we propose a method using reservoir computing to generate chaos with a desired shape by providing a periodic orbit as a template, called a skeleton. We exploit a bifurcation of the reservoir to intentionally induce unsuccessful training of the skeleton, revealing inherent chaos. The emergence of this untrained attractor, resulting from the interaction between the skeleton and the reservoir's intrinsic dynamics, offers a novel semi-supervised framework for designing chaos.
authors: Tempei Kabayama, Yasuo Kuniyoshi, Kazuyuki Aihara, Kohei Nakajima
categories: null
doi: null
id: 2407.09550
year: null
venue: null
link: http://arxiv.org/pdf/2407.09550v1
updated: 2024-06-27T14:43:06Z
published: 2024-06-27T14:43:06Z
title: CAPM: Fast and Robust Verification on Maxpool-based CNN via Dual Network
This study uses CAPM (Convex Adversarial Polytope for Maxpool-based CNN) to improve the verified bound for general purpose maxpool-based convolutional neural networks (CNNs) under bounded norm adversarial perturbations. The maxpool function is decomposed as a series of ReLU functions to extend the convex relaxation technique to maxpool functions, by which the verified bound can be efficiently computed through a dual network. The experimental results demonstrate that this technique allows the state-of-the-art verification precision for maxpool-based CNNs and involves a much lower computational cost than current verification methods, such as DeepZ, DeepPoly and PRIMA. This method is also applicable to large-scale CNNs, which previous studies show to be often computationally prohibitively expensive. Under certain circumstances, CAPM is 40 times, 20 times, or twice as fast and gives a significantly higher verification bound (CAPM 98% vs. PRIMA 76%/DeepPoly 73%/DeepZ 8%) as compared to PRIMA/DeepPoly/DeepZ. Furthermore, we additionally present the time complexity of our algorithm as $O(W^2NK)$, where $W$ is the maximum width of the neural network, $N$ is the number of neurons, and $K$ is the size of the maxpool layer's kernel.
authors: Jia-Hau Bai, Chi-Ting Liu, Yu Wang, Fu-Chieh Chang, Pei-Yuan Wu
null
null
2407.09551
null
null
http://arxiv.org/pdf/2407.09551v1
2024-06-27T17:18:58Z
2024-06-27T17:18:58Z
Diminishing Stereotype Bias in Image Generation Model using Reinforcement Learning Feedback
This study addresses gender bias in image generation models using Reinforcement Learning from Artificial Intelligence Feedback (RLAIF) with a novel Denoising Diffusion Policy Optimization (DDPO) pipeline. By employing a pretrained stable diffusion model and a highly accurate gender classification Transformer, the research introduces two reward functions: Rshift for shifting gender imbalances, and Rbalance for achieving and maintaining gender balance. Experiments demonstrate the effectiveness of this approach in mitigating bias without compromising image quality or requiring additional data or prompt modifications. While focusing on gender bias, this work establishes a foundation for addressing various forms of bias in AI systems, emphasizing the need for responsible AI development. Future research directions include extending the methodology to other bias types, enhancing the RLAIF pipeline's robustness, and exploring multi-prompt fine-tuning to further advance fairness and inclusivity in AI.
[ "['Xin Chen' 'Virgile Foussereau']" ]
null
null
2407.09557
null
null
http://arxiv.org/pdf/2407.09557v1
2024-06-29T20:56:58Z
2024-06-29T20:56:58Z
Deep Reinforcement Learning Strategies in Finance: Insights into Asset Holding, Trading Behavior, and Purchase Diversity
Recent deep reinforcement learning (DRL) methods in finance show promising outcomes. However, there is limited research examining the behavior of these DRL algorithms. This paper aims to investigate their tendencies towards holding or trading financial assets as well as purchase diversity. By analyzing their trading behaviors, we provide insights into the decision-making processes of DRL models in finance applications. Our findings reveal that each DRL algorithm exhibits unique trading patterns and strategies, with A2C emerging as the top performer in terms of cumulative rewards. While PPO and SAC engage in significant trades with a limited number of stocks, DDPG and TD3 adopt a more balanced approach. Furthermore, SAC and PPO tend to hold positions for shorter durations, whereas DDPG, A2C, and TD3 display a propensity to remain stationary for extended periods.
[ "['Alireza Mohammadshafie' 'Akram Mirzaeinia' 'Haseebullah Jumakhan'\n 'Amir Mirzaeinia']" ]
null
null
2407.09571
null
null
http://arxiv.org/pdf/2407.09571v1
2024-07-10T22:49:45Z
2024-07-10T22:49:45Z
ImPORTance -- Machine Learning-Driven Analysis of Global Port Significance and Network Dynamics for Improved Operational Efficiency
Seaports play a crucial role in the global economy, and researchers have sought to understand their significance through various studies. In this paper, we aim to explore the common characteristics shared by important ports by analyzing the network of connections formed by vessel movement among them. To accomplish this task, we adopt a bottom-up network construction approach that combines three years' worth of AIS (Automatic Identification System) data from around the world, constructing a Ports Network that represents the connections between different ports. Through this representation, we use machine learning to measure the relative significance of different port features. Our model examined such features and revealed that geographical characteristics and the depth of the port are indicators of a port's significance to the Ports Network. Accordingly, this study employs a data-driven approach and utilizes machine learning to provide a comprehensive understanding of the factors contributing to ports' importance. The outcomes of our work aim to inform decision-making processes related to port development, resource allocation, and infrastructure planning in the industry.
[ "['Emanuele Carlini' 'Domenico Di Gangi' 'Vinicius Monteiro de Lira'\n 'Hanna Kavalionak' 'Gabriel Spadon' 'Amilcar Soares']" ]
null
null
2407.09573
null
null
http://arxiv.org/pdf/2407.09573v1
2024-07-11T16:38:40Z
2024-07-11T16:38:40Z
Have We Reached AGI? Comparing ChatGPT, Claude, and Gemini to Human Literacy and Education Benchmarks
Recent advancements in AI, particularly in large language models (LLMs) like ChatGPT, Claude, and Gemini, have prompted questions about their proximity to Artificial General Intelligence (AGI). This study compares LLM performance on educational benchmarks with Americans' average educational attainment and literacy levels, using data from the U.S. Census Bureau and technical reports. Results show that LLMs significantly outperform human benchmarks in tasks such as undergraduate knowledge and advanced reading comprehension, indicating substantial progress toward AGI. However, true AGI requires broader cognitive assessments. The study highlights the implications for AI development, education, and societal impact, emphasizing the need for ongoing research and ethical considerations.
[ "['Mfon Akpan']" ]
null
null
2407.09577
null
null
http://arxiv.org/pdf/2407.09577v1
2024-07-12T00:37:55Z
2024-07-12T00:37:55Z
Flash normalization: fast RMSNorm for LLMs
RMSNorm is used by many LLMs such as Llama, Mistral, and OpenELM. This paper details FlashNorm, which is an exact but faster implementation of RMSNorm followed by linear layers. See https://huggingface.co/open-machine/FlashNorm for code and more transformer tricks.
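One way such a fusion can work is to fold the RMSNorm gain into the weights of the following linear layer offline, leaving only the normalization itself at runtime. This is a hedged sketch of that folding idea; the exact trick FlashNorm uses is documented in the linked repository and may differ:

```python
import torch

def rmsnorm_then_linear(x, g, W, eps=1e-6):
    # Reference path: RMSNorm with gain g, followed by a linear layer W.
    rms = torch.sqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    return (x / rms * g) @ W.T

def fused_rmsnorm_linear(x, W_folded, eps=1e-6):
    # The elementwise gain has been folded into the linear weight,
    # so only the normalization remains at inference time.
    rms = torch.sqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    return (x / rms) @ W_folded.T

d, out = 64, 32
x = torch.randn(8, d)
g = torch.randn(d)
W = torch.randn(out, d)
W_folded = W * g  # fold the gain into the weight columns, done once offline
assert torch.allclose(rmsnorm_then_linear(x, g, W),
                      fused_rmsnorm_linear(x, W_folded), atol=1e-4)
```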
[ "['Nils Graef' 'Matthew Clapp' 'Andrew Wasielewski']" ]
null
null
2407.09578
null
null
http://arxiv.org/pdf/2407.09578v1
2024-07-12T01:50:07Z
2024-07-12T01:50:07Z
Unsupervised Anomaly Detection Using Diffusion Trend Analysis
Conventional anomaly detection techniques based on reconstruction via denoising diffusion model are widely used due to their ability to identify anomaly locations and shapes with high performance. However, there is a limitation in determining appropriate noise parameters that can degrade anomalies while preserving normal characteristics. Also, due to the volatility of the diffusion model, normal regions can fluctuate considerably during reconstruction, resulting in false detection. In this paper, we propose a method to detect anomalies by analysis of the reconstruction trend depending on the degree of degradation, effectively solving both problems of existing methods. The proposed method is validated on an open dataset for industrial anomaly detection, improving the performance of existing methods on a number of evaluation criteria. With the ease of combination with existing anomaly detection methods, it provides a tradeoff between computational cost and performance, giving it high application potential in the manufacturing industry.
[ "['Eunwoo Kim' 'Un Yang' 'Cheol Lae Roh' 'Stefano Ermon']" ]
null
null
2407.09590
null
null
http://arxiv.org/pdf/2407.09590v1
2024-07-12T17:25:02Z
2024-07-12T17:25:02Z
Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts
By increasing model parameters but activating them sparsely when performing a task, the use of Mixture-of-Experts (MoE) architecture significantly improves the performance of Large Language Models (LLMs) without increasing the inference cost. However, the memory consumption due to the growing number of experts presents a challenge to the deployment of these models in many real-world settings. Our empirical study reveals that some experts encode redundant knowledge during pre-training. We thus propose a method of grouping and pruning similar experts to improve the model's parameter efficiency. We validate the effectiveness of our method by pruning two state-of-the-art MoE models, Mixtral-8x7B and Mixtral-8x22B. Evaluation shows that our method outperforms other model pruning methods on a range of natural language tasks. To facilitate future research, we will release our code and the pruned MoE models.
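A hedged sketch of the grouping-and-pruning idea: experts whose flattened weights are highly similar are greedily grouped, and each group is merged into its mean. The cosine-similarity criterion, threshold, and averaging merge rule here are illustrative assumptions, not the paper's method:

```python
import torch

def group_and_merge_experts(expert_weights, sim_threshold=0.9):
    """Greedily group experts by cosine similarity of their flattened
    weights, then merge each group into its elementwise mean."""
    flat = torch.stack([w.flatten() for w in expert_weights])
    flat = torch.nn.functional.normalize(flat, dim=1)
    sim = flat @ flat.T
    n = len(expert_weights)
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [j for j in range(n)
                 if j not in assigned and sim[i, j] >= sim_threshold]
        assigned.update(group)
        groups.append(group)
    merged = [torch.stack([expert_weights[j] for j in g]).mean(0)
              for g in groups]
    return merged, groups
```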
[ "['Zeliang Zhang' 'Xiaodong Liu' 'Hao Cheng' 'Chenliang Xu' 'Jianfeng Gao']" ]
null
null
2407.09602
null
null
http://arxiv.org/pdf/2407.09602v1
2024-07-12T18:00:02Z
2024-07-12T18:00:02Z
Real-time gravitational-wave inference for binary neutron stars using machine learning
Mergers of binary neutron stars (BNSs) emit signals in both the gravitational-wave (GW) and electromagnetic (EM) spectra. Famously, the 2017 multi-messenger observation of GW170817 led to scientific discoveries across cosmology, nuclear physics, and gravity. Central to these results were the sky localization and distance obtained from GW data, which, in the case of GW170817, helped to identify the associated EM transient, AT 2017gfo, 11 hours after the GW signal. Fast analysis of GW data is critical for directing time-sensitive EM observations; however, due to challenges arising from the length and complexity of signals, it is often necessary to make approximations that sacrifice accuracy. Here, we develop a machine learning approach that performs complete BNS inference in just one second without making any such approximations. This is enabled by a new method for explicit integration of physical domain knowledge into neural networks. Our approach enhances multi-messenger observations by providing (i) accurate localization even before the merger; (ii) improved localization precision by $\sim 30\%$ compared to approximate low-latency methods; and (iii) detailed information on luminosity distance, inclination, and masses, which can be used to prioritize expensive telescope time. Additionally, the flexibility and reduced cost of our method open new opportunities for equation-of-state and waveform systematics studies. Finally, we demonstrate that our method scales to extremely long signals, up to an hour in length, thus serving as a blueprint for data analysis for next-generation ground- and space-based detectors.
[ "['Maximilian Dax' 'Stephen R. Green' 'Jonathan Gair' 'Nihar Gupte'\n 'Michael Pürrer' 'Vivien Raymond' 'Jonas Wildberger' 'Jakob H. Macke'\n 'Alessandra Buonanno' 'Bernhard Schölkopf']" ]
null
null
2407.09618
null
null
http://arxiv.org/pdf/2407.09618v1
2024-07-12T18:04:32Z
2024-07-12T18:04:32Z
The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges
The homophily principle, i.e., nodes with the same labels or similar attributes are more likely to be connected, has been commonly believed to be the main reason for the superiority of Graph Neural Networks (GNNs) over traditional Neural Networks (NNs) on graph-structured data, especially on node-level tasks. However, recent work has identified a non-trivial set of datasets where GNNs' performance compared to the NNs' is not satisfactory. Heterophily, i.e. low homophily, has been considered the main cause of this empirical observation. People have begun to revisit and re-evaluate most existing graph models, including graph transformer and its variants, in the heterophily scenario across various kinds of graphs, e.g. heterogeneous graphs, temporal graphs and hypergraphs. Moreover, numerous graph-related applications are found to be closely related to the heterophily problem. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue. In this survey, we provide a comprehensive review of the latest progress on heterophilic graph learning, including an extensive summary of benchmark datasets and evaluation of homophily metrics on synthetic graphs, meticulous classification of the most updated supervised and unsupervised learning methods, thorough digestion of the theoretical analysis on homophily/heterophily, and broad exploration of the heterophily-related applications. Notably, through detailed experiments, we are the first to categorize benchmark heterophilic datasets into three sub-categories: malignant, benign and ambiguous heterophily. Malignant and ambiguous datasets are identified as the real challenging datasets to test the effectiveness of new models on the heterophily challenge. Finally, we propose several challenges and future directions for heterophilic graph representation learning.
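For concreteness, the simplest homophily metric evaluated in this line of work is the edge homophily ratio: the fraction of edges joining same-label nodes, with low values signaling heterophily. A minimal sketch:

```python
import torch

def edge_homophily(edge_index, labels):
    """Fraction of edges whose endpoints share a label.
    edge_index is a (2, num_edges) tensor of node indices."""
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()

edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
labels = torch.tensor([0, 0, 1, 1])
print(edge_homophily(edge_index, labels))  # 0.5
```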
[ "['Sitao Luan' 'Chenqing Hua' 'Qincheng Lu' 'Liheng Ma' 'Lirong Wu'\n 'Xinyu Wang' 'Minkai Xu' 'Xiao-Wen Chang' 'Doina Precup' 'Rex Ying'\n 'Stan Z. Li' 'Jian Tang' 'Guy Wolf' 'Stefanie Jegelka']" ]
null
null
2407.09628
null
null
http://arxiv.org/pdf/2407.09628v1
2024-07-12T18:29:48Z
2024-07-12T18:29:48Z
Accelerating Electron Dynamics Simulations through Machine Learned Time Propagators
Time-dependent density functional theory (TDDFT) is a widely used method to investigate electron dynamics under various external perturbations such as laser fields. In this work, we present a novel approach to accelerate real time TDDFT based electron dynamics simulations using autoregressive neural operators as time-propagators for the electron density. By leveraging physics-informed constraints and high-resolution training data, our model achieves superior accuracy and computational speed compared to traditional numerical solvers. We demonstrate the effectiveness of our model on a class of one-dimensional diatomic molecules. This method has potential in enabling real-time, on-the-fly modeling of laser-irradiated molecules and materials with varying experimental parameters.
[ "['Karan Shah' 'Attila Cangi']" ]
null
null
2407.09632
null
null
http://arxiv.org/pdf/2407.09632v1
2024-07-12T18:41:07Z
2024-07-12T18:41:07Z
Granger Causality in Extremes
We introduce a rigorous mathematical framework for Granger causality in extremes, designed to identify causal links from extreme events in time series. Granger causality plays a pivotal role in uncovering directional relationships among time-varying variables. While this notion gains heightened importance during extreme and highly volatile periods, state-of-the-art methods primarily focus on causality within the body of the distribution, often overlooking causal mechanisms that manifest only during extreme events. Our framework is designed to infer causality mainly from extreme events by leveraging the causal tail coefficient. We establish equivalences between causality in extremes and other causal concepts, including (classical) Granger causality, Sims causality, and structural causality. We prove other key properties of Granger causality in extremes and show that the framework is especially helpful under the presence of hidden confounders. We also propose a novel inference method for detecting the presence of Granger causality in extremes from data. Our method is model-free, can handle non-linear and high-dimensional time series, outperforms current state-of-the-art methods in all considered setups, both in performance and speed, and was found to uncover coherent effects when applied to financial and extreme weather observations.
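A hedged sketch of the causal tail coefficient the framework leverages: empirically, one averages the rank-transformed values of Y over the k largest observations of X, with values near 1 suggesting that extremes of X coincide with extremes of Y. The estimator below is the standard i.i.d. version, not the paper's time-series variant:

```python
import numpy as np

def causal_tail_coefficient(x, y, k=None):
    """Average empirical-CDF value of y among the k largest
    observations of x; values near 1 hint that extremes of x
    drive extremes of y."""
    n = len(x)
    k = k or int(np.sqrt(n))
    ranks_y = np.argsort(np.argsort(y)) / (n - 1)  # empirical CDF of y
    top_x = np.argsort(x)[-k:]                     # indices of k largest x
    return ranks_y[top_x].mean()

rng = np.random.default_rng(0)
x = rng.pareto(2.0, 5000)
y = x + rng.pareto(2.0, 5000)          # extremes of x propagate to y
print(causal_tail_coefficient(x, y))   # close to 1
```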
[ "['Juraj Bodik' 'Olivier Pasche']" ]
null
null
2407.09642
null
null
http://arxiv.org/pdf/2407.09642v1
2024-07-12T19:03:42Z
2024-07-12T19:03:42Z
Seq-to-Final: A Benchmark for Tuning from Sequential Distributions to a Final Time Point
Distribution shift over time occurs in many settings. Leveraging historical data is necessary to learn a model for the last time point when limited data is available in the final period, yet few methods have been developed specifically for this purpose. In this work, we construct a benchmark with different sequences of synthetic shifts to evaluate the effectiveness of 3 classes of methods that 1) learn from all data without adapting to the final period, 2) learn from historical data with no regard to the sequential nature and then adapt to the final period, and 3) leverage the sequential nature of historical data when tailoring a model to the final period. We call this benchmark Seq-to-Final to highlight the focus on using a sequence of time periods to learn a model for the final time point. Our synthetic benchmark allows users to construct sequences with different types of shift and compare different methods. We focus on image classification tasks using CIFAR-10 and CIFAR-100 as the base images for the synthetic sequences. We also evaluate the same methods on the Portraits dataset to explore the relevance to real-world shifts over time. Finally, we create a visualization to contrast the initializations and updates from different methods at the final time step. Our results suggest that, for the sequences in our benchmark, methods that disregard the sequential structure and adapt to the final time point tend to perform well. The approaches we evaluate that leverage the sequential nature do not offer any improvement. We hope that this benchmark will inspire the development of new algorithms that are better at leveraging sequential historical data or a deeper understanding of why methods that disregard the sequential nature are able to perform well.
[ "['Christina X Ji' 'Ahmed M Alaa' 'David Sontag']" ]
null
null
2407.09645
null
null
http://arxiv.org/pdf/2407.09645v1
2024-07-12T19:04:39Z
2024-07-12T19:04:39Z
Hamilton-Jacobi Reachability in Reinforcement Learning: A Survey
Recent literature has proposed approaches that learn control policies with high performance while maintaining safety guarantees. Synthesizing Hamilton-Jacobi (HJ) reachable sets has become an effective tool for verifying safety and supervising the training of reinforcement learning-based control policies for complex, high-dimensional systems. Previously, HJ reachability was limited to verifying low-dimensional dynamical systems -- this is because the computational complexity of the dynamic programming approach it relied on grows exponentially with the number of system states. To address this limitation, in recent years, there have been methods that compute the reachability value function simultaneously with learning control policies to scale HJ reachability analysis while still maintaining a reliable estimate of the true reachable set. These HJ reachability approximations are used to improve the safety, and even reward performance, of learned control policies and can solve challenging tasks such as those with dynamic obstacles and/or with lidar-based or vision-based observations. In this survey paper, we review the recent developments in the field of HJ reachability estimation in reinforcement learning that would provide a foundational basis for further research into reliability in high-dimensional systems.
[ "['Milan Ganai' 'Sicun Gao' 'Sylvia Herbert']" ]
null
null
2407.09658
null
null
http://arxiv.org/pdf/2407.09658v1
2024-07-12T19:38:42Z
2024-07-12T19:38:42Z
BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning
Federated learning, while being a promising approach for collaborative model training, is susceptible to poisoning attacks due to its decentralized nature. Backdoor attacks, in particular, have shown remarkable stealthiness, as they selectively compromise predictions for inputs containing triggers. Previous endeavors to detect and mitigate such attacks are based on the Independent and Identically Distributed (IID) data assumption where benign model updates exhibit high-level similarity in multiple feature spaces due to IID data. Thus, outliers are detected as backdoor attacks. Nevertheless, non-IID data presents substantial challenges in backdoor attack detection, as the data variety introduces variance among benign models, making outlier detection-based mechanisms less effective. We propose a novel distribution-aware anomaly detection mechanism, BoBa, to address this problem. In order to differentiate outliers arising from data variety versus backdoor attack, we propose to break down the problem into two steps: clustering clients utilizing their data distribution followed by a voting-based detection. Based on the intuition that clustering and subsequent backdoor detection can drastically benefit from knowing client data distributions, we propose a novel data distribution inference mechanism. To improve detection robustness, we introduce an overlapping clustering method, where each client is associated with multiple clusters, ensuring that the trustworthiness of a model update is assessed collectively by multiple clusters rather than a single cluster. Through extensive evaluations, we demonstrate that BoBa can reduce the attack success rate to lower than 0.001 while maintaining high main task accuracy across various attack strategies and experimental settings.
[ "['Ning Wang' 'Shanghao Shi' 'Yang Xiao' 'Yimin Chen' 'Y. Thomas Hou'\n 'Wenjing Lou']" ]
null
null
2407.09679
null
null
http://arxiv.org/pdf/2407.09679v1
2024-07-12T20:19:41Z
2024-07-12T20:19:41Z
Physics-Informed Learning of Characteristic Trajectories for Smoke Reconstruction
We delve into the physics-informed neural reconstruction of smoke and obstacles through sparse-view RGB videos, tackling challenges arising from limited observation of complex dynamics. Existing physics-informed neural networks often emphasize short-term physics constraints, leaving the proper preservation of long-term conservation less explored. We introduce Neural Characteristic Trajectory Fields, a novel representation utilizing Eulerian neural fields to implicitly model Lagrangian fluid trajectories. This topology-free, auto-differentiable representation facilitates efficient flow map calculations between arbitrary frames as well as efficient velocity extraction via auto-differentiation. Consequently, it enables end-to-end supervision covering long-term conservation and short-term physics priors. Building on the representation, we propose physics-informed trajectory learning and integration into NeRF-based scene reconstruction. We enable advanced obstacle handling through self-supervised scene decomposition and seamless integrated boundary constraints. Our results showcase the ability to overcome challenges like occlusion uncertainty, density-color ambiguity, and static-dynamic entanglements. Code and sample tests are at https://github.com/19reborn/PICT_smoke.
[ "['Yiming Wang' 'Siyu Tang' 'Mengyu Chu']" ]
null
null
2407.09685
null
null
http://arxiv.org/pdf/2407.09685v1
2024-07-12T20:55:59Z
2024-07-12T20:55:59Z
Accelerating the inference of string generation-based chemical reaction models for industrial applications
Template-free SMILES-to-SMILES translation models for reaction prediction and single-step retrosynthesis are of interest for industrial applications in computer-aided synthesis planning systems due to their state-of-the-art accuracy. However, they suffer from slow inference speed. We present a method to accelerate inference in autoregressive SMILES generators through speculative decoding by copying query string subsequences into target strings in the right places. We apply our method to the molecular transformer implemented in PyTorch Lightning and achieve over 3X faster inference in reaction prediction and single-step retrosynthesis, with no loss in accuracy.
[ "['Mikhail Andronov' 'Natalia Andronova' 'Michael Wand'\n 'Jürgen Schmidhuber' 'Djork-Arnè Clevert']" ]
null
null
2407.09690
null
null
http://arxiv.org/pdf/2407.09690v1
2024-07-12T21:20:44Z
2024-07-12T21:20:44Z
Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses
We revisit the problem of federated learning (FL) with private data from people who do not trust the server or other silos/clients. In this context, every silo (e.g. hospital) has data from several people (e.g. patients) and needs to protect the privacy of each person's data (e.g. health records), even if the server and/or other silos try to uncover this data. Inter-Silo Record-Level Differential Privacy (ISRL-DP) prevents each silo's data from being leaked, by requiring that silo i's communications satisfy item-level differential privacy. Prior work arXiv:2203.06735 characterized the optimal excess risk bounds for ISRL-DP algorithms with homogeneous (i.i.d.) silo data and convex loss functions. However, two important questions were left open: (1) Can the same excess risk bounds be achieved with heterogeneous (non-i.i.d.) silo data? (2) Can the optimal risk bounds be achieved with fewer communication rounds? In this paper, we give positive answers to both questions. We provide novel ISRL-DP FL algorithms that achieve the optimal excess risk bounds in the presence of heterogeneous silo data. Moreover, our algorithms are more communication-efficient than the prior state-of-the-art. For smooth loss functions, our algorithm achieves the optimal excess risk bound and has communication complexity that matches the non-private lower bound. Additionally, our algorithms are more computationally efficient than the previous state-of-the-art.
[ "['Changyu Gao' 'Andrew Lowy' 'Xingyu Zhou' 'Stephen J. Wright']" ]
null
null
2407.09691
null
null
http://arxiv.org/pdf/2407.09691v1
2024-07-12T21:20:57Z
2024-07-12T21:20:57Z
EVOLVE: Predicting User Evolution and Network Dynamics in Social Media Using Fine-Tuned GPT-like Model
Social media platforms are extensively used for sharing personal emotions, daily activities, and various life events, keeping people updated with the latest happenings. From the moment a user creates an account, they continually expand their network of friends or followers, freely interacting with others by posting, commenting, and sharing content. Over time, user behavior evolves based on demographic attributes and the networks they establish. In this research, we propose a predictive method to understand how a user evolves on social media throughout their life and to forecast the next stage of their evolution. We fine-tune a GPT-like decoder-only model (we named it E-GPT: Evolution-GPT) to predict the future stages of a user's evolution in online social media. We evaluate the performance of these models and demonstrate how user attributes influence changes within their network by predicting future connections and shifts in user activities on social media, which also addresses other social media challenges such as recommendation systems.
[ "['Ismail Hossain' 'Md Jahangir Alam' 'Sai Puppala' 'Sajedul Talukder']" ]
null
null
2407.09693
null
null
http://arxiv.org/pdf/2407.09693v1
2024-07-12T21:26:21Z
2024-07-12T21:26:21Z
A Mathematical Framework, a Taxonomy of Modeling Paradigms, and a Suite of Learning Techniques for Neural-Symbolic Systems
The field of Neural-Symbolic (NeSy) systems is growing rapidly. Proposed approaches show great promise in achieving symbiotic unions of neural and symbolic methods. However, each NeSy system differs in fundamental ways. There is a pressing need for a unifying theory to illuminate the commonalities and differences in approaches and enable further progress. In this paper, we introduce Neural-Symbolic Energy-Based Models (NeSy-EBMs), a unifying mathematical framework for discriminative and generative modeling with probabilistic and non-probabilistic NeSy approaches. We utilize NeSy-EBMs to develop a taxonomy of modeling paradigms focusing on a system's neural-symbolic interface and reasoning capabilities. Additionally, we introduce a suite of learning techniques for NeSy-EBMs. Importantly, NeSy-EBMs allow the derivation of general expressions for gradients of prominent learning losses, and we provide four learning approaches that leverage methods from multiple domains, including bilevel and stochastic policy optimization. Finally, we present Neural Probabilistic Soft Logic (NeuPSL), an open-source NeSy-EBM library designed for scalability and expressivity, facilitating real-world application of NeSy systems. Through extensive empirical analysis across multiple datasets, we demonstrate the practical advantages of NeSy-EBMs in various tasks, including image classification, graph node labeling, autonomous vehicle situation awareness, and question answering.
[ "['Charles Dickens' 'Connor Pryor' 'Changyu Gao' 'Alon Albalak'\n 'Eriq Augustine' 'William Wang' 'Stephen Wright' 'Lise Getoor']" ]
null
null
2407.09698
null
null
http://arxiv.org/pdf/2407.09698v1
2024-07-12T21:42:51Z
2024-07-12T21:42:51Z
RIO-CPD: A Riemannian Geometric Method for Correlation-aware Online Change Point Detection
The objective of change point detection is to identify abrupt changes at potentially multiple points within a data sequence. This task is particularly challenging in the online setting where various types of changes can occur, including shifts in both the marginal and joint distributions of the data. This paper tackles these challenges by sequentially tracking correlation matrices on the Riemannian geometry, where the geodesic distances accurately capture the development of correlations. We propose Rio-CPD, a non-parametric correlation-aware online change point detection framework that combines the Riemannian geometry of the manifold of symmetric positive definite matrices and the cumulative sum statistic (CUSUM) for detecting change points. Rio-CPD enhances CUSUM by computing the geodesic distance from present observations to the Fréchet mean of previous observations. With a careful choice of metrics on the Riemannian manifold, Rio-CPD is simple and computationally efficient. Experimental results on both synthetic and real-world datasets demonstrate that Rio-CPD outperforms existing methods in detection accuracy and efficiency.
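A minimal sketch of the two ingredients, assuming the affine-invariant metric on SPD matrices (one of several possible metric choices the paper discusses) and a plain one-sided CUSUM over the resulting geodesic distances:

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def spd_geodesic_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), "fro")

def cusum(distances, drift=0.0):
    """One-sided CUSUM over a stream of geodesic distances from each new
    correlation matrix to a reference (e.g. the Frechet mean of past
    windows); an alarm fires when the statistic crosses a threshold."""
    s, stats = 0.0, []
    for d in distances:
        s = max(0.0, s + d - drift)
        stats.append(s)
    return np.array(stats)
```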
[ "['Chengyuan Deng' 'Zhengzhang Chen' 'Xujiang Zhao' 'Haoyu Wang'\n 'Junxiang Wang' 'Haifeng Chen' 'Jie Gao']" ]
null
null
2407.09702
null
null
http://arxiv.org/pdf/2407.09702v1
2024-07-12T21:56:24Z
2024-07-12T21:56:24Z
Investigating the Interplay of Prioritized Replay and Generalization
Experience replay is ubiquitous in reinforcement learning, to reuse past data and improve sample efficiency. Though a variety of smart sampling schemes have been introduced to improve performance, uniform sampling by far remains the most common approach. One exception is Prioritized Experience Replay (PER), where sampling is done proportionally to TD errors, inspired by the success of prioritized sweeping in dynamic programming. The original work on PER showed improvements in Atari, but follow-up results are mixed. In this paper, we investigate several variations on PER, to attempt to understand where and when PER may be useful. Our findings in prediction tasks reveal that while PER can improve value propagation in tabular settings, behavior is significantly different when combined with neural networks. Certain mitigations -- like delaying target network updates to control generalization and using estimates of expected TD errors in PER to avoid chasing stochasticity -- can avoid large spikes in error with PER and neural networks, but nonetheless generally do not outperform uniform replay. In control tasks, none of the prioritized variants consistently outperform uniform replay.
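For reference, a minimal proportional PER buffer, omitting the sum-tree storage and importance-sampling corrections of the full algorithm:

```python
import numpy as np

class ProportionalReplay:
    """Minimal proportional PER: transitions are sampled with probability
    proportional to |TD error|^alpha."""
    def __init__(self, capacity, alpha=0.6, eps=1e-2):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        p = np.array(self.priorities)
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        return [self.buffer[i] for i in idx], idx
```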
[ "['Parham Mohammad Panahi' 'Andrew Patterson' 'Martha White' 'Adam White']" ]
null
null
2407.09709
null
null
http://arxiv.org/pdf/2407.09709v1
2024-07-12T22:23:51Z
2024-07-12T22:23:51Z
GOFA: A Generative One-For-All Model for Joint Graph Language Modeling
Foundation models, such as Large Language Models (LLMs) or Large Vision Models (LVMs), have emerged as one of the most powerful tools in the respective fields. However, unlike text and image data, graph data do not have a definitive structure, posing great challenges to developing a Graph Foundation Model (GFM). For example, current attempts at designing general graph models either transform graph data into a language format for LLM-based prediction or still train a GNN model with LLM as an assistant. The former can handle unlimited tasks, while the latter captures graph structure much better -- yet, no existing work can achieve both simultaneously. In this paper, we identify three key desirable properties of a GFM: self-supervised pretraining, fluidity in tasks, and graph awareness. To account for these properties, we extend the conventional language modeling to the graph domain and propose a novel generative graph language model GOFA to solve the problem. The model interleaves randomly initialized GNN layers into a frozen pre-trained LLM so that the semantic and structural modeling abilities are organically combined. GOFA is pre-trained on newly proposed graph-level next-word prediction, question-answering, and structural tasks to obtain the above GFM properties. The pre-trained model is further fine-tuned on downstream tasks to obtain task-solving ability. The fine-tuned model is evaluated on various downstream tasks, demonstrating a strong ability to solve structural and contextual problems in zero-shot scenarios. The code is available at https://github.com/JiaruiFeng/GOFA.
[ "['Lecheng Kong' 'Jiarui Feng' 'Hao Liu' 'Chengsong Huang' 'Jiaxin Huang'\n 'Yixin Chen' 'Muhan Zhang']" ]
null
null
2407.09717
null
null
http://arxiv.org/pdf/2407.09717v1
2024-07-12T23:07:37Z
2024-07-12T23:07:37Z
Deep-TEMPEST: Using Deep Learning to Eavesdrop on HDMI from its Unintended Electromagnetic Emanations
In this work, we address the problem of eavesdropping on digital video displays by analyzing the electromagnetic waves that unintentionally emanate from the cables and connectors, particularly HDMI. This problem is known as TEMPEST. Compared to the analog case (VGA), the digital case is harder due to a 10-bit encoding that results in a much larger bandwidth and non-linear mapping between the observed signal and the pixel's intensity. As a result, eavesdropping systems designed for the analog case obtain unclear and difficult-to-read images when applied to digital video. The proposed solution is to recast the problem as an inverse problem and train a deep learning module to map the observed electromagnetic signal back to the displayed image. However, this approach still requires a detailed mathematical analysis of the signal, first to determine the frequency at which to tune, but also to produce training samples without actually needing a real TEMPEST setup. This saves time and avoids the need to obtain these samples, especially if several configurations are being considered. Our focus is on improving the average Character Error Rate in text, and our system improves this rate by over 60 percentage points compared to previous available implementations. The proposed system is based on widely available Software Defined Radio and is fully open-source, seamlessly integrated into the popular GNU Radio framework. We also share the dataset we generated for training, which comprises both simulated and over 1000 real captures. Finally, we discuss some countermeasures to minimize the potential risk of being eavesdropped on by systems designed based on similar principles.
[ "['Santiago Fernández' 'Emilio Martínez' 'Gabriel Varela' 'Pablo Musé'\n 'Federico Larroca']" ]
null
null
2407.09719
null
null
http://arxiv.org/pdf/2407.09719v1
2024-07-12T23:27:33Z
2024-07-12T23:27:33Z
MSEval: A Dataset for Material Selection in Conceptual Design to Evaluate Algorithmic Models
Material selection plays a pivotal role in many industries, from manufacturing to construction. Material selection is usually carried out after several cycles of conceptual design, during which designers iteratively refine the design solution and the intended manufacturing approach. In design research, material selection is typically treated as an optimization problem with a single correct answer. Moreover, it is also often restricted to specific types of objects or design functions, which can make the selection process computationally expensive and time-consuming. In this paper, we introduce MSEval, a novel dataset which is comprised of expert material evaluations across a variety of design briefs and criteria. This data is designed to serve as a benchmark to facilitate the evaluation and modification of machine learning models in the context of material selection for conceptual design.
[ "['Yash Patawari Jain' 'Daniele Grandi' 'Allin Groom' 'Brandon Cramer'\n 'Christopher McComb']" ]
null
null
2407.09722
null
null
http://arxiv.org/pdf/2407.09722v1
2024-07-12T23:29:54Z
2024-07-12T23:29:54Z
Multi-Token Joint Speculative Decoding for Accelerating Large Language Model Inference
Transformer-based Large language models (LLMs) have demonstrated their power in various tasks, but their inference incurs significant time and energy costs. To accelerate LLM inference, speculative decoding uses a smaller model to propose one sequence of tokens, which are subsequently validated in batch by the target large model. Compared with autoregressive decoding, speculative decoding generates the same number of tokens with fewer runs of the large model, hence accelerating the overall inference by $1$-$2\times$. However, greedy decoding is not the optimal decoding algorithm in terms of output perplexity, which is a direct measurement of the effectiveness of a decoding algorithm. An algorithm that has better output perplexity and even better efficiency than speculative decoding can be more useful in practice. To achieve this seemingly contradictory goal, we first introduce multi-token joint greedy decoding (MJGD), which greedily generates multiple tokens at each step based on their joint perplexity. We show that it leads to better perplexity for the whole output. But the computation cost of MJGD is infeasible in practice. So we further propose multi-token joint speculative decoding (MJSD), which approximates and accelerates the MJGD from two aspects: it approximates the joint distribution of the large model with that of a small model, and uses a verification step to guarantee the accuracy of approximation; then it uses beam decoding to accelerate the sequence generation from the joint distribution. Compared with vanilla speculative decoding, MJSD has two advantages: (1) it is an approximation of MJGD, thus achieving better output perplexity; (2) verification with joint likelihood allows it to accept the longest prefix sub-sequence of the draft tokens with valid perplexity, leading to better efficiency...
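For context, a sketch of the vanilla speculative-decoding acceptance rule that MJSD modifies; MJSD instead verifies the drafted prefix by joint likelihood and generates with beam decoding, which this sketch does not implement:

```python
import numpy as np

def speculative_verify(draft_tokens, q_probs, p_probs, rng):
    """Vanilla per-token acceptance: draft token t is kept with probability
    min(1, p(t)/q(t)), where q and p are the small and large models'
    next-token distributions. q_probs/p_probs are (num_draft, vocab)
    arrays of probabilities at each drafted position."""
    accepted = []
    for i, t in enumerate(draft_tokens):
        if rng.random() < min(1.0, p_probs[i, t] / q_probs[i, t]):
            accepted.append(t)
        else:
            # Resample from the residual distribution and stop.
            residual = np.maximum(p_probs[i] - q_probs[i], 0)
            residual /= residual.sum()
            accepted.append(rng.choice(len(residual), p=residual))
            break
    return accepted
```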
[ "['Zongyue Qin' 'Ziniu Hu' 'Zifan He' 'Neha Prakriya' 'Jason Cong'\n 'Yizhou Sun']" ]
null
null
2407.09726
null
null
http://arxiv.org/pdf/2407.09726v1
2024-07-13T00:16:26Z
2024-07-13T00:16:26Z
On Mitigating Code LLM Hallucinations with API Documentation
In this study, we address the issue of API hallucinations in various software engineering contexts. We introduce CloudAPIBench, a new benchmark designed to measure API hallucination occurrences. CloudAPIBench also provides annotations for frequencies of API occurrences in the public domain, allowing us to study API hallucinations at various frequency levels. Our findings reveal that Code LLMs struggle with low frequency APIs: e.g., GPT-4o achieves only 38.58% valid low frequency API invocations. We demonstrate that Documentation Augmented Generation (DAG) significantly improves performance for low frequency APIs (increase to 47.94% with DAG) but negatively impacts high frequency APIs when using sub-optimal retrievers (a 39.02% absolute drop). To mitigate this, we propose to intelligently trigger DAG where we check against an API index or leverage Code LLMs' confidence scores to retrieve only when needed. We demonstrate that our proposed methods enhance the balance between low and high frequency API performance, resulting in more reliable API invocations (8.20% absolute improvement on CloudAPIBench for GPT-4o).
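The selective triggering can be sketched as a simple gate; the threshold values and the dict-based index below are illustrative assumptions, not the paper's configuration:

```python
def should_retrieve_docs(api_name, api_freq_index, confidence,
                         freq_threshold=100, conf_threshold=0.9):
    """Trigger Documentation Augmented Generation selectively: retrieve
    only for APIs that are rare in the index or when the model's
    confidence in its invocation is low."""
    is_low_frequency = api_freq_index.get(api_name, 0) < freq_threshold
    is_uncertain = confidence < conf_threshold
    return is_low_frequency or is_uncertain
```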
[ "['Nihal Jain' 'Robert Kwiatkowski' 'Baishakhi Ray'\n 'Murali Krishna Ramanathan' 'Varun Kumar']" ]
null
null
2407.09728
null
null
http://arxiv.org/pdf/2407.09728v1
2024-07-13T00:26:14Z
2024-07-13T00:26:14Z
Neural Operator-Based Proxy for Reservoir Simulations Considering Varying Well Settings, Locations, and Permeability Fields
Simulating Darcy flows in porous media is fundamental to understand the future flow behavior of fluids in hydrocarbon and carbon storage reservoirs. Geological models of reservoirs are often associated with high uncertainty, leading to many numerical simulations for history matching and production optimization. Machine learning models trained with simulation data can provide a faster alternative to traditional simulators. In this paper we present a single Fourier Neural Operator (FNO) surrogate that outperforms traditional reservoir simulators by the ability to predict pressures and saturations on varying permeability fields, well locations, well controls, and number of wells. The maximum-mean relative error of 95% of pressure and saturation predictions is less than 5%. This is achieved by employing a simple yet very effective data augmentation technique that reduces the dataset size by 75% and reduces overfitting. Also, constructing the input tensor in a binary fashion enables predictions on unseen well locations, well controls, and number of wells. Such a model can accelerate history matching and reservoir characterization procedures by several orders of magnitude. The ability to predict on new well locations, well controls, and number of wells enables highly efficient reservoir management and optimization.
[ "['Daniel Badawi' 'Eduardo Gildin']" ]
null
null
2407.09732
null
null
http://arxiv.org/pdf/2407.09732v1
2024-07-13T00:35:21Z
2024-07-13T00:35:21Z
Speech Slytherin: Examining the Performance and Efficiency of Mamba for Speech Separation, Recognition, and Synthesis
It is too early to conclude that Mamba is a better alternative to transformers for speech before comparing Mamba with transformers in terms of both performance and efficiency in multiple speech-related tasks. To reach this conclusion, we propose and evaluate three models for three tasks: Mamba-TasNet for speech separation, ConMamba for speech recognition, and VALL-M for speech synthesis. We compare them with transformers of similar sizes in performance, memory, and speed. Our Mamba or Mamba-transformer hybrid models show comparable or higher performance than their transformer counterparts: Sepformer, Conformer, and VALL-E. They are more efficient than transformers in memory and speed for speech longer than a threshold duration, inversely related to the resolution of a speech token. Mamba for separation is the most efficient, and Mamba for recognition is the least. Further, we show that Mamba is not more efficient than transformer for speech shorter than the threshold duration and performs worse in models that require joint modeling of text and speech, such as cross or masked attention of two inputs. Therefore, we argue that the superiority of Mamba or transformer depends on particular problems and models. Code available at https://github.com/xi-j/Mamba-TasNet and https://github.com/xi-j/Mamba-ASR.
[ "['Xilin Jiang' 'Yinghao Aaron Li' 'Adrian Nicolas Florea' 'Cong Han'\n 'Nima Mesgarani']" ]
null
null
2407.09739
null
null
http://arxiv.org/pdf/2407.09739v1
2024-07-13T01:41:12Z
2024-07-13T01:41:12Z
Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes
We consider the problem of active learning for global sensitivity analysis of expensive black-box functions. Our aim is to efficiently learn the importance of different input variables, e.g., in vehicle safety experimentation, we study the impact of the thickness of various components on safety objectives. Since function evaluations are expensive, we use active learning to prioritize experimental resources where they yield the most value. We propose novel active learning acquisition functions that directly target key quantities of derivative-based global sensitivity measures (DGSMs) under Gaussian process surrogate models. We showcase the first application of active learning directly to DGSMs, and develop tractable uncertainty reduction and information gain acquisition functions for these measures. Through comprehensive evaluation on synthetic and real-world problems, our study demonstrates how these active learning acquisition strategies substantially enhance the sample efficiency of DGSM estimation, particularly with limited evaluation budgets. Our work paves the way for more efficient and accurate sensitivity analysis in various scientific and engineering applications.
[ "['Syrine Belakaria' 'Benjamin Letham' 'Janardhan Rao Doppa'\n 'Barbara Engelhardt' 'Stefano Ermon' 'Eytan Bakshy']" ]
null
null
2407.09747
null
null
http://arxiv.org/pdf/2407.09747v1
2024-07-13T02:46:37Z
2024-07-13T02:46:37Z
SocialRec: User Activity Based Post Weighted Dynamic Personalized Post Recommendation System in Social Media
User activities can influence their subsequent interactions with a post, generating interest in the user. Typically, users interact with posts from friends by commenting and using reaction emojis, reflecting their level of interest on social media such as Facebook, Twitter, and Reddit. Our objective is to analyze user history over time, including their posts and engagement on various topics. Additionally, we take into account the user's profile, seeking connections between their activities and social media platforms. By integrating user history, engagement, and persona, we assess recommendation scores based on relevant item sharing by Hit Rate (HR) and the quality of the ranking system by Normalized Discounted Cumulative Gain (NDCG), where NeuMF achieves the highest scores of 0.80 and 0.6, respectively. Our hybrid approach solves the cold-start problem for new users; for new items, the cold-start problem never occurs, as we consider the post category values. To improve the performance of the model during cold-start, we introduce collaborative filtering by looking for similar users and ranking them based on the highest similarity scores.
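For reference, the two reported metrics in their single-held-out-item form, a common evaluation setup (the paper's exact protocol may differ):

```python
import numpy as np

def hit_rate_at_k(ranked_items, relevant_item, k=10):
    # HR@k: 1 if the held-out item appears in the top-k recommendations.
    # ranked_items: list of item ids, best first.
    return float(relevant_item in ranked_items[:k])

def ndcg_at_k(ranked_items, relevant_item, k=10):
    # NDCG@k with a single relevant item: 1/log2(rank+2) if ranked in the
    # top k, else 0 (the ideal DCG is 1 in this setting).
    if relevant_item in ranked_items[:k]:
        rank = ranked_items[:k].index(relevant_item)
        return 1.0 / np.log2(rank + 2)
    return 0.0
```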
[ "['Ismail Hossain' 'Sai Puppala' 'Md Jahangir Alam' 'Sajedul Talukder']" ]
null
null
2407.09753
null
null
http://arxiv.org/pdf/2407.09753v1
2024-07-13T03:09:22Z
2024-07-13T03:09:22Z
Biased Backpressure Routing Using Link Features and Graph Neural Networks
To reduce the latency of Backpressure (BP) routing in wireless multi-hop networks, we propose to enhance the existing shortest path-biased BP (SP-BP) and sojourn time-based backlog metrics, since they introduce no additional time step-wise signaling overhead to the basic BP. Rather than relying on hop-distance, we introduce a new edge-weighted shortest path bias built on the scheduling duty cycle of wireless links, which can be predicted by a graph convolutional neural network based on the topology and traffic of wireless networks. Additionally, we tackle three long-standing challenges associated with SP-BP: optimal bias scaling, efficient bias maintenance, and integration of delay awareness. Our proposed solutions inherit the throughput optimality of the basic BP, as well as its practical advantages of low complexity and fully distributed implementation. Our approaches rely on common link features and introduce only a one-time constant overhead to previous SP-BP schemes, or a one-time overhead linear in the network size to the basic BP. Numerical experiments show that our solutions can effectively address the major drawbacks of slow startup, random walk, and the last packet problem in basic BP, improving the end-to-end delay of existing low-overhead BP algorithms under various settings of network traffic, interference, and mobility.
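A minimal sketch of the biased link-weight computation at the heart of SP-BP, taking the per-node bias vector as given (the paper predicts its edge-weighted bias with a graph convolutional neural network, which is not shown here):

```python
import numpy as np

def sp_bp_link_weight(Q, bias, i, j):
    """For link (i, j), pick the commodity with the largest biased backlog
    differential (Q[i, c] + bias[i]) - (Q[j, c] + bias[j]).
    Q: (num_nodes, num_commodities) queue backlogs; bias: per-node
    shortest-path bias. Returns the (non-negative) link weight and the
    maximizing commodity; the link transmits only if the weight is > 0."""
    diff = (Q[i] + bias[i]) - (Q[j] + bias[j])
    c = int(np.argmax(diff))
    return max(diff[c], 0.0), c
```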
[ "['Zhongyuan Zhao' 'Bojan Radojičić' 'Gunjan Verma' 'Ananthram Swami'\n 'Santiago Segarra']" ]
null
null
2407.09775
null
null
http://arxiv.org/pdf/2407.09775v1
2024-07-13T05:08:06Z
2024-07-13T05:08:06Z
Learning Weighted Finite Automata over the Max-Plus Semiring and its Termination
Active learning of finite automata has been vigorously pursued for the purposes of analysis and explanation of black-box systems. In this paper, we study an L*-style learning algorithm for weighted automata over the max-plus semiring. The max-plus setting exposes a "consistency" issue in the previously studied semiring-generic extension of L*: we show that it can fail to maintain consistency of tables, and can thus make equivalence queries on obviously wrong hypothesis automata. We present a theoretical fix by a mathematically clean notion of column-closedness. We also present a nontrivial and reasonably broad class of weighted languages over the max-plus semiring in which our algorithm terminates.
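For concreteness, a sketch of how words are weighted in this semiring, where max plays the role of addition and + the role of multiplication; the L*-style learning algorithm itself is not shown:

```python
import numpy as np

def maxplus_matmul(A, B):
    # (A (x) B)[i, j] = max_k (A[i, k] + B[k, j]) -- the max-plus
    # semiring product.
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def word_weight(alpha, transitions, beta, word):
    """Weight of a word under a max-plus weighted automaton with initial
    vector alpha, per-letter transition matrices, and final vector beta."""
    v = alpha
    for a in word:
        v = maxplus_matmul(v[None, :], transitions[a])[0]
    return np.max(v + beta)

# Two-state automaton over {"a"}: weights accumulate additively and
# nondeterminism resolves by max; -inf marks absent transitions.
alpha = np.array([0.0, -np.inf])
beta = np.array([-np.inf, 0.0])
transitions = {"a": np.array([[1.0, 2.0], [-np.inf, 0.5]])}
print(word_weight(alpha, transitions, beta, "aa"))  # 3.0
```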
[ "['Takamasa Okudono' 'Masaki Waga' 'Taro Sekiyama' 'Ichiro Hasuo']" ]
null
null
2407.09777
null
null
http://arxiv.org/pdf/2407.09777v1
2024-07-13T05:15:24Z
2024-07-13T05:15:24Z
Graph Transformers: A Survey
Graph transformers are a recent advancement in machine learning, offering a new class of neural network models for graph-structured data. The synergy between transformers and graph learning demonstrates strong performance and versatility across various graph-related tasks. This survey provides an in-depth review of recent progress and challenges in graph transformer research. We begin with foundational concepts of graphs and transformers. We then explore design perspectives of graph transformers, focusing on how they integrate graph inductive biases and graph attention mechanisms into the transformer architecture. Furthermore, we propose a taxonomy classifying graph transformers based on depth, scalability, and pre-training strategies, summarizing key principles for effective development of graph transformer models. Beyond technical analysis, we discuss the applications of graph transformer models for node-level, edge-level, and graph-level tasks, exploring their potential in other application scenarios as well. Finally, we identify remaining challenges in the field, such as scalability and efficiency, generalization and robustness, interpretability and explainability, dynamic and complex graphs, as well as data quality and diversity, charting future directions for graph transformer research.
[ "['Ahsan Shehzad' 'Feng Xia' 'Shagufta Abid' 'Ciyuan Peng' 'Shuo Yu'\n 'Dongyu Zhang' 'Karin Verspoor']" ]
null
null
2407.09788
null
null
http://arxiv.org/pdf/2407.09788v1
2024-07-13T07:04:28Z
2024-07-13T07:04:28Z
Explanation is All You Need in Distillation: Mitigating Bias and Shortcut Learning
Bias and spurious correlations in data can cause shortcut learning, undermining out-of-distribution (OOD) generalization in deep neural networks. Most methods require unbiased data during training (and/or hyper-parameter tuning) to counteract shortcut learning. Here, we propose the use of explanation distillation to hinder shortcut learning. The technique does not assume any access to unbiased data, and it allows an arbitrarily sized student network to learn the reasons behind the decisions of an unbiased teacher, such as a vision-language model or a network processing debiased images. We found that it is possible to train a neural network with explanation (e.g., by Layer-wise Relevance Propagation, LRP) distillation only, and that the technique leads to high resistance to shortcut learning, surpassing group-invariant learning, explanation background minimization, and alternative distillation techniques. In the COLOURED MNIST dataset, LRP distillation achieved 98.2% OOD accuracy, while deep feature distillation and IRM achieved 92.1% and 60.2%, respectively. In COCO-on-Places, the undesirable generalization gap between in-distribution and OOD accuracy is only 4.4% for LRP distillation, while the other two techniques present gaps of 15.1% and 52.1%, respectively.
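A hedged sketch of an explanation-distillation loss, with gradient-times-input standing in for LRP purely for brevity (the paper distills LRP relevance maps from the unbiased teacher):

```python
import torch
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    # Gradient-times-input as a cheap stand-in for an LRP relevance map.
    x = x.clone().requires_grad_(True)
    out = model(x)
    g = torch.autograd.grad(out.sum(), x, create_graph=create_graph)[0]
    return g * x

def explanation_distillation_loss(student, teacher, x):
    # The student's explanation is kept in the autograd graph so the loss
    # backpropagates to student parameters; the teacher's is a fixed target.
    s_rel = saliency(student, x, create_graph=True)
    t_rel = saliency(teacher, x).detach()
    return F.mse_loss(s_rel, t_rel)
```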
[ "['Pedro R. A. S. Bassi' 'Andrea Cavalli' 'Sergio Decherchi']" ]
null
null
2407.09789
null
null
http://arxiv.org/pdf/2407.09789v1
2024-07-13T07:07:35Z
2024-07-13T07:07:35Z
Convex space learning for tabular synthetic data generation
Generating synthetic samples from the convex space of the minority class is a popular oversampling approach for imbalanced classification problems. Recently, deep-learning approaches have been successfully applied to modeling the convex space of minority samples. Beyond oversampling, learning the convex space of neighborhoods in training data has not been used to generate entire tabular datasets. In this paper, we introduce a deep learning architecture (NextConvGeN) with a generator and discriminator component that can generate synthetic samples by learning to model the convex space of tabular data. The generator takes data neighborhoods as input and creates synthetic samples within the convex space of that neighborhood. Thereafter, the discriminator tries to classify these synthetic samples against a randomly sampled batch of data from the rest of the data space. We compared our proposed model with five state-of-the-art tabular generative models across ten publicly available datasets from the biomedical domain. Our analysis reveals that synthetic samples generated by NextConvGeN can better preserve classification and clustering performance across real and synthetic data than other synthetic data generation models. Synthetic data generation by deep learning of the convex space produces high scores for popular utility measures. We further compared how diverse synthetic data generation strategies perform in the privacy-utility spectrum and produced critical arguments on the necessity of high utility models. Our research on deep learning of the convex space of tabular data opens up opportunities in clinical research, machine learning model development, decision support systems, and clinical data sharing.
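The convex space being modeled can be illustrated with random Dirichlet convex-combination weights over a data neighborhood; NextConvGeN learns such weights with its generator rather than sampling them, so this is only a sketch of the sample space:

```python
import numpy as np

def convex_synthetic_samples(neighborhood, n_samples, rng):
    """Generate synthetic points inside the convex hull of a neighborhood
    by drawing Dirichlet-distributed convex-combination weights (each row
    is non-negative and sums to 1)."""
    k = len(neighborhood)
    weights = rng.dirichlet(np.ones(k), size=n_samples)
    return weights @ neighborhood

rng = np.random.default_rng(0)
neighborhood = rng.normal(size=(5, 3))   # 5 neighbors, 3 features
synthetic = convex_synthetic_samples(neighborhood, 10, rng)
```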
[ "['Manjunath Mahendra' 'Chaithra Umesh' 'Saptarshi Bej' 'Kristian Schultz'\n 'Olaf Wolkenhauer']" ]
null
null
2407.09790
null
null
http://arxiv.org/pdf/2407.09790v1
2024-07-13T07:13:32Z
2024-07-13T07:13:32Z
Team up GBDTs and DNNs: Advancing Efficient and Effective Tabular Prediction with Tree-hybrid MLPs
Tabular datasets play a crucial role in various applications. Thus, developing efficient, effective, and widely compatible prediction algorithms for tabular data is important. Currently, two prominent model types, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks (DNNs), have demonstrated performance advantages on distinct tabular prediction tasks. However, selecting an effective model for a specific tabular dataset is challenging, often demanding time-consuming hyperparameter tuning. To address this model selection dilemma, this paper proposes a new framework that amalgamates the advantages of both GBDTs and DNNs, resulting in a DNN algorithm that is as efficient as GBDTs and is competitively effective regardless of dataset preferences for GBDTs or DNNs. Our idea is rooted in an observation that deep learning (DL) offers a larger parameter space that can represent a well-performing GBDT model, yet the current back-propagation optimizer struggles to efficiently discover such optimal functionality. On the other hand, during GBDT development, hard tree pruning, entropy-driven feature gate, and model ensemble have proved to be more adaptable to tabular data. By combining these key components, we present a Tree-hybrid simple MLP (T-MLP). In our framework, a tensorized, rapidly trained GBDT feature gate, a DNN architecture pruning approach, as well as a vanilla back-propagation optimizer collaboratively train a randomly initialized MLP model. Comprehensive experiments show that T-MLP is competitive with extensively tuned DNNs and GBDTs in their dominating tabular benchmarks (88 datasets) respectively, all achieved with compact model storage and significantly reduced training duration.
[ "['Jiahuan Yan' 'Jintai Chen' 'Qianxing Wang' 'Danny Z. Chen' 'Jian Wu']" ]
null
null
2407.09801
null
null
http://arxiv.org/pdf/2407.09801v1
2024-07-13T08:20:37Z
2024-07-13T08:20:37Z
IoT-LM: Large Multisensory Language Models for the Internet of Things
The Internet of Things (IoT) network integrating billions of smart physical devices embedded with sensors, software, and communication technologies is a critical and rapidly expanding component of our modern world. The IoT ecosystem provides a rich source of real-world modalities such as motion, thermal, geolocation, imaging, depth, sensors, and audio to recognize the states of humans and physical objects. Machine learning presents a rich opportunity to automatically process IoT data at scale, enabling efficient inference for understanding human wellbeing, controlling physical devices, and interconnecting smart cities. To realize this potential, we introduce IoT-LM, an open-source large multisensory language model tailored for the IoT ecosystem. IoT-LM is enabled by two technical contributions: the first is MultiIoT, the most expansive unified IoT dataset to date, encompassing over 1.15 million samples from 12 modalities and 8 tasks prepared for multisensory pre-training and instruction-tuning. The second is a new multisensory multitask adapter layer to condition pre-trained large language models on multisensory IoT data. Not only does IoT-LM yield substantial improvements on 8 supervised IoT classification tasks, but it also demonstrates new interactive question-answering, reasoning, and dialog capabilities conditioned on IoT sensors. We release IoT-LM's data sources and new multisensory language modeling framework.
[ "['Shentong Mo' 'Russ Salakhutdinov' 'Louis-Philippe Morency'\n 'Paul Pu Liang']" ]
null
null
2407.09845
null
null
http://arxiv.org/pdf/2407.09845v1
2024-07-13T10:45:21Z
2024-07-13T10:45:21Z
Towards understanding epoch-wise double descent in two-layer linear neural networks
Epoch-wise double descent is the phenomenon where generalisation performance improves beyond the point of overfitting, resulting in a generalisation curve exhibiting two descents over the course of learning. Understanding the mechanisms driving this behaviour is crucial not only for understanding the generalisation behaviour of machine learning models in general, but also for employing conventional selection methods, such as early stopping to mitigate overfitting. While we ultimately want to draw conclusions about more complex models, such as deep neural networks, a majority of theoretical conclusions regarding the underlying cause of epoch-wise double descent are based on simple models, such as standard linear regression. To start bridging this gap, we study epoch-wise double descent in two-layer linear neural networks. First, we derive a gradient flow for the linear two-layer model that bridges the learning dynamics of the standard linear regression model and the linear two-layer diagonal network with quadratic weights. Second, we identify additional factors of epoch-wise double descent emerging with the extra model layer, by deriving necessary conditions for the generalisation error to follow a double descent pattern. While epoch-wise double descent in linear regression has been attributed to differences in input variance, in the two-layer model the singular values of the input-output covariance matrix also play an important role. This opens up further questions regarding unidentified factors of epoch-wise double descent for truly deep models.
[ "['Amanda Olmin' 'Fredrik Lindsten']" ]
null
null
2407.09849
null
null
http://arxiv.org/abs/2407.09849v1
2024-07-13T11:11:41Z
2024-07-13T11:11:41Z
Text-Based Detection of On-Hold Scripts in Contact Center Calls
Average hold time is a concern for call centers because it affects customer satisfaction. Contact centers should instruct their agents to use special on-hold scripts to maintain positive interactions with clients. This study presents a natural language processing model that detects on-hold phrases in customer service calls transcribed by automatic speech recognition technology. The task of finding hold scripts in dialogue was formulated as a multiclass text classification problem with three mutually exclusive classes: scripts for putting a client on hold, scripts for returning to a client, and phrases irrelevant to on-hold scripts. We collected an in-house dataset of calls and labeled each dialogue turn in each call. We fine-tuned RuBERT on the dataset by exploring various hyperparameter sets and achieved high model performance. The developed model can help agent monitoring by providing a way to check whether an agent follows predefined on-hold scripts.
[ "['Dmitrii Galimzianov' 'Viacheslav Vyshegorodtsev']" ]
null
null
2407.09852
null
null
http://arxiv.org/pdf/2407.09852v1
2024-07-13T11:22:12Z
2024-07-13T11:22:12Z
Free-form Grid Structure Form Finding based on Machine Learning and Multi-objective Optimisation
Free-form structural forms are widely used to design spatial structures for their irregular spatial morphology. Current free-form form-finding methods cannot adequately meet the material properties, structural requirements or construction conditions, which brings the deviation between the initial 3D geometric design model and the constructed free-form structure. Thus, the main focus of this paper is to improve the rationality of free-form morphology considering multiple objectives in line with the characteristics and constraints of material. In this paper, glued laminated timber is selected as a case. Firstly, machine learning is adopted based on the predictive capability. By selecting a free-form timber grid structure and following the principles of NURBS, the free-form structure is simplified into free-form curves. The transformer is selected to train and predict the curvatures of the curves considering the material characteristics. After predicting the curvatures, the curves are transformed into vectors consisting of control points, weights, and knot vectors. To ensure the constructability and robustness of the structure, minimising the mass of the structure, stress and strain energy are the optimisation objectives. Two parameters (weight and the z-coordinate of the control points) of the free-from morphology are extracted as the variables of the free-form morphology to conduct the optimisation. The evaluation algorithm was selected as the optimal tool due to its capability to optimise multiple parameters. While optimising the two variables, the mechanical performance evaluation indexes such as the maximum displacement in the z-direction are demonstrated in the 60th step. The optimisation results for structure mass, stress and strain energy after 60 steps show the tendency of oscillation convergence, which indicates the efficiency of the proposal multi-objective optimisation.
[ "['Yiping Meng' 'Yiming Sun']" ]
null
null
2407.09877
null
null
http://arxiv.org/pdf/2407.09877v1
2024-07-13T12:54:57Z
2024-07-13T12:54:57Z
Model-free Distortion Canceling and Control of Quantum Devices
Quantum devices need precise control to achieve their full capability. In this work, we address the problem of controlling closed quantum systems, tackling two main issues. First, in practice the control signals are usually subject to unknown classical distortions that could arise from the device fabrication, material properties and/or instruments generating those signals. Second, in most cases modeling the system is very difficult or not even viable due to uncertainties in the relations between some variables and inaccessibility to some measurements inside the system. In this paper, we introduce a general model-free control approach based on deep reinforcement learning (DRL), that can work for any closed quantum system. We train a deep neural network (NN), using the REINFORCE policy gradient algorithm to control the state probability distribution of a closed quantum system as it evolves, and drive it to different target distributions. We present a novel controller architecture that comprises multiple NNs. This enables accommodating as many different target state distributions as desired, without increasing the complexity of the NN or its training process. The used DRL algorithm works whether the control problem can be modeled as a Markov decision process (MDP) or a partially observed MDP. Our method is valid whether the control signals are discrete- or continuous-valued. We verified our method through numerical simulations based on a photonic waveguide array chip. We trained a controller to generate sequences of different target output distributions of the chip with fidelity higher than 99%, where the controller showed superior performance in canceling the classical signal distortions.
[ "['Ahmed F. Fouad' 'Akram Youssry' 'Ahmed El-Rafei' 'Sherif Hammad']" ]
null
null
2407.09887
null
null
http://arxiv.org/pdf/2407.09887v1
2024-07-13T13:27:57Z
2024-07-13T13:27:57Z
Benchmarking LLMs for Optimization Modeling and Enhancing Reasoning via Reverse Socratic Synthesis
Large language models (LLMs) have exhibited their problem-solving ability in mathematical reasoning. Solving realistic optimization (OPT) problems in industrial application scenarios requires advanced and applied math ability. However, current OPT benchmarks, which merely cover linear programming, are far from the complexity of realistic situations. In this work, we propose E-OPT, a benchmark for end-to-end optimization problem-solving with human-readable inputs and outputs. E-OPT contains rich optimization problems, including linear/nonlinear programming with/without table data, which can comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to correctly understand the problem in E-OPT and call a code solver to get precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-2-7b and Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a novel data synthesis method named ReSocratic. Unlike general data synthesis methods that proceed from questions to answers, ReSocratic first incrementally synthesizes optimization scenarios with mathematical formulations step by step and then back-translates the generated scenarios into questions. In this way, we construct the ReSocratic-29k dataset from a small seed sample pool with the powerful open-source large model DeepSeek-V2. To demonstrate the effectiveness of ReSocratic, we conduct supervised fine-tuning with ReSocratic-29k on multiple open-source models. The results show that Llama3-8b is significantly improved from 13.6% to 51.7% on E-OPT, while DeepSeek-V2 reaches 61.0%, approaching the 65.5% of GPT-4.
[ "['Zhicheng Yang' 'Yinya Huang' 'Wei Shi' 'Liang Feng' 'Linqi Song'\n 'Yiwei Wang' 'Xiaodan Liang' 'Jing Tang']" ]
null
null
2407.09904
null
null
http://arxiv.org/pdf/2407.09904v1
2024-07-13T14:42:22Z
2024-07-13T14:42:22Z
Learning a Mini-batch Graph Transformer via Two-stage Interaction Augmentation
Mini-batch Graph Transformer (MGT), as an emerging graph learning model, has demonstrated significant advantages in semi-supervised node prediction tasks, with improved computational efficiency and enhanced model robustness. However, existing methods for processing local information either rely on sampling or simple aggregation, which respectively result in the loss and squashing of critical neighbor information. Moreover, the limited number of nodes in each mini-batch restricts the model's capacity to capture the global characteristics of the graph. In this paper, we propose LGMformer, a novel MGT model that employs a two-stage augmented interaction strategy, transitioning from local to global perspectives, to address the aforementioned bottlenecks. The local interaction augmentation (LIA) presents a neighbor-target interaction Transformer (NTIformer) to acquire an insightful understanding of the co-interaction patterns between neighbors and the target node, resulting in a locally effective token list that serves as input for the MGT. In contrast, global interaction augmentation (GIA) adopts a cross-attention mechanism to incorporate entire graph prototypes into the target node representation, thereby compensating for the global graph information to ensure a more comprehensive perception. In this way, LGMformer enhances node representations under the MGT paradigm. Experimental results for node classification on ten benchmark datasets demonstrate the effectiveness of the proposed method. Our code is available at https://github.com/l-wd/LGMformer.
[ "['Wenda Li' 'Kaixuan Chen' 'Shunyu Liu' 'Tongya Zheng' 'Wenjie Huang'\n 'Mingli Song']" ]
null
null
2407.09905
null
null
http://arxiv.org/pdf/2407.09905v1
2024-07-13T14:45:08Z
2024-07-13T14:45:08Z
Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods
In classic Reinforcement Learning (RL), the agent maximizes an additive objective of the visited states, e.g., a value function. Unfortunately, objectives of this type cannot model many real-world applications such as experiment design, exploration, imitation learning, and risk-averse RL, to name a few. This is due to the fact that additive objectives disregard interactions between states that are crucial for certain tasks. To tackle this problem, we introduce Global RL (GRL), where rewards are globally defined over trajectories instead of locally over states. Global rewards can capture negative interactions among states (e.g., in exploration) via submodularity, positive interactions (e.g., synergetic effects) via supermodularity, and mixed interactions via combinations of the two. By exploiting ideas from submodular optimization, we propose a novel algorithmic scheme that converts any GRL problem to a sequence of classic RL problems and solves it efficiently with curvature-dependent approximation guarantees. We also provide hardness of approximation results and empirically demonstrate the effectiveness of our method on several GRL instances.
[ "['Riccardo De Santi' 'Manish Prajapat' 'Andreas Krause']" ]
null
null
2407.09911
null
null
http://arxiv.org/pdf/2407.09911v1
2024-07-13T15:10:58Z
2024-07-13T15:10:58Z
SensEmo: Enabling Affective Learning through Real-time Emotion Recognition with Smartwatches
Recent research has demonstrated the capability of physiological signals to infer users' emotional and attentional responses. This presents an opportunity to leverage the physiological sensors widely available in smartwatches to detect real-time emotional cues in users, such as stress and excitement. In this paper, we introduce SensEmo, a smartwatch-based system designed for affective learning. SensEmo utilizes multiple physiological sensor data, including heart rate and galvanic skin response, to recognize a student's motivation and concentration levels during class. This recognition is facilitated by a personalized emotion recognition model that predicts emotional states based on degrees of valence and arousal. With real-time emotion and attention feedback from students, we design a Markov decision process-based algorithm to enhance student learning effectiveness and experience by offering suggestions to the teacher regarding teaching content and pacing. We evaluate SensEmo with 22 participants in real-world classroom environments. Evaluation results show that SensEmo recognizes student emotion with an average accuracy of 88.9%. More importantly, SensEmo helps students achieve better online learning outcomes, e.g., an average of 40.0% higher quiz grades, compared to traditional learning without student emotional feedback.
[ "['Kushan Choksi' 'Hongkai Chen' 'Karan Joshi' 'Sukrutha Jade'\n 'Shahriar Nirjon' 'Shan Lin']" ]
null
null
2407.09926
null
null
http://arxiv.org/pdf/2407.09926v1
2024-07-13T15:41:14Z
2024-07-13T15:41:14Z
Metric Learning for Clifford Group Equivariant Neural Networks
Clifford Group Equivariant Neural Networks (CGENNs) leverage Clifford algebras and multivectors as an alternative approach to incorporating group equivariance to ensure symmetry constraints in neural representations. In principle, this formulation generalizes to orthogonal groups and preserves equivariance regardless of the metric signature. However, previous works have restricted internal network representations to Euclidean or Minkowski (pseudo-)metrics, handpicked depending on the problem at hand. In this work, we propose an alternative method that enables the metric to be learned in a data-driven fashion, allowing the CGENN network to learn more flexible representations. Specifically, we populate metric matrices fully, ensuring they are symmetric by construction, and leverage eigenvalue decomposition to integrate this additional learnable component into the original CGENN formulation in a principled manner. Additionally, we motivate our method using insights from category theory, which enables us to explain Clifford algebras as a categorical construction and guarantee the mathematical soundness of our approach. We validate our method in various tasks and showcase the advantages of learning more flexible latent metric representations. The code and data are available at https://github.com/rick-ali/Metric-Learning-for-CGENNs
[ "['Riccardo Ali' 'Paulina Kulytė' 'Haitz Sáez de Ocáriz Borde' 'Pietro Liò']" ]
null
null
2407.09930
null
null
http://arxiv.org/pdf/2407.09930v1
2024-07-13T15:53:37Z
2024-07-13T15:53:37Z
Evaluating the Impact of Different Quantum Kernels on the Classification Performance of Support Vector Machine Algorithm: A Medical Dataset Application
The support vector machine algorithm with a quantum kernel estimator (QSVM-Kernel), as a leading example of a quantum machine learning technique, has undergone significant advancements. Nevertheless, its integration with classical data presents unique challenges. While quantum computers primarily interact with data in quantum states, embedding classical data into quantum states using feature mapping techniques is essential for leveraging quantum algorithms. Despite the recognized importance of feature mapping, its specific impact on data classification outcomes remains largely unexplored. This study addresses this gap by comprehensively assessing the effects of various feature mapping methods on classification results, taking medical data analysis as a case study. In this study, the QSVM-Kernel method was applied to classification problems in two different and publicly available medical datasets, namely, the Wisconsin Breast Cancer (original) and The Cancer Genome Atlas (TCGA) Glioma datasets. In the QSVM-Kernel algorithm, quantum kernel matrices obtained from 9 different quantum feature maps were used. Thus, the effects of these quantum feature maps on the classification results of the QSVM-Kernel algorithm were examined in terms of both classifier performance and total execution time. As a result, in the Wisconsin Breast Cancer (original) and TCGA Glioma datasets, the best results were achieved, in terms of both classification performance and total execution time, when the Rx and Ry rotational gates, respectively, were used as feature maps in the QSVM-Kernel algorithm. The contributions of this study are that (1) it highlights the significant impact of feature mapping techniques on medical data classification outcomes using the QSVM-Kernel algorithm, and (2) it guides future research toward improved QSVM classification performance.
[ "['Emine Akpinar' 'Sardar M. N. Islam' 'Murat Oduncuoglu']" ]
null
null
2407.09941
null
null
http://arxiv.org/pdf/2407.09941v1
2024-07-13T16:34:18Z
2024-07-13T16:34:18Z
Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers
A wide array of sequence models are built on a framework modeled after Transformers, comprising alternating sequence mixer and channel mixer layers. This paper studies a unifying matrix mixer view of sequence mixers that can be conceptualized as a linear map on the input sequence. This framework encompasses a broad range of well-known sequence models, including the self-attention of Transformers as well as recent strong alternatives such as structured state space models (SSMs), and allows understanding downstream characteristics such as efficiency and expressivity through properties of their structured matrix class. We identify a key axis of matrix parameterizations termed sequence alignment, which increases the flexibility and performance of matrix mixers, providing insights into the strong performance of Transformers and recent SSMs such as Mamba. Furthermore, the matrix mixer framework offers a systematic approach to developing sequence mixers with desired properties, allowing us to develop several new sub-quadratic sequence models. In particular, we propose a natural bidirectional extension of the Mamba model (Hydra), parameterized as a quasiseparable matrix mixer, which demonstrates superior performance over other sequence models including Transformers on non-causal tasks. As a drop-in replacement for attention layers, Hydra outperforms BERT by 0.8 points on the GLUE benchmark and ViT by 2% Top-1 accuracy on ImageNet.
[ "['Sukjun Hwang' 'Aakash Lahoti' 'Tri Dao' 'Albert Gu']" ]
null
null
2407.09950
null
null
http://arxiv.org/pdf/2407.09950v1
2024-07-13T17:15:23Z
2024-07-13T17:15:23Z
PSO Fuzzy XGBoost Classifier Boosted with Neural Gas Features on EEG Signals in Emotion Recognition
Emotion recognition is the technology-driven process of identifying and categorizing human emotions from various data sources, such as facial expressions, voice patterns, body motion, and physiological signals such as EEG. These physiological indicators, though rich in data, present challenges due to their complexity and variability, necessitating sophisticated feature selection and extraction methods. The Neural-Gas Network (NGN), an unsupervised learning algorithm, effectively adapts to input spaces without predefined grid structures, improving feature extraction from physiological data. Furthermore, the incorporation of fuzzy logic enables the handling of fuzzy data by introducing reasoning that mimics human decision-making. The combination of Particle Swarm Optimization (PSO) with XGBoost aids in optimizing model performance through efficient hyperparameter tuning and decision process optimization. This study explores the integration of NGN, XGBoost, PSO, and fuzzy logic to enhance emotion recognition using physiological signals. Our research addresses three critical questions concerning the improvement of XGBoost with PSO and fuzzy logic, NGN's effectiveness in feature selection, and the performance comparison of the PSO-fuzzy XGBoost classifier with standard benchmarks. The results indicate that our methodologies enhance the accuracy of emotion recognition systems and outperform other feature selection techniques across the majority of classifiers, offering significant implications for both theoretical advancement and practical application in emotion recognition technology.
[ "['Seyed Muhammad Hossein Mousavi']" ]
null
null
2407.09972
null
null
http://arxiv.org/pdf/2407.09972v1
2024-07-13T18:31:35Z
2024-07-13T18:31:35Z
Harvesting Private Medical Images in Federated Learning Systems with Crafted Models
Federated learning (FL) allows a set of clients to collaboratively train a machine-learning model without exposing local training samples. In this context, it is considered to be privacy-preserving and hence has been adopted by medical centers to train machine-learning models over private data. However, in this paper, we propose a novel attack named MediLeak that enables a malicious parameter server to recover high-fidelity patient images from the model updates uploaded by the clients. MediLeak requires the server to generate an adversarial model by adding a crafted module in front of the original model architecture. It is published to the clients in the regular FL training process and each client conducts local training on it to generate corresponding model updates. Then, based on the FL protocol, the model updates are sent back to the server and our proposed analytical method recovers private data from the parameter updates of the crafted module. We provide a comprehensive analysis for MediLeak and show that it can successfully break the state-of-the-art cryptographic secure aggregation protocols, designed to protect the FL systems from privacy inference attacks. We implement MediLeak on the MedMNIST and COVIDx CXR-4 datasets. The results show that MediLeak can nearly perfectly recover private images with high recovery rates and quantitative scores. We further perform downstream tasks such as disease classification with the recovered data, where our results show no significant performance degradation compared to using the original training samples.
[ "['Shanghao Shi' 'Md Shahedul Haque' 'Abhijeet Parida'\n 'Marius George Linguraru' 'Y. Thomas Hou' 'Syed Muhammad Anwar'\n 'Wenjing Lou']" ]
null
null
2407.09985
null
null
http://arxiv.org/pdf/2407.09985v1
2024-07-13T19:21:44Z
2024-07-13T19:21:44Z
A Training Data Recipe to Accelerate A* Search with Language Models
Recent works in AI planning have proposed to combine LLMs with iterative tree-search algorithms like A* and MCTS, where LLMs are typically used to calculate the heuristic, guiding the planner towards the goal. However, combining these techniques is not trivial: LM-based heuristics are quite weak, incurring a high computational cost without a significant performance improvement. Existing methods to learn these heuristics do not consider the requirements of the planner and typically need a lot of compute. Thus, in this work, we propose a distribution to downsample training data by identifying relevant data points to learn a performant heuristic, while constraining computational costs. To arrive at this model, we disentangle the requirements of the planner, in our case A* search, from those of the language model to generalise on this task. Surprisingly, we find an overlap between their requirements: A* requires more accurate predictions on nodes near the goal, and LMs need the same set of nodes for effective generalisation. With these insights, we can quantify the contribution of each node towards accelerating A* search, and subsequently derive a training distribution for learning LM-based heuristics. Following a recent work, we conduct our experiments on two classical planning domains, maze navigation and Sokoban, with two test splits per domain and two conventional loss functions. We reduce the number of iterations required to find the solutions by up to 13x, with a wall-clock speed-up of up to 5x.
[ "['Devaansh Gupta' 'Boyang Li']" ]
null
null
2407.09986
null
null
http://arxiv.org/pdf/2407.09986v1
2024-07-13T19:23:11Z
2024-07-13T19:23:11Z
Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity
Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation. In our study, we explore the impact of various factors on successful learning strategies for this task. Specifically, we investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation. Using model-free Reinforcement Learning, we compare different curricula and two haptic information modalities (no-tactile vs. 3D-force sensing) for lifting and rotating a ball against gravity with a three-fingered simulated robotic hand with no visual input. Our best results were obtained with a novel curriculum-based learning rate scheduler, which resets the linearly-decaying learning rate whenever the curriculum changes the reward, accelerating convergence to higher rewards. Our findings demonstrate that the choice of curriculum greatly biases the acquisition of different features of dexterous manipulation. Surprisingly, successful learning can be achieved even in the absence of tactile feedback, challenging conventional assumptions about the necessity of haptic information for dexterous manipulation tasks. We demonstrate the generalizability of our results to balls of different weights and sizes, underscoring the robustness of our learning approach. This work, therefore, emphasizes the importance of the choice of curriculum and challenges long-held notions about the need for tactile information to autonomously learn in-hand dexterous manipulation.
[ "['Pegah Ojaghi' 'Romina Mir' 'Ali Marjaninejad' 'Andrew Erwin'\n 'Michael Wehner' 'Francisco J Valero-Cueva']" ]
null
null
2407.09994
null
null
http://arxiv.org/pdf/2407.09994v1
2024-07-13T20:17:41Z
2024-07-13T20:17:41Z
Distributed computing for physics-based data-driven reduced modeling at scale: Application to a rotating detonation rocket engine
High-performance computing (HPC) has revolutionized our ability to perform detailed simulations of complex real-world processes. A prominent contemporary example is from aerospace propulsion, where HPC is used for rotating detonation rocket engine (RDRE) simulations in support of the design of next-generation rocket engines; however, these simulations take millions of core hours even on powerful supercomputers, which makes them impractical for engineering tasks like design exploration and risk assessment. Reduced-order models (ROMs) address this limitation by constructing computationally cheap yet sufficiently accurate approximations that serve as surrogates for the high-fidelity model. This paper contributes a new distributed algorithm that achieves fast and scalable construction of predictive physics-based ROMs trained from sparse datasets of extremely large state dimension. The algorithm learns structured physics-based ROMs that approximate the dynamical systems underlying those datasets. This enables model reduction for problems at a scale and complexity that exceeds the capabilities of existing approaches. We demonstrate our algorithm's scalability using up to $2,048$ cores on the Frontera supercomputer at the Texas Advanced Computing Center. We focus on a real-world three-dimensional RDRE for which one millisecond of simulated physical time requires one million core hours on a supercomputer. Using a training dataset of $2,536$ snapshots each of state dimension $76$ million, our distributed algorithm enables the construction of a predictive data-driven reduced model in just $13$ seconds on $2,048$ cores on Frontera.
[ "['Ionut-Gabriel Farcas' 'Rayomand P. Gundevia' 'Ramakanth Munipalli'\n 'Karen E. Willcox']" ]
null
null
2407.10000
null
null
http://arxiv.org/pdf/2407.10000v1
2024-07-13T20:56:34Z
2024-07-13T20:56:34Z
On Characterizing and Mitigating Imbalances in Multi-Instance Partial Label Learning
Multi-Instance Partial Label Learning (MI-PLL) is a weakly-supervised learning setting encompassing partial label learning, latent structural learning, and neurosymbolic learning. Differently from supervised learning, in MI-PLL, the inputs to the classifiers at training-time are tuples of instances $\textbf{x}$, while the supervision signal is generated by a function $\sigma$ over the gold labels of $\textbf{x}$. The gold labels are hidden during training. In this paper, we focus on characterizing and mitigating learning imbalances, i.e., differences in the errors occurring when classifying instances of different classes (aka class-specific risks), under MI-PLL. The phenomenon of learning imbalances has been extensively studied in the context of long-tail learning; however, the nature of MI-PLL introduces new challenges. Our contributions are as follows. From a theoretical perspective, we characterize the learning imbalances by deriving class-specific risk bounds that depend upon the function $\sigma$. Our theory reveals that learning imbalances exist in MI-PLL even when the hidden labels are uniformly distributed. On the practical side, we introduce a technique for estimating the marginal of the hidden labels using only MI-PLL data. Then, we introduce algorithms that mitigate imbalances at training- and testing-time, by treating the marginal of the hidden labels as a constraint. The first algorithm relies on a novel linear programming formulation of MI-PLL for pseudo-labeling. The second one adjusts a model's scores based on robust optimal transport. We demonstrate the effectiveness of our techniques using strong neurosymbolic and long-tail learning baselines, discussing also open challenges.
[ "['Kaifu Wang' 'Efthymia Tsamoura' 'Dan Roth']" ]
null
null
2407.10003
null
null
http://arxiv.org/pdf/2407.10003v1
2024-07-13T21:00:41Z
2024-07-13T21:00:41Z
A Dynamic Algorithm for Weighted Submodular Cover Problem
We initiate the study of the submodular cover problem in the dynamic setting, where the elements of the ground set are inserted and deleted. In the classical submodular cover problem, we are given a monotone submodular function $f : 2^{V} \to \mathbb{R}^{\ge 0}$ and the goal is to obtain a set $S \subseteq V$ that minimizes the cost subject to the constraint $f(S) = f(V)$. This is a classical problem in computer science and generalizes the Set Cover problem, 2-Set Cover, and the dominating set problem, among others. We consider this problem in a dynamic setting, where there are updates to our set $V$, in the form of insertions and deletions of elements from a ground set $\mathcal{V}$, and the goal is to maintain an approximately optimal solution with low query complexity per update. For this problem, we propose a randomized algorithm that, in expectation, obtains a $(1-O(\epsilon), O(\epsilon^{-1}))$-bicriteria approximation using polylogarithmic query complexity per update.
[ "['Kiarash Banihashem' 'Samira Goudarzi' 'MohammadTaghi Hajiaghayi'\n 'Peyman Jabbarzade' 'Morteza Monemizadeh']" ]
null
null
2407.10005
null
null
http://arxiv.org/pdf/2407.10005v1
2024-07-13T21:13:55Z
2024-07-13T21:13:55Z
Fine-grained Analysis of In-context Linear Estimation: Data, Architecture, and Beyond
Recent research has shown that Transformers with linear attention are capable of in-context learning (ICL) by implementing a linear estimator through gradient descent steps. However, the existing results on the optimization landscape apply under stylized settings where task and feature vectors are assumed to be IID and the attention weights are fully parameterized. In this work, we develop a stronger characterization of the optimization and generalization landscape of ICL through contributions on architectures, low-rank parameterization, and correlated designs: (1) We study the landscape of 1-layer linear attention and 1-layer H3, a state-space model. Under a suitable correlated design assumption, we prove that both implement 1-step preconditioned gradient descent. We show that thanks to its native convolution filters, H3 also has the advantage of implementing sample weighting and outperforming linear attention in suitable settings. (2) By studying correlated designs, we provide new risk bounds for retrieval augmented generation (RAG) and task-feature alignment which reveal how ICL sample complexity benefits from distributional alignment. (3) We derive the optimal risk for low-rank parameterized attention weights in terms of covariance spectrum. Through this, we also shed light on how LoRA can adapt to a new distribution by capturing the shift between task covariances. Experimental results corroborate our theoretical findings. Overall, this work explores the optimization and risk landscape of ICL in practically meaningful settings and contributes to a more thorough understanding of its mechanics.
[ "['Yingcong Li' 'Ankit Singh Rawat' 'Samet Oymak']" ]
null
null
2407.10011
null
null
http://arxiv.org/pdf/2407.10011v1
2024-07-13T21:35:13Z
2024-07-13T21:35:13Z
Sim-to-Real Domain Adaptation for Deformation Classification
Deformation detection is vital for enabling accurate assessment and prediction of structural changes in materials, ensuring timely and effective interventions to maintain safety and integrity. Automating deformation detection through computer vision is crucial for efficient monitoring, but it faces significant challenges in creating a comprehensive dataset of both deformed and non-deformed objects, which can be difficult to obtain in many scenarios. In this paper, we introduce a novel framework for generating controlled synthetic data that simulates deformed objects. This approach allows for the realistic modeling of object deformations under various conditions. Our framework integrates an intelligent adapter network that facilitates sim-to-real domain adaptation, enhancing classification results without requiring real data from deformed objects. We conduct experiments on domain adaptation and classification tasks and demonstrate that our framework improves sim-to-real classification results compared to the simulation baseline.
[ "['Joel Sol' 'Jamil Fayyad' 'Shadi Alijani' 'Homayoun Najjaran']" ]
null
null
2407.10032
null
null
http://arxiv.org/pdf/2407.10032v1
2024-07-14T00:23:51Z
2024-07-14T00:23:51Z
LeanQuant: Accurate Large Language Model Quantization with Loss-Error-Aware Grid
Large language models (LLMs) have numerous applications across various domains, but their high computational and memory demands pose significant deployment challenges. Weight quantization is an effective technique for reducing the decoding latency and memory requirements of LLMs. Existing approaches primarily aim to maintain the quality of quantized models by preserving outliers in input features, but they still suffer significant quality loss at lower bit widths. Our approach builds on Optimal Brain Quantization (OBQ), an iterative weight-update-based quantization framework. We identify a key limitation of OBQ, specifically that its uniform quantization grid is suboptimal for maintaining model quality, as it introduces large errors to the task loss. To address this, we propose LeanQuant, which learns a loss-error-aware quantization grid by leveraging the inverse diagonal Hessian. Extensive empirical evaluations demonstrate that LeanQuant is both efficient and accurate; it can quantize a 70-billion-parameter model in 6 hours using a single 32GB GPU and performs favorably compared to competitive baselines in the 4-bit, 3-bit, and 2-bit regions.
[ "['Tianyi Zhang' 'Anshumali Shrivastava']" ]
null
null
2407.10042
null
null
http://arxiv.org/pdf/2407.10042v1
2024-07-14T01:52:10Z
2024-07-14T01:52:10Z
Harnessing Feature Clustering For Enhanced Anomaly Detection With Variational Autoencoder And Dynamic Threshold
We introduce an anomaly detection method for multivariate time series data with the aim of identifying critical periods and features influencing extreme climate events like snowmelt in the Arctic. This method leverages the Variational Autoencoder (VAE) integrated with dynamic thresholding and correlation-based feature clustering. This framework enhances the VAE's ability to identify localized dependencies and learn the temporal relationships in climate data, thereby improving the detection of anomalies as demonstrated by its higher F1-score on benchmark datasets. The study's main contributions include the development of a robust anomaly detection method, improving feature representation within VAEs through clustering, and creating a dynamic threshold algorithm for localized anomaly detection. This method offers explainability of climate anomalies across different regions.
[ "['Tolulope Ale' 'Nicole-Jeanne Schlegel' 'Vandana P. Janeja']" ]
null
null
2407.10055
null
null
http://arxiv.org/pdf/2407.10055v1
2024-07-14T02:53:25Z
2024-07-14T02:53:25Z
MKDTI: Predicting drug-target interactions via multiple kernel fusion on graph attention network
Computationally predicting drug-target relationships from bioinformatics data is a valuable tool for understanding pharmacological effects, enhancing drug development efficiency, and advancing related research. A number of structure-based, ligand-based and network-based approaches have now emerged. Furthermore, the integration of graph attention networks with intricate drug-target studies is an application area of growing interest. In our work, we formulate a model called MKDTI by extracting kernel information from the embeddings of each layer of a graph attention network. This combination improves the prediction ability with respect to novel drug-target relationships. We first build a drug-target heterogeneous network using heterogeneous data of drugs and targets, and then use a self-enhanced multi-head graph attention network to extract potential features in each layer. Next, we utilize the embeddings of each layer to computationally extract kernel matrices and fuse multiple kernel matrices. Finally, we use a Dual Laplacian Regularized Least Squares framework to forecast novel drug-target entity connections. This prediction can be facilitated by integrating the kernel matrix associated with the drug-target pairs. We measured our model's efficacy using AUPR and AUC; compared to the benchmark algorithms, our model achieves better prediction outcomes. In addition, we conducted an experiment on kernel selection. The results show that the multi-kernel fusion approach, combined with the kernel matrices generated by the graph attention network, provides complementary insights into the model, and fusing this information helps to enhance the accuracy of the predictions.
[ "['Yuhuan Zhou' 'Yulin Wu' 'Weiwei Yuan' 'Xuan Wang' 'Junyi Li']" ]
null
null
2407.10070
null
null
http://arxiv.org/pdf/2407.10070v1
2024-07-14T04:11:10Z
2024-07-14T04:11:10Z
Have ASkotch: Fast Methods for Large-scale, Memory-constrained Kernel Ridge Regression
Kernel ridge regression (KRR) is a fundamental computational tool, appearing in problems that range from computational chemistry to health analytics, with a particular interest due to its starring role in Gaussian process regression. However, it is challenging to scale KRR solvers to large datasets: with $n$ training points, a direct solver (i.e., Cholesky decomposition) uses $O(n^2)$ storage and $O(n^3)$ flops. Iterative methods for KRR, such as preconditioned conjugate gradient (PCG), avoid the cubic scaling of direct solvers and often use low-rank preconditioners; a rank $r$ preconditioner uses $O(rn)$ storage and each iteration requires $O(n^2)$ flops. To reduce the storage and iteration complexity of iterative solvers for KRR, we propose ASkotch ($\textbf{A}$ccelerated $\textbf{s}$calable $\textbf{k}$ernel $\textbf{o}$p$\textbf{t}$imization using block $\textbf{c}$oordinate descent with $\textbf{H}$essian preconditioning). For a given block size $|b| \ll n$, each iteration of ASkotch uses $O(r|b| + n)$ storage and $O(n|b|)$ flops, so ASkotch scales better than Cholesky decomposition and PCG. We prove that ASkotch obtains linear convergence to the optimum, with the convergence rate depending on the square roots of the $\textit{preconditioned}$ block condition numbers. Furthermore, we solve KRR problems that were considered to be impossibly large while using limited computational resources: we show that ASkotch outperforms PCG methods with respect to generalization error on large-scale KRR (up to $n = 10^8$) and KRR classification tasks (up to $n = 10^7$) while running each of our experiments on $\textit{a single 12 GB Titan V GPU}$. Our work opens up the possibility of as-yet-unimagined applications of KRR across a wide range of disciplines.
[ "['Pratik Rathore' 'Zachary Frangella' 'Madeleine Udell']" ]
null
null
2407.10090
null
null
http://arxiv.org/pdf/2407.10090v1
2024-07-14T05:53:18Z
2024-07-14T05:53:18Z
ReactAIvate: A Deep Learning Approach to Predicting Reaction Mechanisms and Unmasking Reactivity Hotspots
A chemical reaction mechanism (CRM) is a sequence of molecular-level events involving bond-breaking/forming processes, generating transient intermediates along the reaction pathway as reactants transform into products. Understanding such mechanisms is crucial for designing and discovering new reactions. One of the currently available methods to probe CRMs is quantum mechanical (QM) computation. The resource-intensive nature of QM methods and the scarcity of mechanism-based datasets motivated us to develop reliable ML models for predicting mechanisms. In this study, we created a comprehensive dataset with seven distinct classes, each representing uniquely characterized elementary steps. Subsequently, we developed an interpretable attention-based GNN that achieved near-unity and 96% accuracy, respectively, for reaction step classification and the prediction of reactive atoms in each such step, capturing interactions between the broader reaction context and local active regions. The near-perfect classification enables accurate prediction of both individual events and the entire CRM, mitigating potential drawbacks of Seq2Seq approaches, where a single wrongly predicted character leads to incoherent CRM identification. In addition to interpretability, our model adeptly identifies key atom(s) even in out-of-distribution classes. This generalizability allows for the inclusion of new reaction types in a modular fashion, and will thus be of value to experts for understanding the reactivity of new molecules.
[ "['Ajnabiul Hoque' 'Manajit Das' 'Mayank Baranwal' 'Raghavan B. Sunoj']" ]