categories (string)
doi (string)
id (string)
year (float64)
venue (string)
link (string)
updated (string)
published (string)
title (string)
abstract (string)
authors (sequence)
null
null
2407.07124
null
null
http://arxiv.org/pdf/2407.07124v1
2024-07-09T02:47:16Z
2024-07-09T02:47:16Z
FedClust: Tackling Data Heterogeneity in Federated Learning through Weight-Driven Client Clustering
Federated learning (FL) is an emerging distributed machine learning paradigm that enables collaborative training of machine learning models over decentralized devices without exposing their local data. One of the major challenges in FL is the presence of uneven data distributions across client devices, violating the well-known assumption of independent-and-identically-distributed (IID) training samples in conventional machine learning. To address the performance degradation issue incurred by such data heterogeneity, clustered federated learning (CFL) shows its promise by grouping clients into separate learning clusters based on the similarity of their local data distributions. However, state-of-the-art CFL approaches require a large number of communication rounds to learn the distribution similarities during training until the formation of clusters is stabilized. Moreover, some of these algorithms heavily rely on a predefined number of clusters, thus limiting their flexibility and adaptability. In this paper, we propose \emph{FedClust}, a novel approach for CFL that leverages the correlation between local model weights and the data distribution of clients. \emph{FedClust} groups clients into clusters in a one-shot manner by measuring the similarity degrees among clients based on the strategically selected partial weights of locally trained models. We conduct extensive experiments on four benchmark datasets with different non-IID data settings. Experimental results demonstrate that \emph{FedClust} achieves higher model accuracy up to $\sim$45% as well as faster convergence with a significantly reduced communication cost up to 2.7$\times$ compared to its state-of-the-art counterparts.
Md Sirajul Islam, Simin Javaherian, Fei Xu, Xu Yuan, Li Chen, Nian-Feng Tzeng
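The FedClust abstract above hinges on one concrete operation: measuring pairwise similarity between clients' strategically selected partial weights and grouping them in one shot. Below is a minimal, hypothetical sketch of that idea using cosine distances over final-layer weights and agglomerative clustering; the paper's actual layer selection, similarity measure, and clustering rule may differ.

```python
# Hypothetical sketch of one-shot weight-driven client clustering in the
# spirit of FedClust; not the paper's exact algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_clients(client_weights, distance_threshold=0.5):
    """Group clients whose selected model weights are similar.

    client_weights: (n_clients, d) array, e.g. the flattened weights of
    the final layer of each locally trained model.
    """
    dists = pdist(client_weights, metric="cosine")  # pairwise distances
    tree = linkage(dists, method="average")         # agglomerative merge tree
    # Cut by distance, so the number of clusters is not fixed in advance.
    return fcluster(tree, t=distance_threshold, criterion="distance")

# Toy usage: 6 clients whose final-layer weights come from two regimes.
rng = np.random.default_rng(0)
w = np.vstack([rng.normal(-5, 1, (3, 16)), rng.normal(5, 1, (3, 16))])
print(cluster_clients(w))  # two clusters recovered, e.g. [1 1 1 2 2 2]
```

Because the cut is on a distance threshold rather than a cluster count, no predefined number of clusters is needed, matching the flexibility the abstract emphasizes.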
null
null
2407.07128
null
null
http://arxiv.org/pdf/2407.07128v1
2024-07-09T10:42:19Z
2024-07-09T10:42:19Z
Modularity aided consistent attributed graph clustering via coarsening
Graph clustering is an important unsupervised learning technique for partitioning graphs with attributes and detecting communities. However, current methods struggle to accurately capture true community structures and intra-cluster relations, be computationally efficient, and identify smaller communities. We address these challenges by integrating coarsening and modularity maximization, effectively leveraging both adjacency and node features to enhance clustering accuracy. We propose a loss function incorporating log-determinant, smoothness, and modularity components using a block majorization-minimization technique, resulting in superior clustering outcomes. The method is theoretically consistent under the Degree-Corrected Stochastic Block Model (DC-SBM), ensuring asymptotic error-free performance and complete label recovery. Our provably convergent and time-efficient algorithm seamlessly integrates with graph neural networks (GNNs) and variational graph autoencoders (VGAEs) to learn enhanced node features and deliver exceptional clustering performance. Extensive experiments on benchmark datasets demonstrate its superiority over existing state-of-the-art methods for both attributed and non-attributed graphs.
Samarth Bhatia, Yukti Makhija, Manoj Kumar, Sandeep Kumar
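For intuition about the modularity component of the loss described above, here is a short sketch of Newman's modularity score for a hard clustering; the paper maximizes a relaxation of this quantity jointly with log-determinant and smoothness terms and a coarsening scheme, all omitted here.

```python
# Newman modularity Q of a hard clustering of an undirected graph.
import numpy as np

def modularity(A, labels):
    """A: (n, n) symmetric adjacency matrix; labels: (n,) cluster ids."""
    k = A.sum(axis=1)                  # (weighted) degrees
    two_m = k.sum()                    # 2 * number of edges
    B = A - np.outer(k, k) / two_m     # modularity matrix
    same = labels[:, None] == labels[None, :]
    return (B * same).sum() / two_m

# Two 3-cliques joined by one edge: the natural split scores highly.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))  # ~0.36
```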
null
null
2407.07133
null
null
http://arxiv.org/pdf/2407.07133v1
2024-07-09T12:21:35Z
2024-07-09T12:21:35Z
Neuromimetic metaplasticity for adaptive continual learning
Conventional intelligent systems based on deep neural network (DNN) models encounter challenges in achieving human-like continual learning due to catastrophic forgetting. Here, we propose a metaplasticity model inspired by human working memory, enabling DNNs to perform catastrophic forgetting-free continual learning without any pre- or post-processing. A key aspect of our approach involves implementing distinct types of synapses from stable to flexible, and randomly intermixing them to train synaptic connections with different degrees of flexibility. This strategy allowed the network to successfully learn a continuous stream of information, even under unexpected changes in input length. The model achieved a balanced tradeoff between memory capacity and performance without requiring additional training or structural modifications, dynamically allocating memory resources to retain both old and new information. Furthermore, the model demonstrated robustness against data poisoning attacks by selectively filtering out erroneous memories, leveraging the Hebb repetition effect to reinforce the retention of significant data.
Suhee Cho, Hyeonsu Lee, Seungdae Baek, Se-Bum Paik
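The core mechanism in the abstract above, distinct synapse types from stable to flexible randomly intermixed, can be loosely sketched as per-weight gradient scaling. The three plasticity factors below are illustrative assumptions, not the paper's values, and the working-memory-inspired model is far richer than this.

```python
# Loose sketch: assign each weight a random plasticity factor and scale
# its gradient by that factor, so stable synapses barely move.
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(32, 10)
# Random mix of synapse flexibilities for the weight matrix (assumed values).
plasticity = torch.tensor([0.01, 0.1, 1.0])[
    torch.randint(0, 3, layer.weight.shape)]

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = torch.nn.functional.cross_entropy(layer(x), y)
loss.backward()
layer.weight.grad *= plasticity   # flexible synapses learn, stable ones retain
opt.step()
```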
null
null
2407.07135
null
null
http://arxiv.org/pdf/2407.07135v1
2024-07-09T15:46:39Z
2024-07-09T15:46:39Z
Improving Out-of-Distribution Detection by Combining Existing Post-hoc Methods
Since the seminal paper of Hendrycks et al. arXiv:1610.02136, post-hoc deep Out-of-Distribution (OOD) detection has expanded rapidly. As a result, practitioners working on safety-critical applications and seeking to improve the robustness of a neural network now have a plethora of methods to choose from. However, no method outperforms every other on every dataset arXiv:2210.07242, so the current best practice is to test all the methods on the datasets at hand. This paper shifts focus from developing new methods to effectively combining existing ones to enhance OOD detection. We propose and compare four different strategies for integrating multiple detection scores into a unified OOD detector, based on techniques such as majority vote, empirical and copula-based Cumulative Distribution Function modeling, and multivariate quantiles based on optimal transport. We extend common OOD evaluation metrics -- like AUROC and FPR at fixed TPR rates -- to these multi-dimensional OOD detectors, allowing us to evaluate them and compare them with individual methods on extensive benchmarks. Furthermore, we propose a series of guidelines to choose which OOD detectors to combine in more realistic settings, i.e., in the absence of known OOD data, relying on principles drawn from Outlier Exposure arXiv:1812.04606. The code is available at https://github.com/paulnovello/multi-ood.
Paul Novello, Yannick Prudent, Joseba Dalmau, Corentin Friedrich, Yann Pequignot
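Of the four combination strategies listed above, majority vote is the simplest to make concrete. The sketch below thresholds each detector's score at a quantile of in-distribution scores (the fixed 95% TPR calibration is an assumption for illustration) and flags a sample when most detectors agree.

```python
# Majority-vote combination of several OOD detectors' scores.
import numpy as np

def majority_vote_ood(scores, id_scores, tpr=0.95):
    """scores: (n_samples, n_detectors) OOD scores (higher = more OOD).
    id_scores: (n_id, n_detectors) scores on known in-distribution data.
    """
    # Per-detector threshold keeping `tpr` of in-distribution data below it.
    thresholds = np.quantile(id_scores, tpr, axis=0)
    votes = scores > thresholds          # (n_samples, n_detectors)
    return votes.mean(axis=1) > 0.5      # OOD if most detectors agree

rng = np.random.default_rng(1)
id_sc = rng.normal(0, 1, (1000, 3))
test = np.vstack([rng.normal(0, 1, (5, 3)), rng.normal(4, 1, (5, 3))])
print(majority_vote_ood(test, id_sc))    # mostly False, then mostly True
```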
null
null
2407.07140
null
null
http://arxiv.org/pdf/2407.07140v1
2024-07-09T17:57:07Z
2024-07-09T17:57:07Z
Cardinality-Aware Set Prediction and Top-$k$ Classification
We present a detailed study of cardinality-aware top-$k$ classification, a novel approach that aims to learn an accurate top-$k$ set predictor while maintaining a low cardinality. We introduce a new target loss function tailored to this setting that accounts for both the classification error and the cardinality of the set predicted. To optimize this loss function, we propose two families of surrogate losses: cost-sensitive comp-sum losses and cost-sensitive constrained losses. Minimizing these loss functions leads to new cardinality-aware algorithms that we describe in detail in the case of both top-$k$ and threshold-based classifiers. We establish $H$-consistency bounds for our cardinality-aware surrogate loss functions, thereby providing a strong theoretical foundation for our algorithms. We report the results of extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets demonstrating the effectiveness and benefits of our cardinality-aware algorithms.
Corinna Cortes, Anqi Mao, Christopher Mohri, Mehryar Mohri, Yutao Zhong
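The target loss described above trades off classification error against the cardinality of the predicted set. The toy sketch below evaluates one such loss with a linear cardinality cost; the linear form and its weight are assumptions for illustration, not the paper's exact construction.

```python
# Toy cardinality-aware target loss: miss indicator plus a set-size cost.
import numpy as np

def cardinality_aware_loss(pred_sets, labels, cost_per_elem=0.05):
    miss = np.array([y not in s for s, y in zip(pred_sets, labels)], float)
    card = np.array([len(s) for s in pred_sets], float)
    return (miss + cost_per_elem * card).mean()

# A correct top-3 prediction pays more than a correct top-1 prediction.
print(cardinality_aware_loss([{2, 7, 9}, {1}], labels=[7, 1]))  # 0.1
```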
null
null
2407.07179
null
null
http://arxiv.org/pdf/2407.07179v1
2024-07-09T18:47:25Z
2024-07-09T18:47:25Z
TrackFormers: In Search of Transformer-Based Particle Tracking for the High-Luminosity LHC Era
High-Energy Physics experiments are facing a multi-fold data increase with every new iteration. This is certainly the case for the upcoming High-Luminosity LHC upgrade. Such increased data processing requirements force revisions to almost every step of the data processing pipeline. One such step in need of an overhaul is the task of particle track reconstruction, a.k.a., tracking. A Machine Learning-assisted solution is expected to provide significant improvements, since the most time-consuming step in tracking is the assignment of hits to particles or track candidates. This is the topic of this paper. We take inspiration from large language models. As such, we consider two approaches: the prediction of the next word in a sentence (next hit point in a track), as well as the one-shot prediction of all hits within an event. In an extensive design effort, we have experimented with three models based on the Transformer architecture and one model based on the U-Net architecture, performing track association predictions for collision event hit points. In our evaluation, we consider a spectrum of simple to complex representations of the problem, eliminating designs with lower metrics early on. We report extensive results, covering both prediction accuracy (score) and computational performance. We have made use of the REDVID simulation framework, as well as reductions applied to the TrackML data set, to compose five data sets from simple to complex, for our experiments. The results highlight distinct advantages among different designs in terms of prediction accuracy and computational performance, demonstrating the efficiency of our methodology. Most importantly, the results show the viability of a one-shot encoder-classifier based Transformer solution as a practical approach for the task of tracking.
Sascha Caron, Nadezhda Dobreva, Antonio Ferrer Sánchez, José D. Martín-Guerrero, Uraz Odyurt, Roberto Ruiz de Austri Bazan, Zef Wolffs, Yue Zhao
null
null
2407.07218
null
null
http://arxiv.org/pdf/2407.07218v1
2024-07-09T20:28:03Z
2024-07-09T20:28:03Z
Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations
One of the most promising applications of machine learning (ML) in computational physics is to accelerate the solution of partial differential equations (PDEs). The key objective of ML-based PDE solvers is to output a sufficiently accurate solution faster than standard numerical methods, which are used as a baseline comparison. We first perform a systematic review of the ML-for-PDE solving literature. Of articles that use ML to solve a fluid-related PDE and claim to outperform a standard numerical method, we determine that 79% (60/76) compare to a weak baseline. Second, we find evidence that reporting biases, especially outcome reporting bias and publication bias, are widespread. We conclude that ML-for-PDE solving research is overoptimistic: weak baselines lead to overly positive results, while reporting biases lead to underreporting of negative results. To a large extent, these issues appear to be caused by factors similar to those of past reproducibility crises: researcher degrees of freedom and a bias towards positive results. We call for bottom-up cultural changes to minimize biased reporting as well as top-down structural reforms intended to reduce perverse incentives for doing so.
Nick McGreivy, Ammar Hakim
null
null
2407.07222
null
null
http://arxiv.org/pdf/2407.07222v1
2024-07-09T20:38:01Z
2024-07-09T20:38:01Z
SPINEX-Clustering: Similarity-based Predictions with Explainable Neighbors Exploration for Clustering Problems
This paper presents a novel clustering algorithm from the SPINEX (Similarity-based Predictions with Explainable Neighbors Exploration) algorithmic family. The newly proposed clustering variant leverages the concept of similarity and higher-order interactions across multiple subspaces to group data into clusters. To showcase the merit of SPINEX, a thorough set of benchmarking experiments was carried out against 13 algorithms, namely, Affinity Propagation, Agglomerative, Birch, DBSCAN, Gaussian Mixture, HDBSCAN, K-Means, KMedoids, Mean Shift, MiniBatch K-Means, OPTICS, Spectral Clustering, and Ward Hierarchical. Then, the performance of all algorithms was examined across 51 synthetic and real datasets from various domains, dimensions, and complexities. Furthermore, we present a companion complexity analysis to compare the complexity of SPINEX to that of the aforementioned algorithms. Our results demonstrate that SPINEX can outperform commonly adopted clustering algorithms by ranking within the top-5 best performing algorithms and has moderate complexity. Finally, a demonstration of the explainability capabilities of SPINEX, along with future research needs, is presented.
MZ Naser, Ahmed Naser
null
null
2407.07225
null
null
http://arxiv.org/pdf/2407.07225v1
2024-07-09T20:44:40Z
2024-07-09T20:44:40Z
ConvNLP: Image-based AI Text Detection
The potential of generative AI technologies like large language models (LLMs) to revolutionize education is undermined by ethical considerations around their misuse, which worsens the problem of academic dishonesty. LLMs like GPT-4 and Llama 2 are becoming increasingly powerful in generating sophisticated content and answering questions, from writing academic essays to solving complex math problems. Students are relying on these LLMs to complete their assignments, thus compromising academic integrity. Solutions to detect LLM-generated text are compute-intensive and often lack generalization. This paper presents a novel approach for detecting LLM-generated AI text using a visual representation of word embeddings. We have formulated a novel Convolutional Neural Network called ZigZag ResNet, as well as a scheduler for improving generalization, named ZigZag Scheduler. Through extensive evaluation using datasets of text generated by six different state-of-the-art LLMs, our model demonstrates strong intra-domain and inter-domain generalization capabilities. Our best model detects AI-generated text with an impressive average detection rate (over inter- and intra-domain test data) of 88.35%. Through an exhaustive ablation study, our ZigZag ResNet and ZigZag Scheduler provide a performance improvement of nearly 4% over the vanilla ResNet. The end-to-end inference latency of our model is below 2.5 ms per sentence. Our solution offers a lightweight, computationally efficient, and faster alternative to existing tools for AI-generated text detection, with better generalization performance. It can help academic institutions in their fight against the misuse of LLMs in academic settings. Through this work, we aim to contribute to safeguarding the principles of academic integrity and ensuring the trustworthiness of student work in the era of advanced LLMs.
Suriya Prakash Jambunathan, Ashwath Shankarnarayan, Parijat Dube
null
null
2407.07235
null
null
http://arxiv.org/pdf/2407.07235v1
2024-07-09T21:19:49Z
2024-07-09T21:19:49Z
Speech After Gender: A Trans-Feminine Perspective on Next Steps for Speech Science and Technology
As experts in voice modification, trans-feminine gender-affirming voice teachers have unique perspectives on voice that confound current understandings of speaker identity. To demonstrate this, we present the Versatile Voice Dataset (VVD), a collection of three speakers modifying their voices along gendered axes. The VVD illustrates that current approaches in speaker modeling, based on categorical notions of gender and a static understanding of vocal texture, fail to account for the flexibility of the vocal tract. Utilizing publicly-available speaker embeddings, we demonstrate that gender classification systems are highly sensitive to voice modification, and speaker verification systems fail to identify voices as coming from the same speaker as voice modification becomes more drastic. As one path towards moving beyond categorical and static notions of speaker identity, we propose modeling individual qualities of vocal texture such as pitch, resonance, and weight.
Robin Netzorg, Alyssa Cote, Sumi Koshin, Klo Vivienne Garoute, Gopala Krishna Anumanchipalli
null
null
2407.07237
null
null
http://arxiv.org/pdf/2407.07237v2
2024-07-15T14:27:14Z
2024-07-09T21:35:19Z
The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models
Quantum Machine Learning (QML) amalgamates quantum computing paradigms with machine learning models, providing significant prospects for solving complex problems. However, with the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance, particularly against reverse engineering, which could expose trained parameters and algorithms of the models. We assume the untrusted quantum cloud provider is an adversary having white-box access to the transpiled user-designed trained QML model during inference. Reverse engineering (RE) to extract the pre-transpiled QML circuit will enable re-transpilation and usage of the model for various hardware with completely different native gate sets and even different qubit technology. Such flexibility may not be obtained from the transpiled circuit, which is tied to a particular hardware and qubit technology. Information about the number of parameters and their optimized values can allow an adversary to further train the QML model in order to alter it, tamper with its watermark, embed their own watermark, or refine the model for other purposes. In this first effort to investigate the RE of QML circuits, we perform RE and compare the training accuracy of original and reverse-engineered Quantum Neural Networks (QNNs) of various sizes. We note that multi-qubit classifiers can be reverse-engineered under specific conditions with a mean error of order 1e-2 in a reasonable time. We also propose adding dummy fixed parametric gates in the QML models to increase the RE overhead for defense. For instance, adding 2 dummy qubits and 2 layers increases the overhead by ~1.76 times for a classifier with 2 qubits and 3 layers, with a performance overhead of less than 9%. We note that RE is a very powerful attack model which warrants further efforts on defenses.
Archisman Ghosh, Swaroop Ghosh
null
null
2407.07239
null
null
http://arxiv.org/pdf/2407.07239v1
2024-07-09T21:37:36Z
2024-07-09T21:37:36Z
RotRNN: Modelling Long Sequences with Rotations
Linear recurrent models, such as State Space Models (SSMs) and Linear Recurrent Units (LRUs), have recently shown state-of-the-art performance on long sequence modelling benchmarks. Despite their success, they come with a number of drawbacks, most notably their complex initialisation and normalisation schemes. In this work, we address some of these issues by proposing RotRNN -- a linear recurrent model which utilises the convenient properties of rotation matrices. We show that RotRNN provides a simple model with fewer theoretical assumptions than prior works, with a practical implementation that remains faithful to its theoretical derivation, achieving comparable scores to the LRU and SSMs on several long sequence modelling datasets.
Rares Dolga, Kai Biegun, Jake Cunningham, David Barber
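The convenient property RotRNN exploits is that rotation matrices have unit spectral norm, so the linear recurrence neither explodes nor vanishes. A minimal sketch, assuming the recurrent matrix is parameterized as the matrix exponential of a skew-symmetric matrix (which is always a rotation); the paper's normalization scheme and efficient implementation are omitted.

```python
# Minimal rotation-based linear recurrence: h_t = A h_{t-1} + B x_t,
# with A an exact rotation built from a skew-symmetric parameter.
import torch

class RotRecurrence(torch.nn.Module):
    def __init__(self, d_hidden, d_input):
        super().__init__()
        self.S = torch.nn.Parameter(torch.randn(d_hidden, d_hidden) * 0.1)
        self.B = torch.nn.Parameter(torch.randn(d_hidden, d_input) * 0.1)

    def forward(self, x):                         # x: (T, d_input)
        A = torch.matrix_exp(self.S - self.S.T)   # exp(skew) is orthogonal
        h = torch.zeros(self.S.shape[0])
        hs = []
        for x_t in x:
            h = A @ h + self.B @ x_t              # norm of A @ h equals norm of h
            hs.append(h)
        return torch.stack(hs)

out = RotRecurrence(8, 3)(torch.randn(20, 3))
print(out.shape)  # torch.Size([20, 8])
```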
null
null
2407.07258
null
null
http://arxiv.org/pdf/2407.07258v1
2024-07-09T22:26:42Z
2024-07-09T22:26:42Z
Identification of emotions on Twitter during the 2022 electoral process in Colombia
The study of Twitter as a means for analyzing social phenomena has gained interest in recent years due to the availability of large amounts of data in a relatively spontaneous environment. Within opinion-mining tasks, emotion detection is especially relevant, as it allows for the identification of people's subjective responses to different social events in a more granular way than traditional sentiment analysis based on polarity. In the particular case of political events, the analysis of emotions in social networks can provide valuable information on the perception of candidates, proposals, and other important aspects of the public debate. In spite of this importance, there are few studies on emotion detection in Spanish and, to the best of our knowledge, few public resources exist for opinion mining in Colombian Spanish, highlighting the need to generate resources that address the specific cultural characteristics of this variety. In this work, we present a small corpus of tweets in Spanish related to the 2022 Colombian presidential elections, manually labeled with emotions using a fine-grained taxonomy. We perform classification experiments using supervised state-of-the-art models (BERT models) and compare them with GPT-3.5 in few-shot learning settings. We make our dataset and code publicly available for research purposes.
Juan Jose Iguaran Fernandez, Juan Manuel Perez, German Rosati
null
null
2407.07275
null
null
http://arxiv.org/pdf/2407.07275v1
2024-07-09T23:39:37Z
2024-07-09T23:39:37Z
Remastering Divide and Remaster: A Cinematic Audio Source Separation Dataset with Multilingual Support
Cinematic audio source separation (CASS) is a relatively new subtask of audio source separation, concerned with the separation of a mixture into the dialogue, music, and effects stems. To date, only one publicly available dataset exists for CASS, that is, the Divide and Remaster (DnR) dataset, which is currently at version 2. While DnR v2 has been an incredibly useful resource for CASS, several areas of improvement have been identified, particularly through its use in the 2023 Sound Demixing Challenge. In this work, we develop version 3 of the DnR dataset, addressing issues relating to vocal content in non-dialogue stems, loudness distributions, mastering process, and linguistic diversity. In particular, the dialogue stem of DnR v3 includes speech content from more than 30 languages from multiple families including but not limited to the Germanic, Romance, Indo-Aryan, Dravidian, Malayo-Polynesian, and Bantu families. Benchmark results using the Bandit model indicated that training on multilingual data yields significant generalizability to the model even in languages with low data availability. Even in languages with high data availability, the multilingual model often performs on par or better than dedicated models trained on monolingual CASS datasets.
Karn N. Watcharasupat, Chih-Wei Wu, Iroro Orife
null
null
2407.07277
null
null
http://arxiv.org/pdf/2407.07277v1
2024-07-09T23:52:53Z
2024-07-09T23:52:53Z
Lifestyle-Informed Personalized Blood Biomarker Prediction via Novel Representation Learning
Blood biomarkers are an essential tool for healthcare providers to diagnose, monitor, and treat a wide range of medical conditions. Current reference values and recommended ranges often rely on population-level statistics, which may not adequately account for the influence of inter-individual variability driven by factors such as lifestyle and genetics. In this work, we introduce a novel framework for predicting future blood biomarker values and define personalized references through learned representations from lifestyle data (physical activity and sleep) and blood biomarkers. Our proposed method learns a similarity-based embedding space that captures the complex relationship between biomarkers and lifestyle factors. Using the UK Biobank (257K participants), our results show that our deep-learned embeddings outperform traditional and current state-of-the-art representation learning techniques in predicting clinical diagnosis. Using a subset of UK Biobank of 6440 participants who have follow-up visits, we validate that the inclusion of these embeddings and lifestyle factors directly in blood biomarker models improves the prediction of future lab values from a single lab visit. This personalized modeling approach provides a foundation for developing more accurate risk stratification tools and tailoring preventative care strategies. In clinical settings, this translates to the potential for earlier disease detection, more timely interventions, and ultimately, a shift towards personalized healthcare.
A. Ali Heydari, Naghmeh Rezaei, Javier L. Prieto, Shwetak N. Patel, Ahmed A. Metwally
null
null
2407.07279
null
null
http://arxiv.org/pdf/2407.07279v1
2024-07-10T00:01:56Z
2024-07-10T00:01:56Z
Towards a theory of learning dynamics in deep state space models
State space models (SSMs) have shown remarkable empirical performance on many long sequence modeling tasks, but a theoretical understanding of these models is still lacking. In this work, we study the learning dynamics of linear SSMs to understand how covariance structure in data, latent state size, and initialization affect the evolution of parameters throughout learning with gradient descent. We show that focusing on the learning dynamics in the frequency domain affords analytical solutions under mild assumptions, and we establish a link between one-dimensional SSMs and the dynamics of deep linear feed-forward networks. Finally, we analyze how latent state over-parameterization affects convergence time and describe future work in extending our results to the study of deep SSMs with nonlinear connections. This work is a step toward a theory of learning dynamics in deep state space models.
Jakub Smékal, Jimmy T. H. Smith, Michael Kleinman, Dan Biderman, Scott W. Linderman
null
null
2407.07290
null
null
http://arxiv.org/pdf/2407.07290v1
2024-07-10T00:54:42Z
2024-07-10T00:54:42Z
Causal Discovery-Driven Change Point Detection in Time Series
Change point detection in time series seeks to identify times when the probability distribution of time series changes. It is widely applied in many areas, such as human-activity sensing and medical science. In the context of multivariate time series, this typically involves examining the joint distribution of high-dimensional data: If any one variable changes, the whole time series is assumed to have changed. However, in practical applications, we may be interested only in certain components of the time series, exploring abrupt changes in their distributions in the presence of other time series. Here, assuming an underlying structural causal model that governs the time-series data generation, we address this problem by proposing a two-stage non-parametric algorithm that first learns parts of the causal structure through constraint-based discovery methods. The algorithm then uses conditional relative Pearson divergence estimation to identify the change points. The conditional relative Pearson divergence quantifies the distribution disparity between consecutive segments in the time series, while the causal discovery method enables a focus on the causal mechanism, facilitating access to independent and identically distributed (IID) samples. Theoretically, the typical assumption of samples being IID in conventional change point detection methods can be relaxed based on the Causal Markov Condition. Through experiments on both synthetic and real-world datasets, we validate the correctness and utility of our approach.
Shanyun Gao, Raghavendra Addanki, Tong Yu, Ryan A. Rossi, Murat Kocaoglu
null
null
2407.07291
null
null
http://arxiv.org/pdf/2407.07291v1
2024-07-10T00:55:38Z
2024-07-10T00:55:38Z
Causal Discovery in Semi-Stationary Time Series
Discovering causal relations from observational time series without making the stationary assumption is a significant challenge. In practice, this challenge is common in many areas, such as retail sales, transportation systems, and medical science. Here, we consider this problem for a class of non-stationary time series. The structural causal model (SCM) of this type of time series, called the semi-stationary time series, exhibits that a finite number of different causal mechanisms occur sequentially and periodically across time. This model holds considerable practical utility because it can represent periodicity, including common occurrences such as seasonality and diurnal variation. We propose a constraint-based, non-parametric algorithm for discovering causal relations in this setting. The resulting algorithm, PCMCI$_{\Omega}$, can capture the alternating and recurring changes in the causal mechanisms and then identify the underlying causal graph with conditional independence (CI) tests. We show that this algorithm is sound in identifying causal relations on discrete time series. We validate the algorithm with extensive experiments on continuous and discrete simulated data. We also apply our algorithm to a real-world climate dataset.
Shanyun Gao, Raghavendra Addanki, Tong Yu, Ryan A. Rossi, Murat Kocaoglu
null
null
2407.07294
null
null
http://arxiv.org/pdf/2407.07294v1
2024-07-10T01:22:02Z
2024-07-10T01:22:02Z
Analyzing Machine Learning Performance in a Hybrid Quantum Computing and HPC Environment
We explored the possible benefits of integrating quantum simulators in a "hybrid" quantum machine learning (QML) workflow that uses both classical and quantum computations in a high-performance computing (HPC) environment. Here, we used two Oak Ridge Leadership Computing Facility HPC systems, Andes (a commodity-type Linux cluster) and Frontier (an HPE Cray EX supercomputer), along with quantum computing simulators from PennyLane and IBMQ to evaluate a hybrid QML program -- using a "ground up" approach. Using 1 GPU on Frontier, we found ~56% and ~77% speedups when compared to using Frontier's CPU and a local, non-HPC system, respectively. Analyzing performance on a larger dataset using multiple threads, the Frontier GPUs performed ~92% and ~48% faster than the Andes and Frontier CPUs, respectively. More impressively, this is a ~226% speedup over a local, non-HPC system's runtime using the same simulator and number of threads. We hope that this proof of concept will motivate more intensive hybrid QC/HPC scaling studies in the future.
Samuel T. Bieberich, Michael A. Sandoval
null
null
2407.07311
null
null
http://arxiv.org/pdf/2407.07311v1
2024-07-10T02:11:01Z
2024-07-10T02:11:01Z
ViTime: A Visual Intelligence-Based Foundation Model for Time Series Forecasting
The success of large pretrained models in natural language processing (NLP) and computer vision (CV) has opened new avenues for constructing foundation models for time series forecasting (TSF). Traditional TSF foundation models rely heavily on numerical data fitting. In contrast, the human brain is inherently skilled at processing visual information and prefers to predict future trends by observing visualized sequences. From a biomimetic perspective, utilizing models to directly process numerical sequences might not be the most effective route to achieving Artificial General Intelligence (AGI). This paper proposes ViTime, a novel Visual Intelligence-based foundation model for TSF. ViTime overcomes the limitations of numerical time series data fitting by utilizing visual data processing paradigms and employs an innovative data synthesis method during training, called Real Time Series (RealTS). Experiments on a diverse set of previously unseen forecasting datasets demonstrate that ViTime achieves state-of-the-art zero-shot performance, even surpassing the best individually trained supervised models in some situations. These findings suggest that visual intelligence can significantly enhance time series analysis and forecasting, paving the way for more advanced and versatile models in the field. The code for our framework is accessible at https://github.com/IkeYang/ViTime.
Luoxiao Yang, Yun Wang, Xinqi Fan, Israel Cohen, Yue Zhao, Zijun Zhang
null
null
2407.07320
null
null
http://arxiv.org/pdf/2407.07320v1
2024-07-10T02:31:15Z
2024-07-10T02:31:15Z
Flow to Rare Events: An Application of Normalizing Flow in Temporal Importance Sampling for Automated Vehicle Validation
Automated Vehicle (AV) validation based on simulated testing requires unbiased evaluation and high efficiency. One effective solution is to increase the exposure to risky rare events while reweighting the probability measure. However, characterizing the distribution of risky events is particularly challenging due to the paucity of samples and the temporality of continuous scenario variables. To address this, we devise a method to represent, generate, and reweight the distribution of risky rare events. We decompose the temporal evolution of continuous variables into distribution components based on conditional probability. By introducing the Risk Indicator Function, the distribution of risky rare events is theoretically precipitated out of the naturalistic driving distribution. This targeted distribution is practically generated via Normalizing Flow, which achieves exact and tractable probability evaluation of intricate distributions. The rare-event distribution is then shown to be an advantageous importance sampling distribution. We also introduce the technique of temporal importance sampling. The combined method, named TrimFlow, is executed to estimate the collision rate of car-following scenarios as a tentative practice. The results showed that sampling background vehicle maneuvers from the rare-event distribution could evolve testing scenarios to hazardous states. TrimFlow reduced the number of tests by 86.1% compared to generating testing scenarios according to their exposure in the naturalistic driving environment. In addition, the TrimFlow method is not limited to one specific type of functional scenario.
Yichun Ye, He Zhang, Ye Tian, Jian Sun
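The reweighting at the heart of the method above is standard importance sampling: estimate a rare-event rate under the naturalistic distribution using samples from a proposal concentrated on risky events. In the sketch below a Gaussian proposal stands in for the normalizing flow, which in the paper supplies exact, tractable densities for a far more intricate distribution; the threshold and distribution parameters are toy assumptions.

```python
# Importance-sampling estimate of a rare-event rate:
# E_p[I(x)] = E_q[I(x) * p(x) / q(x)], sampling from proposal q.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
is_collision = lambda x: x > 3.0         # rare under the naturalistic model
p = norm(0, 1)                           # naturalistic distribution
q = norm(3.5, 1)                         # proposal focused on the risky region

x = q.rvs(size=5000, random_state=rng)
weights = p.pdf(x) / q.pdf(x)            # exact density ratio
rate = np.mean(is_collision(x) * weights)
print(rate, p.sf(3.0))                   # IS estimate vs. exact tail probability
```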
null
null
2407.07328
null
null
http://arxiv.org/pdf/2407.07328v1
2024-07-10T02:51:35Z
2024-07-10T02:51:35Z
CATP: Context-Aware Trajectory Prediction with Competition Symbiosis
Contextual information is vital for accurate trajectory prediction. For instance, the intricate flying behavior of migratory birds hinges on their analysis of environmental cues such as wind direction and air pressure. However, the diverse and dynamic nature of contextual information renders it an arduous task for AI models to comprehend its impact on trajectories and consequently predict them accurately. To address this issue, we propose a ``manager-worker'' framework to unleash the full potential of contextual information and construct the CATP model, an implementation of the framework for Context-Aware Trajectory Prediction. The framework comprises a manager model, several worker models, and a tailored training mechanism inspired by competition symbiosis in nature. Taking CATP as an example, each worker needs to compete against others for training data and develop an advantage in predicting specific moving patterns. The manager learns the workers' performance in different contexts and selects the best one in the given context to predict trajectories, enabling CATP as a whole to operate in a symbiotic manner. We conducted two comparative experiments and an ablation study to quantitatively evaluate the proposed framework and the CATP model. The results showed that CATP could outperform SOTA models, and the framework could be generalized to different context-aware tasks.
Jiang Wu, Dongyu Liu, Yuchen Lin, Yingcai Wu
null
null
2407.07333
null
null
http://arxiv.org/pdf/2407.07333v1
2024-07-10T03:04:20Z
2024-07-10T03:04:20Z
Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy
Reinforcement learning algorithms typically rely on the assumption that the environment dynamics and value function can be expressed in terms of a Markovian state representation. However, when state information is only partially observable, how can an agent learn such a state representation, and how can it detect when it has found one? We introduce a metric that can accomplish both objectives, without requiring access to--or knowledge of--an underlying, unobservable state space. Our metric, the $\lambda$-discrepancy, is the difference between two distinct temporal difference (TD) value estimates, each computed using TD($\lambda$) with a different value of $\lambda$. Since TD($\lambda$=0) makes an implicit Markov assumption and TD($\lambda$=1) does not, a discrepancy between these estimates is a potential indicator of a non-Markovian state representation. Indeed, we prove that the $\lambda$-discrepancy is exactly zero for all Markov decision processes and almost always non-zero for a broad class of partially observable environments. We also demonstrate empirically that, once detected, minimizing the $\lambda$-discrepancy can help with learning a memory function to mitigate the corresponding partial observability. We then train a reinforcement learning agent that simultaneously constructs two recurrent value networks with different $\lambda$ parameters and minimizes the difference between them as an auxiliary loss. The approach scales to challenging partially observable domains, where the resulting agent frequently performs significantly better (and never performs worse) than a baseline recurrent agent with only a single value network.
Cameron Allen, Aaron Kirtland, Ruo Yu Tao, Sam Lobel, Daniel Scott, Nicholas Petrocelli, Omer Gottesman, Ronald Parr, Michael L. Littman, George Konidaris
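The $\lambda$-discrepancy above compares two value targets computable from the same data. The sketch below computes $\lambda$-returns for $\lambda=0$ (one-step bootstrapping, which implicitly assumes Markov states) and $\lambda=1$ (Monte Carlo) on a toy trajectory; the rewards and value estimates are made-up numbers, and a persistent gap between the resulting estimates is the signal the paper formalizes.

```python
# Lambda-returns on a single trajectory:
# G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}).
import numpy as np

def lambda_return(rewards, values, gamma, lam):
    G = np.zeros(len(rewards))
    next_G, next_V = 0.0, 0.0          # terminal state has zero value
    for t in reversed(range(len(rewards))):
        G[t] = rewards[t] + gamma * ((1 - lam) * next_V + lam * next_G)
        next_G, next_V = G[t], values[t]   # values[t] = V(s_t), used at step t-1
    return G

r = np.array([0.0, 0.0, 1.0])
V = np.array([0.2, 0.9, 0.4])          # a value estimate inconsistent with r
td0 = lambda_return(r, V, 0.9, 0.0)    # one-step TD targets
td1 = lambda_return(r, V, 0.9, 1.0)    # Monte Carlo returns
print(td1 - td0)  # a nonzero gap flags a non-Markovian representation
```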
null
null
2407.07338
null
null
http://arxiv.org/pdf/2407.07338v1
2024-07-10T03:20:17Z
2024-07-10T03:20:17Z
Towards Complete Causal Explanation with Expert Knowledge
We study the problem of restricting Markov equivalence classes of maximal ancestral graphs (MAGs) containing certain edge marks, which we refer to as expert knowledge. MAGs forming a Markov equivalence class can be uniquely represented by an essential ancestral graph. We seek to learn the restriction of the essential ancestral graph containing the proposed expert knowledge. Our contributions are several-fold. First, we prove certain properties for the entire Markov equivalence class including a conjecture from Ali et al. (2009). Second, we present three sound graphical orientation rules, two of which generalize previously known rules, for adding expert knowledge to an essential graph. We also show that some orientation rules of Zhang (2008) are not needed for restricting the Markov equivalence class with expert knowledge. We provide an algorithm for including this expert knowledge and show that our algorithm is complete in certain settings, i.e., in these settings, the output of our algorithm is a restricted essential ancestral graph. We conjecture this algorithm is complete generally. Outside of our specified settings, we provide an algorithm for checking whether a graph is a restricted essential graph and discuss its runtime. This work can be seen as a generalization of Meek (1995).
Aparajithan Venkateswaran, Emilija Perkovic
null
null
2407.07346
null
null
http://arxiv.org/pdf/2407.07346v2
2024-07-13T21:29:36Z
2024-07-10T03:52:53Z
INSIGHT: Universal Neural Simulator for Analog Circuits Harnessing Autoregressive Transformers
Analog front-end design heavily relies on specialized human expertise and costly trial-and-error simulations, which motivated many prior works on analog design automation. However, efficient and effective exploration of the vast and complex design space remains constrained by the time-consuming nature of SPICE simulations, making effective design automation a challenging endeavor. In this paper, we introduce INSIGHT, a GPU-powered, technology-agnostic, effective universal neural simulator in the analog front-end design automation loop. INSIGHT accurately predicts the performance metrics of analog circuits across various technologies with just a few microseconds of inference time. Notably, its autoregressive capabilities enable INSIGHT to accurately predict simulation-costly critical transient specifications leveraging less expensive performance metric information. Its low cost and high fidelity make INSIGHT a good substitute for standard simulators in analog front-end optimization frameworks. INSIGHT is compatible with any optimization framework, facilitating enhanced design space exploration for sample efficiency through sophisticated offline learning and adaptation techniques. Our experiments demonstrate that INSIGHT-M, a model-based batch reinforcement learning sizing framework with INSIGHT as the accurate surrogate, only requires < 20 real-time simulations with 100-1000x lower simulation costs and significant speedup over existing sizing methods.
Souradip Poddar, Youngmin Oh, Yao Lai, Hanqing Zhu, Bosun Hwang, David Z. Pan
null
null
2407.07350
null
null
http://arxiv.org/abs/2407.07350v1
2024-07-10T04:03:23Z
2024-07-10T04:03:23Z
Long-Term Fairness in Sequential Multi-Agent Selection with Positive Reinforcement
While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term. In this paper, we examine this hypothesis and its consequences in a setting in which multiple agents are selecting from a common pool of applicants. We propose the Multi-agent Fair-Greedy policy, which balances greedy score maximization and fairness. Under this policy, we prove that the resource pool and the admissions converge to a long-term fairness target set by the agents when the score distributions across the groups in the population are identical. We provide empirical evidence of the existence of equilibria under non-identical score distributions through synthetic and adapted real-world datasets. We then sound a cautionary note for more complex applicant pool evolution models, under which uncoordinated behavior by the agents can cause negative reinforcement, leading to a reduction in the fraction of under-represented applicants. Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model, with a number of open issues that remain to be explored by algorithm designers, social scientists, and policymakers.
Bhagyashree Puranik, Ozgur Guldogan, Upamanyu Madhow, Ramtin Pedarsani
null
null
2407.07357
null
null
http://arxiv.org/pdf/2407.07357v1
2024-07-10T04:28:21Z
2024-07-10T04:28:21Z
A deep graph model for the signed interaction prediction in biological network
In pharmaceutical research, the strategy of drug repurposing accelerates the development of new therapies while reducing R&D costs. Network pharmacology lays the theoretical groundwork for identifying new drug indications, and deep graph models have become essential for their precision in mapping complex biological networks. Our study introduces an advanced graph model that utilizes graph convolutional networks and tensor decomposition to effectively predict signed chemical-gene interactions. This model demonstrates superior predictive performance, especially in handling the polar relations in biological networks. Our research opens new avenues for drug discovery and repurposing, especially in understanding the mechanisms of action of drugs.
Shuyi Jin, Mengji Zhang, Meijie Wang, Lun Yu
null
null
2407.07358
null
null
http://arxiv.org/pdf/2407.07358v1
2024-07-10T04:31:50Z
2024-07-10T04:31:50Z
SGM-PINN: Sampling Graphical Models for Faster Training of Physics-Informed Neural Networks
SGM-PINN is a graph-based importance sampling framework to improve the training efficacy of Physics-Informed Neural Networks (PINNs) on parameterized problems. By applying a graph decomposition scheme to an undirected Probabilistic Graphical Model (PGM) built from the training dataset, our method generates node clusters encoding conditional dependence between training samples. Biasing sampling towards more important clusters allows smaller mini-batches and training datasets, improving training speed and accuracy. We additionally fuse an efficient robustness metric with residual losses to determine regions requiring additional sampling. Experiments demonstrate the advantages of the proposed framework, achieving $3\times$ faster convergence compared to prior state-of-the-art sampling methods.
John Anticev, Ali Aghdaei, Wuxinlin Cheng, Zhuo Feng
null
null
2407.07360
null
null
http://arxiv.org/pdf/2407.07360v1
2024-07-10T04:33:43Z
2024-07-10T04:33:43Z
Towards a text-based quantitative and explainable histopathology image analysis
Recently, vision-language pre-trained models have emerged in computational pathology. Previous works generally focused on the alignment of image-text pairs via the contrastive pre-training paradigm. Such pre-trained models have been applied to pathology image classification in zero-shot learning or transfer learning fashion. Herein, we hypothesize that the pre-trained vision-language models can be utilized for quantitative histopathology image analysis through a simple image-to-text retrieval. To this end, we propose a Text-based Quantitative and Explainable histopathology image analysis, which we call TQx. Given a set of histopathology images, we adopt a pre-trained vision-language model to retrieve a word-of-interest pool. The retrieved words are then used to quantify the histopathology images and generate understandable feature embeddings due to the direct mapping to the text description. To evaluate the proposed method, the text-based embeddings of four histopathology image datasets are utilized to perform clustering and classification tasks. The results demonstrate that TQx is able to quantify and analyze histopathology images that are comparable to the prevalent visual models in computational pathology.
Anh Tien Nguyen, Trinh Thi Le Vuong, Jin Tae Kwak
null
null
2407.07361
null
null
http://arxiv.org/pdf/2407.07361v1
2024-07-10T04:39:56Z
2024-07-10T04:39:56Z
Characterizing Encrypted Application Traffic through Cellular Radio Interface Protocol
Modern applications are end-to-end encrypted to prevent data from being read or secretly modified. 5G technology provides ubiquitous access to these applications without compromising the application-specific performance and latency goals. In this paper, we empirically demonstrate that 5G radio communication becomes the side channel to precisely infer the user's applications in real-time. The key idea lies in observing the 5G physical and MAC layer interactions over time that reveal the application's behavior. The MAC layer receives the data from the application and requests the network to assign the radio resource blocks. The network assigns the radio resources as per application requirements, such as priority, Quality of Service (QoS) needs, amount of data to be transmitted, and buffer size. The adversary can passively observe the radio resources to fingerprint the applications. We empirically demonstrate this attack by considering four different categories of applications: online shopping, voice/video conferencing, video streaming, and Over-The-Top (OTT) media platforms. Finally, we have also demonstrated that an attacker can differentiate various types of applications in real-time within each category.
Md Ruman Islam, Raja Hasnain Anwar, Spyridon Mastorakis, Muhammad Taqi Raza
null
null
2407.07364
null
null
http://arxiv.org/pdf/2407.07364v1
2024-07-10T04:53:26Z
2024-07-10T04:53:26Z
Real-time system optimal traffic routing under uncertainties -- Can physics models boost reinforcement learning?
System optimal traffic routing can mitigate congestion by assigning routes for a portion of vehicles so that the total travel time of all vehicles in the transportation system can be reduced. However, achieving real-time optimal routing poses challenges due to uncertain demands and unknown system dynamics, particularly in expansive transportation networks. While physics model-based methods are sensitive to uncertainties and model mismatches, model-free reinforcement learning struggles with learning inefficiencies and interpretability issues. Our paper presents TransRL, a novel algorithm that integrates reinforcement learning with physics models for enhanced performance, reliability, and interpretability. TransRL begins by establishing a deterministic policy grounded in physics models, from which it derives a differentiable and stochastic teacher policy that guides its learning. During training, TransRL aims to maximize cumulative rewards while minimizing the Kullback-Leibler (KL) divergence between the current policy and the teacher policy. This approach enables TransRL to simultaneously leverage interactions with the environment and insights from physics models. We conduct experiments on three transportation networks with up to hundreds of links. The results demonstrate TransRL's superiority over traffic model-based methods for being adaptive and learning from the actual network data. By leveraging the information from physics models, TransRL consistently outperforms state-of-the-art reinforcement learning algorithms such as proximal policy optimization (PPO) and soft actor critic (SAC). Moreover, TransRL's actions exhibit higher reliability and interpretability compared to baseline reinforcement learning approaches like PPO and SAC.
Zemian Ke, Qiling Zou, Jiachao Liu, Sean Qian
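The training objective sketched in the abstract, maximize reward while staying close to a physics-based teacher, reduces to a KL-penalized policy loss. The toy example below assumes a four-route choice, made-up travel times, and an arbitrary penalty coefficient beta; it is a sketch of the objective's shape, not the paper's network experiment.

```python
# KL-regularized policy objective: loss = -E_pi[reward] + beta * KL(pi || teacher).
import torch

logits = torch.zeros(4, requires_grad=True)           # learnable route policy
teacher = torch.tensor([0.7, 0.1, 0.1, 0.1])          # from a physics model (assumed)
rewards = torch.tensor([-10.0, -14.0, -12.0, -30.0])  # negative travel times (toy)
beta = 0.5

opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    probs = torch.softmax(logits, dim=0)
    kl = (probs * (probs / teacher).log()).sum()      # KL(pi || pi_teacher)
    loss = -(probs * rewards).sum() + beta * kl
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))  # leans toward high-reward routes,
                                     # anchored by the teacher's preference
```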
null
null
2407.07368
null
null
http://arxiv.org/pdf/2407.07368v1
2024-07-10T05:03:48Z
2024-07-10T05:03:48Z
Data-driven Bayesian State Estimation with Compressed Measurement of Model-free Process using Semi-supervised Learning
The research topic is: data-driven Bayesian state estimation with compressed measurement (BSCM) of model-free process, say for a (causal) tracking application. The dimension of the temporal measurement vector is lower than the dimension of the temporal state vector to be estimated. Hence the state estimation problem is an underdetermined inverse problem. The state-space model (SSM) of the underlying dynamical process is assumed to be unknown and hence, we use the terminology 'model-free process'. In the absence of the SSM, we cannot employ traditional model-driven methods like the Kalman Filter (KF) and Particle Filter (PF) and instead require data-driven methods. We first experimentally show that two existing unsupervised learning-based data-driven methods fail to address the BSCM problem for model-free processes; these are the data-driven nonlinear state estimation (DANSE) method and the deep Markov model (DMM) method. The unsupervised learning uses unlabelled data comprised of only noisy measurements. While DANSE provides a good predictive performance to model the temporal measurement data as time-series, its unsupervised learning lacks a regularization for state estimation. We then investigate the use of a semi-supervised learning approach, and develop a semi-supervised learning-based DANSE method, referred to as SemiDANSE. In the semi-supervised learning, we use a limited amount of labelled data along with a large amount of unlabelled data, and that helps to bring the desired regularization for the BSCM problem in the absence of the SSM. Here, labelled data means pairwise measurement-and-state data. Using three chaotic dynamical systems (or processes) with nonlinear SSMs as benchmark, we show that the data-driven SemiDANSE provides competitive performance for BSCM against three SSM-informed methods - a hybrid method called KalmanNet, and two traditional model-driven methods called extended KF and unscented KF.
Anubhab Ghosh, Yonina C. Eldar, Saikat Chatterjee
null
null
2407.07373
null
null
http://arxiv.org/pdf/2407.07373v1
2024-07-10T05:17:55Z
2024-07-10T05:17:55Z
Automatic Extraction of Disease Risk Factors from Medical Publications
We present a novel approach to automating the identification of risk factors for diseases from medical literature, leveraging pre-trained models in the bio-medical domain, while tuning them for the specific task. Faced with the challenges of the diverse and unstructured nature of medical articles, our study introduces a multi-step system to first identify relevant articles, then classify them based on the presence of risk factor discussions and, finally, extract specific risk factor information for a disease through a question-answering model. Our contributions include the development of a comprehensive pipeline for the automated extraction of risk factors and the compilation of several datasets, which can serve as valuable resources for further research in this area. These datasets encompass a wide range of diseases, as well as their associated risk factors, meticulously identified and validated through a fine-grained evaluation scheme. We conducted both automatic and thorough manual evaluation, demonstrating encouraging results. We also highlight the importance of improving models and expanding dataset comprehensiveness to keep pace with the rapidly evolving field of medical research.
Maxim Rubchinsky, Ella Rabinovich, Adi Shraibman, Netanel Golan, Tali Sahar, Dorit Shweiki
null
null
2407.07376
null
null
http://arxiv.org/pdf/2407.07376v1
2024-07-10T05:37:02Z
2024-07-10T05:37:02Z
Deep(er) Reconstruction of Imaging Cherenkov Detectors with Swin Transformers and Normalizing Flow Models
Imaging Cherenkov detectors are crucial for particle identification (PID) in nuclear and particle physics experiments. Fast reconstruction algorithms are essential for near real-time alignment, calibration, data quality control, and efficient analysis. At the future Electron-Ion Collider (EIC), the ePIC detector will feature a dual Ring Imaging Cherenkov (dual-RICH) detector in the hadron direction, a Detector of Internally Reflected Cherenkov (DIRC) in the barrel, and a proximity focus RICH in the electron direction. This paper focuses on the DIRC detector, which presents complex hit patterns and is also used for PID of pions and kaons in the GlueX experiment at JLab. We present Deep(er)RICH, an extension of the seminal DeepRICH work, offering improved and faster PID compared to traditional methods and, for the first time, fast and accurate simulation. This advancement addresses a major bottleneck in Cherenkov detector simulations involving photon tracking through complex optical elements. Our results leverage advancements in Vision Transformers, specifically hierarchical Swin Transformer and normalizing flows. These methods enable direct learning from real data and the reconstruction of complex topologies. We conclude by discussing the implications and future extensions of this work, which can offer capabilities for PID for multiple cutting-edge experiments like the future EIC.
Cristiano Fanelli, James Giroux, Justin Stevens
null
null
2407.07392
null
null
http://arxiv.org/pdf/2407.07392v1
2024-07-10T06:32:58Z
2024-07-10T06:32:58Z
Malicious Path Manipulations via Exploitation of Representation Vulnerabilities of Vision-Language Navigation Systems
Building on the unprecedented capabilities of large language models for command understanding and zero-shot recognition of multi-modal vision-language transformers, visual language navigation (VLN) has emerged as an effective way to address multiple fundamental challenges toward a natural language interface to robot navigation. However, such vision-language models are inherently vulnerable due to the lack of semantic meaning of the underlying embedding space. Using a recently developed gradient based optimization procedure, we demonstrate that images can be modified imperceptibly to match the representation of totally different images and unrelated texts for a vision-language model. Building on this, we develop algorithms that can adversarially modify a minimal number of images so that the robot will follow a route of choice for commands that require a number of landmarks. We demonstrate this experimentally using a recently proposed VLN system: for a given navigation command, a robot can be made to follow drastically different routes. We also develop an efficient algorithm to detect such malicious modifications reliably based on the fact that the adversarially modified images have much higher sensitivity to added Gaussian noise than the original images.
Chashi Mahiul Islam, Shaeke Salman, Montasir Shams, Xiuwen Liu, Piyush Kumar
null
null
2407.07410
null
null
http://arxiv.org/pdf/2407.07410v1
2024-07-10T07:12:50Z
2024-07-10T07:12:50Z
Mutual Information calculation on different appearances
Mutual information has many applications in image alignment and matching, mainly due to its ability to measure the statistical dependence between two images even when they come from different modalities (e.g., CT and MRI). It considers not only the pixel intensities of the images but also the spatial relationships between the pixels. In this project, we apply the mutual information formula to image matching, where image A is the moving object and image B is the target object, and we calculate the mutual information between them to evaluate the similarity of the images. For comparison, we also use entropy and information-gain methods to test the dependency of the images. We further investigate the effect of different environments on the mutual information of the same image, demonstrating the results through experiments and plots.
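To make the computation concrete, here is a minimal histogram-based sketch of the estimator $I(A;B)=H(A)+H(B)-H(A,B)$ applied to two equally sized images; the bin count and the toy inputs are illustrative assumptions, not details from the project.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of I(A;B) = H(A) + H(B) - H(A,B) for two images."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()        # joint intensity distribution p(a, b)
    px = pxy.sum(axis=1)                 # marginal p(a)
    py = pxy.sum(axis=0)                 # marginal p(b)
    nz = pxy > 0                         # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Toy check: an image shares more information with itself than with noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
print(mutual_information(img, img), mutual_information(img, noise))
```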
[ "['Jiecheng Liao' 'Junhao Lu' 'Jeff Ji' 'Jiacheng He']" ]
null
null
2407.07421
null
null
http://arxiv.org/abs/2407.07421v1
2024-07-10T07:23:21Z
2024-07-10T07:23:21Z
Federated PCA on Grassmann Manifold for IoT Anomaly Detection
With the proliferation of the Internet of Things (IoT) and the rising interconnectedness of devices, network security faces significant challenges, especially from anomalous activities. While traditional machine learning-based intrusion detection systems (ML-IDS) effectively employ supervised learning methods, they possess limitations such as the requirement for labeled data and challenges with high dimensionality. Recent unsupervised ML-IDS approaches such as AutoEncoders and Generative Adversarial Networks (GAN) offer alternative solutions but pose challenges in deployment onto resource-constrained IoT devices and in interpretability. To address these concerns, this paper proposes a novel federated unsupervised anomaly detection framework, FedPCA, that leverages Principal Component Analysis (PCA) and the Alternating Direction Method of Multipliers (ADMM) to learn common representations of distributed non-i.i.d. datasets. Building on the FedPCA framework, we propose two algorithms, FEDPE in Euclidean space and FEDPG on Grassmann manifolds. Our approach enables real-time threat detection and mitigation at the device level, enhancing network resilience while ensuring privacy. Moreover, the proposed algorithms are accompanied by theoretical convergence rates even under a subsampling scheme, a novel result. Experimental results on the UNSW-NB15 and TON-IoT datasets show that our proposed methods offer performance in anomaly detection comparable to nonlinear baselines, while providing significant improvements in communication and memory efficiency, underscoring their potential for securing IoT networks.
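As a rough, centralized stand-in for the detection step (FedPCA itself learns the shared principal components federatively via ADMM, which is not reproduced here), per-device anomaly scoring with agreed-upon components reduces to a PCA reconstruction error; the dimensions and synthetic data below are assumptions.

```python
import numpy as np

def pca_anomaly_scores(train_x, test_x, k=5):
    """Anomaly scores as reconstruction error in a k-dimensional PCA subspace."""
    mean = train_x.mean(axis=0)
    # Top-k principal directions of the centered training data via SVD.
    _, _, vt = np.linalg.svd(train_x - mean, full_matrices=False)
    components = vt[:k]                                  # shape (k, d)
    centered = test_x - mean
    recon = centered @ components.T @ components         # project and lift back
    return np.linalg.norm(centered - recon, axis=1)      # large => anomalous

rng = np.random.default_rng(0)
normal_traffic = rng.normal(size=(500, 20))              # hypothetical features
test = np.vstack([rng.normal(size=(5, 20)),              # benign-like rows
                  rng.normal(5.0, 1.0, size=(5, 20))])   # shifted, anomalous rows
print(pca_anomaly_scores(normal_traffic, test).round(1))
```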
[ "['Tung-Anh Nguyen' 'Long Tan Le' 'Tuan Dung Nguyen' 'Wei Bao'\n 'Suranga Seneviratne' 'Choong Seon Hong' 'Nguyen H. Tran']" ]
null
null
2407.07450
null
null
http://arxiv.org/pdf/2407.07450v1
2024-07-10T08:07:55Z
2024-07-10T08:07:55Z
Using Low-Discrepancy Points for Data Compression in Machine Learning: An Experimental Comparison
Low-discrepancy points (also called Quasi-Monte Carlo points) are deterministically and cleverly chosen point sets in the unit cube, which provide an approximation of the uniform distribution. We explore two methods based on such low-discrepancy points to reduce large data sets in order to train neural networks. The first one is the method of Dick and Feischl [4], which relies on digital nets and an averaging procedure. Motivated by our experimental findings, we construct a second method, which again uses digital nets, but Voronoi clustering instead of averaging. Both methods are compared to the supercompress approach of [14], which is a variant of the K-means clustering algorithm. The comparison is done in terms of the compression error for different objective functions and the accuracy of the training of a neural network.
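A small sketch of the idea using scrambled Sobol points from `scipy.stats.qmc` as the low-discrepancy set and a Voronoi-style reduction loosely echoing the second method; the paper's constructions use digital nets and specific weighting schemes, so treat this only as an illustration.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
data = rng.random((10_000, 2))                 # large data set in the unit square

# 2^7 = 128 scrambled Sobol points approximate the uniform distribution.
sobol = qmc.Sobol(d=2, scramble=True, seed=1)
centers = sobol.random_base2(m=7)

# Voronoi-style reduction: each center represents the mean of its cell,
# weighted by the cell's share of the data.
assign = np.argmin(((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
compressed = np.array([data[assign == k].mean(axis=0) if np.any(assign == k)
                       else centers[k] for k in range(len(centers))])
weights = np.bincount(assign, minlength=len(centers)) / len(data)
print(compressed.shape, weights.sum())         # (128, 2) 1.0
```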
[ "['Simone Göttlich' 'Jacob Heieck' 'Andreas Neuenkirch']" ]
null
null
2407.07454
null
null
http://arxiv.org/pdf/2407.07454v1
2024-07-10T08:16:13Z
2024-07-10T08:16:13Z
CM-DQN: A Value-Based Deep Reinforcement Learning Model to Simulate Confirmation Bias
In human decision-making tasks, individuals learn through trials and prediction errors. When individuals learn a task, some are more influenced by good outcomes, while others weigh bad outcomes more heavily. Such confirmation bias can lead to different learning effects. In this study, we propose a new deep reinforcement learning algorithm, CM-DQN, which applies the idea of different update strategies for positive or negative prediction errors to simulate the human decision-making process when the task's states are continuous and the actions are discrete. We test in the Lunar Lander environment with confirmatory bias, disconfirmatory bias, and no bias to observe the learning effects. Moreover, we apply the confirmation model to a multi-armed bandit problem (an environment with discrete states and discrete actions), which utilizes the same idea as our proposed algorithm, as a contrast experiment to algorithmically simulate the impact of different confirmation biases on the decision-making process. In both experiments, confirmatory bias leads to a better learning effect. Our code can be found here https://github.com/Patrickhshs/CM-DQN.
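The confirmation-model update the paper builds on is easiest to see in its bandit contrast experiment: scale positive and negative prediction errors by different learning rates. The sketch below is a toy two-armed version under assumed reward probabilities and rates, not CM-DQN itself.

```python
import numpy as np

def run_bandit(alpha_pos, alpha_neg, p=(0.4, 0.6), steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning on a 2-armed Bernoulli bandit with
    asymmetric learning rates for positive vs. negative prediction errors."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    total = 0.0
    for _ in range(steps):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
        r = float(rng.random() < p[a])            # Bernoulli reward
        delta = r - q[a]                          # prediction error
        q[a] += (alpha_pos if delta > 0 else alpha_neg) * delta
        total += r
    return total / steps

# Biased agent (learns more from positive errors) vs. unbiased agent.
print(run_bandit(alpha_pos=0.2, alpha_neg=0.05), run_bandit(0.1, 0.1))
```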
[ "['Jiacheng Shen' 'Lihan Feng']" ]
null
null
2407.07457
null
null
http://arxiv.org/pdf/2407.07457v2
2024-07-11T06:06:33Z
2024-07-10T08:20:47Z
GLBench: A Comprehensive Benchmark for Graph with Large Language Models
The emergence of large language models (LLMs) has revolutionized the way we interact with graphs, leading to a new paradigm called GraphLLM. Despite the rapid development of GraphLLM methods in recent years, the progress and understanding of this field remain unclear due to the lack of a benchmark with consistent experimental protocols. To bridge this gap, we introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios. GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks. Through extensive experiments on a collection of real-world datasets with consistent data processing and splitting strategies, we have uncovered several key findings. Firstly, GraphLLM methods outperform traditional baselines in supervised settings, with LLM-as-enhancers showing the most robust performance. However, using LLMs as predictors is less effective and often leads to uncontrollable output issues. We also notice that no clear scaling laws exist for current GraphLLM methods. In addition, both structures and semantics are crucial for effective zero-shot transfer, and our proposed simple baseline can even outperform several models tailored for zero-shot scenarios. The data and code of the benchmark can be found at https://github.com/NineAbyss/GLBench.
[ "['Yuhan Li' 'Peisong Wang' 'Xiao Zhu' 'Aochuan Chen' 'Haiyun Jiang'\n 'Deng Cai' 'Victor Wai Kin Chan' 'Jia Li']" ]
null
null
2407.07458
null
null
http://arxiv.org/pdf/2407.07458v1
2024-07-10T08:21:01Z
2024-07-10T08:21:01Z
Machine Learning Assisted Design of mmWave Wireless Transceiver Circuits
As fifth-generation (5G) and upcoming sixth-generation (6G) communications exhibit tremendous demands in providing high data throughput with a relatively low latency, millimeter-wave (mmWave) technologies manifest themselves as the key enabling components to achieve the envisioned performance and tasks. In this context, mmWave integrated circuits (IC) have attracted significant research interests over the past few decades, ranging from individual block design to complex system design. However, the highly nonlinear properties and intricate trade-offs involved render the design of analog or RF circuits a complicated process. The rapid evolution of fabrication technology also results in an increasingly long time allocated in the design process due to more stringent requirements. In this thesis, 28-GHz transceiver circuits are first investigated with detailed schematics and associated performance metrics. In this case, two target systems comprising heterogeneous individual blocks are selected and demonstrated on both the transmitter and receiver sides. Subsequently, some conventional and large-scale machine learning (ML) approaches are integrated into the design pipeline of the chosen systems to predict circuit parameters based on desired specifications, thereby circumventing the typical time-consuming iterations found in traditional methods. Finally, some potential research directions are discussed from the perspectives of circuit design and ML algorithms.
[ "['Xuzhe Zhao']" ]
null
null
2407.07462
null
null
http://arxiv.org/pdf/2407.07462v1
2024-07-10T08:32:26Z
2024-07-10T08:32:26Z
MAN TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions
Autonomous trucking is a promising technology that can greatly impact modern logistics and the environment. Ensuring its safety on public roads is one of the main duties that requires an accurate perception of the environment. To achieve this, machine learning methods rely on large datasets, but to this day, no such datasets are available for autonomous trucks. In this work, we present MAN TruckScenes, the first multimodal dataset for autonomous trucking. MAN TruckScenes allows the research community to come into contact with truck-specific challenges, such as trailer occlusions, novel sensor perspectives, and terminal environments for the first time. It comprises more than 740 scenes of 20 s each within a multitude of different environmental conditions. The sensor set includes 4 cameras, 6 lidar sensors, 6 radar sensors, 2 IMUs, and a high-precision GNSS. The dataset's 3D bounding boxes were manually annotated and carefully reviewed to achieve a high quality standard. Bounding boxes are available for 27 object classes, 15 attributes, and a range of more than 230 m. The scenes are tagged according to 34 distinct scene tags, and all objects are tracked throughout the scene to promote a wide range of applications. Additionally, MAN TruckScenes is the first dataset to provide 4D radar data with 360° coverage and is thereby the largest radar dataset with annotated 3D bounding boxes. Finally, we provide extensive dataset analysis and baseline results. The dataset, development kit and more are available online.
[ "['Felix Fent' 'Fabian Kuttenreich' 'Florian Ruch' 'Farija Rizwin'\n 'Stefan Juergens' 'Lorenz Lechermann' 'Christian Nissler' 'Andrea Perl'\n 'Ulrich Voll' 'Min Yan' 'Markus Lienkamp']" ]
null
null
2407.07482
null
null
http://arxiv.org/pdf/2407.07482v1
2024-07-10T09:13:11Z
2024-07-10T09:13:11Z
Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations
We study the problem of assessing the robustness of counterfactual explanations for deep learning models. We focus on $\textit{plausible model shifts}$ altering model parameters and propose a novel framework to reason about the robustness property in this setting. To motivate our solution, we begin by showing for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete. As this (practically) rules out the existence of scalable algorithms for exactly computing robustness, we propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees while preserving scalability. Remarkably, and differently from existing solutions targeting plausible model shifts, our approach does not impose requirements on the network to be analyzed, thus enabling robustness analysis on a wider range of architectures. Experiments on four binary classification datasets indicate that our method improves the state of the art in generating robust explanations, outperforming existing methods on a range of metrics.
[ "['Luca Marzari' 'Francesco Leofante' 'Ferdinando Cicalese'\n 'Alessandro Farinelli']" ]
null
null
2407.07492
null
null
http://arxiv.org/pdf/2407.07492v1
2024-07-10T09:24:50Z
2024-07-10T09:24:50Z
Fine-Grained Classification for Poisonous Fungi Identification with Transfer Learning
FungiCLEF 2024 addresses the fine-grained visual categorization (FGVC) of fungi species, with a focus on identifying poisonous species. This task is challenging due to the size and class imbalance of the dataset, subtle inter-class variations, and significant intra-class variability amongst samples. In this paper, we document our approach in tackling this challenge through the use of ensemble classifier heads on pre-computed image embeddings. Our team (DS@GT) demonstrates that state-of-the-art self-supervised vision models can be utilized as robust feature extractors for downstream computer vision tasks without the need for task-specific fine-tuning of the vision backbone. Our approach achieved the best Track 3 score (0.345), accuracy (78.4%) and macro-F1 (0.577) on the private test set in post competition evaluation. Our code is available at https://github.com/dsgt-kaggle-clef/fungiclef-2024.
[ "['Christopher Chiu' 'Maximilian Heil' 'Teresa Kim' 'Anthony Miyaguchi']" ]
null
null
2407.07521
null
null
http://arxiv.org/pdf/2407.07521v1
2024-07-10T10:18:07Z
2024-07-10T10:18:07Z
CHILLI: A data context-aware perturbation method for XAI
The trustworthiness of Machine Learning (ML) models can be difficult to assess, but is critical in high-risk or ethically sensitive applications. Many models are treated as a `black-box' where the reasoning or criteria for a final decision is opaque to the user. To address this, some existing Explainable AI (XAI) approaches approximate model behaviour using perturbed data. However, such methods have been criticised for ignoring feature dependencies, with explanations being based on potentially unrealistic data. We propose a novel framework, CHILLI, for incorporating data context into XAI by generating contextually aware perturbations, which are faithful to the training data of the base model being explained. This is shown to improve both the soundness and accuracy of the explanations.
[ "['Saif Anwar' 'Nathan Griffiths' 'Abhir Bhalerao' 'Thomas Popham']" ]
null
null
2407.07528
null
null
http://arxiv.org/pdf/2407.07528v1
2024-07-10T10:31:57Z
2024-07-10T10:31:57Z
MLRS-PDS: A Meta-learning recommendation of dynamic ensemble selection pipelines
Dynamic Selection (DS), where base classifiers are chosen from a pool of classifiers for each new instance at test time, has been shown to be highly effective in pattern recognition. However, instability and redundancy in classifier pools can impede computational efficiency and accuracy in dynamic ensemble selection. This paper introduces a meta-learning recommendation system (MLRS) to recommend the optimal pool generation scheme for DES methods tailored to individual datasets. The system employs a meta-model built from dataset meta-features to predict the most suitable pool generation scheme and DES method for a given dataset. Through an extensive experimental study encompassing 288 datasets, we demonstrate that this meta-learning recommendation system outperforms traditional fixed pool or DES method selection strategies, highlighting the efficacy of a meta-learning approach in refining DES method selection. The source code, datasets, and supplementary results can be found in this project's GitHub repository: https://github.com/Menelau/MLRS-PDS.
[ "['Hesam Jalalian' 'Rafael M. O. Cruz']" ]
null
null
2407.07530
null
null
http://arxiv.org/pdf/2407.07530v1
2024-07-10T10:36:11Z
2024-07-10T10:36:11Z
How Aligned are Different Alignment Metrics?
In recent years, various methods and benchmarks have been proposed to empirically evaluate the alignment of artificial neural networks to human neural and behavioral data. But how aligned are different alignment metrics? To answer this question, we analyze visual data from Brain-Score (Schrimpf et al., 2018), including metrics from the model-vs-human toolbox (Geirhos et al., 2021), together with human feature alignment (Linsley et al., 2018; Fel et al., 2022) and human similarity judgements (Muttenthaler et al., 2022). We find that pairwise correlations between neural scores and behavioral scores are quite low and sometimes even negative. For instance, the average correlation between those 80 models on Brain-Score that were fully evaluated on all 69 alignment metrics we considered is only 0.198. Assuming that all of the employed metrics are sound, this implies that alignment with human perception may best be thought of as a multidimensional concept, with different methods measuring fundamentally different aspects. Our results underline the importance of integrative benchmarking, but also raise questions about how to correctly combine and aggregate individual metrics. Aggregating by taking the arithmetic average, as done in Brain-Score, leads to the overall performance currently being dominated by behavior (95.25% explained variance) while the neural predictivity plays a less important role (only 33.33% explained variance). As a first step towards making sure that different alignment metrics all contribute fairly towards an integrative benchmark score, we therefore conclude by comparing three different aggregation options.
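The headline numbers come from simple operations on a models-by-metrics score table: pairwise correlations between metric columns and their off-diagonal mean. Below is a sketch with a hypothetical random table of the shapes mentioned in the abstract; real Brain-Score data would be needed to reproduce the reported 0.198.

```python
import numpy as np

# Hypothetical score table: 80 models (rows) by 69 alignment metrics (columns).
rng = np.random.default_rng(0)
scores = rng.random((80, 69))

corr = np.corrcoef(scores, rowvar=False)            # metric-vs-metric correlations
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print(off_diag.mean())                              # mean pairwise correlation
```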
[ "['Jannis Ahlert' 'Thomas Klein' 'Felix Wichmann' 'Robert Geirhos']" ]
null
null
2407.07539
null
null
http://arxiv.org/pdf/2407.07539v1
2024-07-10T10:59:28Z
2024-07-10T10:59:28Z
Machine Unlearning for Medical Imaging
Machine unlearning is the process of removing the impact of a particular set of training samples from a pretrained model. It aims to fulfill the "right to be forgotten", which grants individuals, such as patients, the right to reconsider their contribution to models, including medical imaging models. In this study, we evaluate the effectiveness (performance) and computational efficiency of different unlearning algorithms in the medical imaging domain. Our evaluations demonstrate that the considered unlearning algorithms perform well on the retain set (samples whose influence on the model is allowed to be retained) and forget set (samples whose contribution to the model should be eliminated), and show no bias against male or female samples. They do, however, adversely impact the generalization of the model, especially for larger forget set sizes. Moreover, they might be biased against easy or hard samples, and need additional computational overhead for hyper-parameter tuning. In conclusion, machine unlearning seems promising for medical imaging, but the existing unlearning algorithms still need further improvements to become more practical for medical applications.
[ "['Reza Nasirigerdeh' 'Nader Razmi' 'Julia A. Schnabel' 'Daniel Rueckert'\n 'Georgios Kaissis']" ]
null
null
2407.07560
null
null
http://arxiv.org/pdf/2407.07560v1
2024-07-10T11:35:02Z
2024-07-10T11:35:02Z
Instrumentation and Analysis of Native ML Pipelines via Logical Query Plans
Machine Learning (ML) is increasingly used to automate impactful decisions, which leads to concerns regarding their correctness, reliability, and fairness. We envision highly-automated software platforms to assist data scientists with developing, validating, monitoring, and analysing their ML pipelines. In contrast to existing work, our key idea is to extract "logical query plans" from ML pipeline code relying on popular libraries. Based on these plans, we automatically infer pipeline semantics and instrument and rewrite the ML pipelines to enable diverse use cases without requiring data scientists to manually annotate or rewrite their code. First, we developed such an abstract ML pipeline representation together with machinery to extract it from Python code. Next, we used this representation to efficiently instrument static ML pipelines and apply provenance tracking, which enables lightweight screening for common data preparation issues. Finally, we built machinery to automatically rewrite ML pipelines to perform more advanced what-if analyses and proposed using multi-query optimisation for the resulting workloads. In future work, we aim to interactively assist data scientists as they work on their ML pipelines.
[ "['Stefan Grafberger']" ]
null
null
2407.07575
null
null
http://arxiv.org/pdf/2407.07575v1
2024-07-10T12:08:39Z
2024-07-10T12:08:39Z
Resource Allocation for Twin Maintenance and Computing Task Processing in Digital Twin Vehicular Edge Computing Network
As a promising technology, vehicular edge computing (VEC) can provide computing and caching services by deploying VEC servers near vehicles. However, VEC networks still face challenges such as high vehicle mobility. Digital twin (DT), an emerging technology, can predict, estimate, and analyze real-time states by digitally modeling objects in the physical world. By integrating DT with VEC, a virtual vehicle DT can be created in the VEC server to monitor the real-time operating status of vehicles. However, maintaining the vehicle DT model requires ongoing attention from the VEC server, which also needs to offer computing services for the vehicles. Therefore, effective allocation and scheduling of VEC server resources are crucial. This study focuses on a general VEC network with a single VEC server and multiple vehicles, examining the two types of delays caused by twin maintenance and computational processing within the network. By transforming the problem using satisfaction functions, we propose an optimization problem aimed at maximizing each vehicle's resource utility to determine the optimal resource allocation strategy. Given the non-convex nature of the problem, we employ multi-agent Markov decision processes to reformulate it. Subsequently, we propose the twin maintenance and computing task processing resource collaborative scheduling (MADRL-CSTC) algorithm, which leverages multi-agent deep reinforcement learning. Through experimental comparisons with alternative algorithms, it demonstrates that our proposed approach is effective in terms of resource allocation.
[ "['Yu Xie' 'Qiong Wu' 'Pingyi Fan' 'Nan Cheng' 'Wen Chen' 'Jiangzhou Wang'\n 'Khaled B. Letaief']" ]
null
null
2407.07586
null
null
http://arxiv.org/pdf/2407.07586v1
2024-07-10T12:18:38Z
2024-07-10T12:18:38Z
Simplifying Source-Free Domain Adaptation for Object Detection: Effective Self-Training Strategies and Performance Insights
This paper focuses on source-free domain adaptation for object detection in computer vision. This task is challenging and of great practical interest, due to the cost of obtaining annotated data sets for every new domain. Recent research has proposed various solutions for Source-Free Object Detection (SFOD), most being variations of teacher-student architectures with diverse feature alignment, regularization and pseudo-label selection strategies. Our work investigates simpler approaches and their performance compared to more complex SFOD methods in several adaptation scenarios. We highlight the importance of batch normalization layers in the detector backbone, and show that adapting only the batch statistics is a strong baseline for SFOD. We propose a simple extension of a Mean Teacher with strong-weak augmentation in the source-free setting, Source-Free Unbiased Teacher (SF-UT), and show that it actually outperforms most of the previous SFOD methods. Additionally, we showcase that an even simpler strategy consisting in training on a fixed set of pseudo-labels can achieve similar performance to the more complex teacher-student mutual learning, while being computationally efficient and mitigating the major issue of teacher-student collapse. We conduct experiments on several adaptation tasks using benchmark driving datasets including (Foggy)Cityscapes, Sim10k and KITTI, and achieve a notable improvement of 4.7% AP50 on Cityscapes$\rightarrow$Foggy-Cityscapes compared with the latest state-of-the-art in SFOD. Source code is available at https://github.com/EPFL-IMOS/simple-SFOD.
[ "['Yan Hao' 'Florent Forest' 'Olga Fink']" ]
null
null
2407.07596
null
null
http://arxiv.org/pdf/2407.07596v1
2024-07-10T12:29:46Z
2024-07-10T12:29:46Z
Learning treatment effects while treating those in need
Many social programs attempt to allocate scarce resources to people with the greatest need. Indeed, public services increasingly use algorithmic risk assessments motivated by this goal. However, targeting the highest-need recipients often conflicts with attempting to evaluate the causal effect of the program as a whole, as the best evaluations would be obtained by randomizing the allocation. We propose a framework to design randomized allocation rules which optimally balance targeting high-need individuals with learning treatment effects, presenting policymakers with a Pareto frontier between the two goals. We give sample complexity guarantees for the policy learning problem and provide a computationally efficient strategy to implement it. We then apply our framework to data from human services in Allegheny County, Pennsylvania. Optimized policies can substantially mitigate the tradeoff between learning and targeting. For example, it is often possible to obtain 90% of the optimal utility in targeting high-need individuals while ensuring that the average treatment effect can be estimated with less than 2 times the samples that a randomized controlled trial would require. Mechanisms for targeting public services often focus on measuring need as accurately as possible. However, our results suggest that algorithmic systems in public services can be most impactful if they incorporate program evaluation as an explicit goal alongside targeting.
[ "['Bryan Wilder' 'Pim Welle']" ]
null
null
2407.07598
null
null
http://arxiv.org/pdf/2407.07598v1
2024-07-10T12:31:53Z
2024-07-10T12:31:53Z
Targeted Augmented Data for Audio Deepfake Detection
The availability of highly convincing audio deepfake generators highlights the need for designing robust audio deepfake detectors. Existing works often rely solely on real and fake data available in the training set, which may lead to overfitting, thereby reducing the robustness to unseen manipulations. To enhance the generalization capabilities of audio deepfake detectors, we propose a novel augmentation method for generating audio pseudo-fakes targeting the decision boundary of the model. Inspired by adversarial attacks, we perturb original real data to synthesize pseudo-fakes with ambiguous prediction probabilities. Comprehensive experiments on two well-known architectures demonstrate that the proposed augmentation contributes to improving the generalization capabilities of these architectures.
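One way to read the augmentation is as a handful of gradient steps that push real samples toward ambiguous detector outputs. The sketch below assumes a logit-producing detector and a squared loss toward probability 0.5; the paper's exact perturbation objective and optimiser are not shown here, and the stand-in model is hypothetical.

```python
import torch

def make_pseudo_fakes(model, x_real, steps=10, lr=1e-3, target_p=0.5):
    """Perturb real samples toward ambiguous detector outputs (p ~ target_p)."""
    x = x_real.clone().requires_grad_(True)
    for _ in range(steps):
        p_fake = torch.sigmoid(model(x))             # detector outputs a logit
        loss = ((p_fake - target_p) ** 2).mean()     # assumed surrogate objective
        (grad,) = torch.autograd.grad(loss, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()

# Stand-in detector and a batch of 1-second waveforms at 16 kHz.
detector = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 1))
waves = torch.randn(4, 1, 16000)
print(make_pseudo_fakes(detector, waves).shape)      # torch.Size([4, 1, 16000])
```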
[ "['Marcella Astrid' 'Enjie Ghorbel' 'Djamila Aouada']" ]
null
null
2407.07611
null
null
http://arxiv.org/pdf/2407.07611v1
2024-07-10T12:50:43Z
2024-07-10T12:50:43Z
Physics-Informed Geometric Operators to Support Surrogate, Dimension Reduction and Generative Models for Engineering Design
In this work, we propose a set of physics-informed geometric operators (GOs) to enrich the geometric data provided for training surrogate/discriminative models, dimension reduction, and generative models, typically employed for performance prediction, dimension reduction, and creating data-driven parameterisations, respectively. However, as both the input and output streams of these models consist of low-level shape representations, they often fail to capture shape characteristics essential for performance analyses. Therefore, the proposed GOs exploit the differential and integral properties of shapes--accessed through Fourier descriptors, curvature integrals, geometric moments, and their invariants--to infuse high-level intrinsic geometric information and physics into the feature vector used for training, even when employing simple model architectures or low-level parametric descriptions. We show that for surrogate modelling, along with the inclusion of the notion of physics, GOs enact regularisation to reduce over-fitting and enhance generalisation to new, unseen designs. Furthermore, through extensive experimentation, we demonstrate that for dimension reduction and generative models, incorporating the proposed GOs enriches the training data with compact global and local geometric features. This significantly enhances the quality of the resulting latent space, thereby facilitating the generation of valid and diverse designs. Lastly, we also show that GOs can enable learning parametric sensitivities to a great extent. Consequently, these enhancements accelerate the convergence rate of shape optimisers towards optimal solutions.
[ "['Shahroz Khan' 'Zahid Masood' 'Muhammad Usama' 'Konstantinos Kostas'\n 'Panagiotis Kaklis' 'Wei' 'Chen']" ]
null
null
2407.07612
null
null
http://arxiv.org/pdf/2407.07612v1
2024-07-10T12:50:44Z
2024-07-10T12:50:44Z
Teaching Transformers Causal Reasoning through Axiomatic Training
For text-based AI systems to interact in the real world, causal reasoning is an essential skill. Since interventional data is costly to generate, we study to what extent an agent can learn causal reasoning from passive data. Specifically, we consider an axiomatic training setup where an agent learns from multiple demonstrations of a causal axiom (or rule), rather than incorporating the axiom as an inductive bias or inferring it from data values. A key question is whether the agent would learn to generalize from the axiom demonstrations to new scenarios. For example, if a transformer model is trained on demonstrations of the causal transitivity axiom over small graphs, would it generalize to applying the transitivity axiom over large graphs? Our results, based on a novel axiomatic training scheme, indicate that such generalization is possible. We consider the task of inferring whether a variable causes another variable, given a causal graph structure. We find that a 67 million parameter transformer model, when trained on linear causal chains (along with some noisy variations) can generalize well to new kinds of graphs, including longer causal chains, causal chains with reversed order, and graphs with branching; even when it is not explicitly trained for such settings. Our model performs at par (or even better) than many larger language models such as GPT-4, Gemini Pro, and Phi-3. Overall, our axiomatic training framework provides a new paradigm of learning causal reasoning from passive data that can be used to learn arbitrary axioms, as long as sufficient demonstrations can be generated.
[ "['Aniket Vashishtha' 'Abhinav Kumar' 'Abbavaram Gowtham Reddy'\n 'Vineeth N Balasubramanian' 'Amit Sharma']" ]
null
null
2407.07613
null
null
http://arxiv.org/pdf/2407.07613v1
2024-07-10T12:52:24Z
2024-07-10T12:52:24Z
Probabilistic learning rate scheduler with provable convergence
Learning rate schedulers have shown great success in speeding up the convergence of learning algorithms in practice. However, their convergence to a minimum has not been proven theoretically. This difficulty mainly arises from the fact that, while traditional convergence analysis prescribes monotonically decreasing (or constant) learning rates, schedulers opt for rates that often increase and decrease through the training epochs. In this work, we aim to bridge the gap by proposing a probabilistic learning rate scheduler (PLRS), which does not conform to the monotonically decreasing condition, with provable convergence guarantees. In addition to providing detailed convergence proofs, we also show experimental results where the proposed PLRS performs competitively with other state-of-the-art learning rate schedulers across a variety of datasets and architectures.
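For intuition only, a toy scheduler in this spirit samples each epoch's rate around a slowly decaying mean, so rates can rise as well as fall; the distribution, constants, and the paper's actual PLRS construction and proofs are not represented here.

```python
import numpy as np

def plrs_rate(epoch, base_lr=0.1, decay=0.97, jitter=0.5, rng=None):
    """Sample an epoch's learning rate around a slowly decaying mean, so the
    schedule is non-monotone: rates may increase as well as decrease."""
    if rng is None:
        rng = np.random.default_rng()
    mean = base_lr * decay ** epoch
    return float(mean * np.exp(jitter * rng.standard_normal()))

rng = np.random.default_rng(0)
print([round(plrs_rate(e, rng=rng), 4) for e in range(6)])
```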
[ "['Dahlia Devapriya' 'Thulasi Tholeti' 'Janani Suresh' 'Sheetal Kalyani']" ]
null
null
2407.07631
null
null
http://arxiv.org/pdf/2407.07631v1
2024-07-10T13:09:52Z
2024-07-10T13:09:52Z
Pessimism Meets Risk: Risk-Sensitive Offline Reinforcement Learning
We study risk-sensitive reinforcement learning (RL), a crucial field due to its ability to enhance decision-making in scenarios where it is essential to manage uncertainty and minimize potential adverse outcomes. Particularly, our work focuses on applying the entropic risk measure to RL problems. While existing literature primarily investigates the online setting, there remains a large gap in understanding how to efficiently derive a near-optimal policy based on this risk measure using only a pre-collected dataset. We center on the linear Markov Decision Process (MDP) setting, a well-regarded theoretical framework that has yet to be examined from a risk-sensitive standpoint. In response, we introduce two provably sample-efficient algorithms. We begin by presenting a risk-sensitive pessimistic value iteration algorithm, offering a tight analysis by leveraging the structure of the risk-sensitive performance measure. To further improve the obtained bounds, we propose another pessimistic algorithm that utilizes variance information and reference-advantage decomposition, effectively improving both the dependence on the space dimension $d$ and the risk-sensitivity factor. To the best of our knowledge, we obtain the first provably efficient risk-sensitive offline RL algorithms.
[ "['Dake Zhang' 'Boxiang Lyu' 'Shuang Qiu' 'Mladen Kolar' 'Tong Zhang']" ]
null
null
2407.07636
null
null
http://arxiv.org/abs/2407.07636v1
2024-07-10T13:16:12Z
2024-07-10T13:16:12Z
MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations
Shared dynamics models are important for capturing the complexity and variability inherent in Human-Robot Interaction (HRI). Therefore, learning such shared dynamics models can enhance coordination and adaptability to enable successful reactive interactions with a human partner. In this work, we propose a novel approach for learning a shared latent space representation for HRIs from demonstrations in a Mixture of Experts fashion for reactively generating robot actions from human observations. We train a Variational Autoencoder (VAE) to learn robot motions regularized using an informative latent space prior that captures the multimodality of the human observations via a Mixture Density Network (MDN). We show how our formulation derives from a Gaussian Mixture Regression formulation that is typically used in approaches for learning HRI from demonstrations, such as using an HMM/GMM for learning a joint distribution over the actions of the human and the robot. We further incorporate an additional regularization to prevent "mode collapse", a common phenomenon when using latent space mixture models with VAEs. We find that our approach of using an informative MDN prior from human observations for a VAE generates more accurate robot motions compared to previous HMM-based or recurrent approaches of learning shared latent representations, which we validate on various HRI datasets involving interactions such as handshakes, fistbumps, waving, and handovers. Further experiments in a real-world human-to-robot handover scenario show the efficacy of our approach for generating successful interactions with four different human interaction partners.
[ "['Vignesh Prasad' 'Alap Kshirsagar' 'Dorothea Koert' 'Ruth Stock-Homburg'\n 'Jan Peters' 'Georgia Chalvatzaki']" ]
null
null
2407.07639
null
null
http://arxiv.org/pdf/2407.07639v1
2024-07-10T13:20:47Z
2024-07-10T13:20:47Z
Explaining Graph Neural Networks for Node Similarity on Graphs
Similarity search is a fundamental task for exploiting information in various applications dealing with graph data, such as citation networks or knowledge graphs. While this task has been intensively approached from heuristics to graph embeddings and graph neural networks (GNNs), providing explanations for similarity has received less attention. In this work we are concerned with explainable similarity search over graphs, by investigating how GNN-based methods for computing node similarities can be augmented with explanations. Specifically, we evaluate the performance of two prominent approaches towards explanations in GNNs, based on the concepts of mutual information (MI), and gradient-based explanations (GB). We discuss their suitability and empirically validate the properties of their explanations over different popular graph benchmarks. We find that unlike MI explanations, gradient-based explanations have three desirable properties. First, they are actionable: selecting inputs depending on them results in predictable changes in similarity scores. Second, they are consistent: the effect of selecting certain inputs overlaps very little with the effect of discarding them. Third, they can be pruned significantly to obtain sparse explanations that retain the effect on similarity scores.
[ "['Daniel Daza' 'Cuong Xuan Chu' 'Trung-Kien Tran' 'Daria Stepanova'\n 'Michael Cochez' 'Paul Groth']" ]
null
null
2407.07655
null
null
http://arxiv.org/pdf/2407.07655v1
2024-07-10T13:35:04Z
2024-07-10T13:35:04Z
The Selective G-Bispectrum and its Inversion: Applications to G-Invariant Networks
An important problem in signal processing and deep learning is to achieve \textit{invariance} to nuisance factors not relevant for the task. Since many of these factors are describable as the action of a group $G$ (e.g. rotations, translations, scalings), we want methods to be $G$-invariant. The $G$-Bispectrum extracts every characteristic of a given signal up to group action: for example, the shape of an object in an image, but not its orientation. Consequently, the $G$-Bispectrum has been incorporated into deep neural network architectures as a computational primitive for $G$-invariance\textemdash akin to a pooling mechanism, but with greater selectivity and robustness. However, the computational cost of the $G$-Bispectrum ($\mathcal{O}(|G|^2)$, with $|G|$ the size of the group) has limited its widespread adoption. Here, we show that the $G$-Bispectrum computation contains redundancies that can be reduced into a \textit{selective $G$-Bispectrum} with $\mathcal{O}(|G|)$ complexity. We prove desirable mathematical properties of the selective $G$-Bispectrum and demonstrate how its integration in neural networks enhances accuracy and robustness compared to traditional approaches, while enjoying considerable speed-ups compared to the full $G$-Bispectrum.
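For the cyclic group $\mathbb{Z}_n$, the bispectrum is computable directly from the FFT, which makes the $\mathcal{O}(|G|^2)$-to-$\mathcal{O}(|G|)$ reduction easy to see. The sketch below keeps a single row of coefficients as its "selective" variant; the selection rule the paper proves complete may differ from this simple choice.

```python
import numpy as np

def full_bispectrum(x):
    """All |G|^2 bispectral coefficients for the cyclic group Z_n:
    B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2])."""
    f = np.fft.fft(x)
    k = np.arange(len(x))
    return f[:, None] * f[None, :] * np.conj(f[(k[:, None] + k[None, :]) % len(x)])

def selective_bispectrum(x):
    """A single row, B[1, k] for all k: only |G| coefficients."""
    f = np.fft.fft(x)
    return f[1] * f * np.conj(np.roll(f, -1))   # f[1] * f[k] * conj(f[k+1])

rng = np.random.default_rng(0)
x = rng.random(16)
shifted = np.roll(x, 5)
# Both variants are invariant to cyclic shifts of the input:
print(np.allclose(full_bispectrum(x), full_bispectrum(shifted)),
      np.allclose(selective_bispectrum(x), selective_bispectrum(shifted)))
```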
[ "['Simon Mataigne' 'Johan Mathe' 'Sophia Sanborn' 'Christopher Hillar'\n 'Nina Miolane']" ]
null
null
2407.07664
null
null
http://arxiv.org/pdf/2407.07664v1
2024-07-10T13:44:19Z
2024-07-10T13:44:19Z
A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry
Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations to class separation in a scale invariant and known geometry. Previous approaches to HPL have either of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but are constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal.
[ "['Martin Lindström' 'Borja Rodríguez-Gálvez' 'Ragnar Thobaben'\n 'Mikael Skoglund']" ]
null
null
2407.07668
null
null
http://arxiv.org/pdf/2407.07668v1
2024-07-10T13:51:15Z
2024-07-10T13:51:15Z
How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online Continual Learning
Many real-world applications require machine-learning models to be able to deal with non-stationary data distributions and thus learn autonomously over an extended period of time, often in an online setting. One of the main challenges in this scenario is the so-called catastrophic forgetting (CF) for which the learning model tends to focus on the most recent tasks while experiencing predictive degradation on older ones. In the online setting, the most effective solutions employ a fixed-size memory buffer to store old samples used for replay when training on new tasks. Many approaches have been presented to tackle this problem. However, it is not clear how predictive uncertainty information for memory management can be leveraged in the most effective manner and conflicting strategies are proposed to populate the memory. Are the easiest-to-forget or the easiest-to-remember samples more effective in combating CF? Starting from the intuition that predictive uncertainty provides an idea of the samples' location in the decision space, this work presents an in-depth analysis of different uncertainty estimates and strategies for populating the memory. The investigation provides a better understanding of the characteristics data points should have for alleviating CF. Then, we propose an alternative method for estimating predictive uncertainty via the generalised variance induced by the negative log-likelihood. Finally, we demonstrate that the use of predictive uncertainty measures helps in reducing CF in different settings.
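The two conflicting memory-population strategies can be sketched with a single uncertainty score: keep the most-uncertain (easiest-to-forget) or the most-confident (easiest-to-remember) samples. Below, uncertainty is plain predictive entropy; the paper's preferred generalised-variance estimate via the negative log-likelihood is not implemented here.

```python
import numpy as np

def populate_memory(probs, capacity, keep="hardest"):
    """Select replay-buffer indices by predictive uncertainty (entropy here).
    keep="hardest" stores the most uncertain (easiest-to-forget) samples,
    keep="easiest" the most confident (easiest-to-remember) ones."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    order = np.argsort(entropy)
    return order[-capacity:] if keep == "hardest" else order[:capacity]

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(populate_memory(probs, capacity=64)[:5])
```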
[ "['Giuseppe Serra' 'Ben Werner' 'Florian Buettner']" ]
null
null
2407.07670
null
null
http://arxiv.org/pdf/2407.07670v1
2024-07-10T13:58:57Z
2024-07-10T13:58:57Z
Stochastic Gradient Descent for Two-layer Neural Networks
This paper presents a comprehensive study on the convergence rates of the stochastic gradient descent (SGD) algorithm when applied to overparameterized two-layer neural networks. Our approach combines the Neural Tangent Kernel (NTK) approximation with convergence analysis in the Reproducing Kernel Hilbert Space (RKHS) generated by NTK, aiming to provide a deep understanding of the convergence behavior of SGD in overparameterized two-layer neural networks. Our research framework enables us to explore the intricate interplay between kernel methods and optimization processes, shedding light on the optimization dynamics and convergence properties of neural networks. In this study, we establish sharp convergence rates for the last iterate of the SGD algorithm in overparameterized two-layer neural networks. Additionally, we have made significant advancements in relaxing the constraints on the number of neurons, which have been reduced from exponential dependence to polynomial dependence on the sample size or number of iterations. This improvement allows for more flexibility in the design and scaling of neural networks, and will deepen our theoretical understanding of neural network models trained with SGD.
[ "['Dinghao Cao' 'Zheng-Chu Guo' 'Lei Shi']" ]
null
null
2407.07674
null
null
http://arxiv.org/pdf/2407.07674v2
2024-07-12T15:10:53Z
2024-07-10T14:00:20Z
Feasibility Study on Active Learning of Smart Surrogates for Scientific Simulations
High-performance scientific simulations, important for comprehension of complex systems, encounter computational challenges especially when exploring extensive parameter spaces. There has been an increasing interest in developing deep neural networks (DNNs) as surrogate models capable of accelerating the simulations. However, existing approaches for training these DNN surrogates rely on extensive simulation data which are heuristically selected and generated with expensive computation -- a challenge under-explored in the literature. In this paper, we investigate the potential of incorporating active learning into DNN surrogate training. This allows intelligent and objective selection of training simulations, reducing the need to generate extensive simulation data as well as the dependency of the performance of DNN surrogates on pre-defined training simulations. In the problem context of constructing DNN surrogates for diffusion equations with sources, we examine the efficacy of diversity- and uncertainty-based strategies for selecting training simulations, considering two different DNN architectures. The results set the groundwork for developing the high-performance computing infrastructure for Smart Surrogates that supports on-the-fly generation of simulation data steered by active learning strategies to potentially improve the efficiency of scientific simulations.
[ "['Pradeep Bajracharya' 'Javier Quetzalcóatl Toledo-Marín' 'Geoffrey Fox'\n 'Shantenu Jha' 'Linwei Wang']" ]
null
null
2407.07684
null
null
http://arxiv.org/pdf/2407.07684v1
2024-07-10T14:08:27Z
2024-07-10T14:08:27Z
Towards Human-Like Driving: Active Inference in Autonomous Vehicle Control
This paper presents a novel approach to Autonomous Vehicle (AV) control through the application of active inference, a theory derived from neuroscience that conceptualizes the brain as a predictive machine. Traditional autonomous driving systems rely heavily on Modular Pipelines, Imitation Learning, or Reinforcement Learning, each with inherent limitations in adaptability, generalization, and computational efficiency. Active inference addresses these challenges by minimizing prediction error (termed "surprise") through a dynamic model that balances perception and action. Our method integrates active inference with deep learning to manage lateral control in AVs, enabling them to perform lane following maneuvers within a simulated urban environment. We demonstrate that our model, despite its simplicity, effectively learns and generalizes from limited data without extensive retraining, significantly reducing computational demands. The proposed approach not only enhances the adaptability and performance of AVs in dynamic scenarios but also aligns closely with human-like driving behavior, leveraging a generative model to predict and adapt to environmental changes. Results from extensive experiments in the CARLA simulator show promising outcomes, outperforming traditional methods in terms of adaptability and efficiency, thereby advancing the potential of active inference in real-world autonomous driving applications.
[ "['Elahe Delavari' 'John Moore' 'Junho Hong' 'Jaerock Kwon']" ]
null
null
2407.07700
null
null
http://arxiv.org/pdf/2407.07700v1
2024-07-10T14:33:28Z
2024-07-10T14:33:28Z
Split Conformal Prediction under Data Contamination
Conformal prediction is a non-parametric technique for constructing prediction intervals or sets from arbitrary predictive models under the assumption that the data is exchangeable. It is popular as it comes with theoretical guarantees on the marginal coverage of the prediction sets and the split conformal prediction variant has a very low computational cost compared to model training. We study the robustness of split conformal prediction in a data contamination setting, where we assume a small fraction of the calibration scores are drawn from a different distribution than the bulk. We quantify the impact of the corrupted data on the coverage and efficiency of the constructed sets when evaluated on "clean" test points, and verify our results with numerical experiments. Moreover, we propose an adjustment in the classification setting which we call Contamination Robust Conformal Prediction, and verify the efficacy of our approach using both synthetic and real datasets.
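For concreteness, the split conformal recipe is just a corrected quantile of held-out residuals; the snippet below also contaminates a fraction of the calibration scores, in the spirit of the paper's setting (the distributions and contamination fraction are assumptions).

```python
import numpy as np

def split_conformal_halfwidth(cal_residuals, alpha=0.1):
    """Half-width q such that [model(x) - q, model(x) + q] has marginal
    coverage >= 1 - alpha under exchangeability."""
    n = len(cal_residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    return np.quantile(cal_residuals, level, method="higher")

rng = np.random.default_rng(0)
residuals = np.abs(rng.normal(0, 1, size=500))      # clean calibration scores
q_clean = split_conformal_halfwidth(residuals)
residuals[:25] = np.abs(rng.normal(0, 5, size=25))  # contaminate 5% of scores
q_contam = split_conformal_halfwidth(residuals)
print(q_clean, q_contam)                            # contamination inflates q
```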
[ "['Jase Clarkson' 'Wenkai Xu' 'Mihai Cucuringu' 'Gesine Reinert']" ]
null
null
2407.07705
null
null
http://arxiv.org/pdf/2407.07705v1
2024-06-23T17:25:48Z
2024-06-23T17:25:48Z
Field-Enhanced Filtering in MIMO Learned Volterra Nonlinear Equalisation of Multi-Wavelength Systems
We propose a novel MIMO-WDM Volterra-based nonlinear-equalisation scheme with adaptive time-domain nonlinear stages enhanced by filtering in both the power and optical signal waveforms. This approach efficiently captures the interplay between dispersion and non-linearity in each step, leading to $46\%$ complexity reduction for $9\times 9$-MIMO operation.
[ "['Nelson Castro' 'Sonia Boscolo' 'Andrew D. Ellis' 'Stylianos Sygletos']" ]
null
null
2407.07708
null
null
http://arxiv.org/pdf/2407.07708v1
2024-06-03T08:20:06Z
2024-06-03T08:20:06Z
Joint Constellation Shaping Using Gradient Descent Approach for MU-MIMO Broadcast Channel
We introduce a learning-based approach to optimize a joint constellation for a multi-user MIMO broadcast channel ($T$ Tx antennas, $K$ users, each with $R$ Rx antennas), with perfect channel knowledge. The aim of the optimizer (MAX-MIN) is to maximize the minimum mutual information between the transmitter and each receiver, under a sum-power constraint. The proposed optimization method neither requires the transmitter to use superposition coding (SC) or any other linear precoding, nor the receiver to use successive interference cancellation (SIC). Instead, the approach designs a joint constellation, optimized such that its projection into the subspace of each receiver $k$ maximizes the minimum mutual information $I(W_k;Y_k)$ between each transmitted binary input $W_k$ and the output signal at the intended receiver $Y_k$. The rates obtained by our method are compared to those achieved with linear precoders.
[ "['Maxime Vaillant' 'Alix Jeannerot' 'Jean-Marie Gorce']" ]
null
null
2407.07712
null
null
http://arxiv.org/pdf/2407.07712v1
2024-07-10T14:44:25Z
2024-07-10T14:44:25Z
Deep-Graph-Sprints: Accelerated Representation Learning in Continuous-Time Dynamic Graphs
Continuous-time dynamic graphs (CTDGs) are essential for modeling interconnected, evolving systems. Traditional methods for extracting knowledge from these graphs often depend on feature engineering or deep learning. Feature engineering is limited by the manual and time-intensive nature of crafting features, while deep learning approaches suffer from high inference latency, making them impractical for real-time applications. This paper introduces Deep-Graph-Sprints (DGS), a novel deep learning architecture designed for efficient representation learning on CTDGs with low-latency inference requirements. We benchmark DGS against state-of-the-art feature engineering and graph neural network methods using five diverse datasets. The results indicate that DGS achieves competitive performance while improving inference speed by up to 12x compared to other deep learning approaches on our tested benchmarks. Our method effectively bridges the gap between deep representation learning and low-latency application requirements for CTDGs.
[ "['Ahmad Naser Eddin' 'Jacopo Bono' 'David Aparício' 'Hugo Ferreira'\n 'Pedro Ribeiro' 'Pedro Bizarro']" ]
null
null
2407.07713
null
null
http://arxiv.org/pdf/2407.07713v1
2024-06-09T00:17:33Z
2024-06-09T00:17:33Z
Data-Driven Radio Environment Map Estimation Using Graph Neural Networks
Radio Environment Maps (REMs) are crucial for numerous applications in telecom, and the construction of accurate REMs has become an important and challenging topic in recent decades. In this paper, we present a method to estimate REMs using Graph Neural Networks. This approach utilizes both physical cell information and sparse geo-located signal strength measurements to estimate REMs. The method first divides and encodes mobile network coverage areas into a graph. Then, it feeds sparse geo-located signal strength measurements, characterized by Reference Signal Received Power (RSRP) and Reference Signal Received Quality (RSRQ) metrics, into a Graph Neural Network model to estimate REMs. The proposed architecture inherits the advantages of a Graph Neural Network in capturing the spatial dependencies of network-wide coverage with respect to Radio Access Network node locations and the spatial proximity of known measurements.
[ "['Ali Shibli' 'Tahar Zanouda']" ]
null
null
2407.07719
null
null
http://arxiv.org/pdf/2407.07719v2
2024-07-15T06:54:53Z
2024-06-17T13:09:25Z
Model-based learning for multi-antenna multi-frequency location-to-channel mapping
Years of study of the propagation channel showed a close relation between a location and the associated communication channel response. The use of a neural network to learn the location-to-channel mapping can therefore be envisioned. The Implicit Neural Representation (INR) literature showed that classical neural architectures are biased towards learning low-frequency content, making the learning of the location-to-channel mapping a non-trivial problem. Indeed, it is well known that this mapping is a function rapidly varying with the location, on the order of the wavelength. This paper leverages the model-based machine learning paradigm to derive a problem-specific neural architecture from a propagation channel model. The resulting architecture efficiently overcomes the spectral-bias issue. It only learns low-frequency sparse correction terms activating a dictionary of high-frequency components. The proposed architecture is evaluated against classical INR architectures on realistic synthetic data, showing much better accuracy. Its mapping learning performance is explained based on the approximated channel model, highlighting the explainability of the model-based machine learning paradigm.
[ "['Baptiste Chatelier' 'Vincent Corlay' 'Matthieu Crussière'\n 'Luc Le Magoarou']" ]
null
null
2407.07726
null
null
http://arxiv.org/pdf/2407.07726v1
2024-07-10T14:57:46Z
2024-07-10T14:57:46Z
PaliGemma: A versatile 3B VLM for transfer
PaliGemma is an open Vision-Language Model (VLM) that is based on the SigLIP-So400m vision encoder and the Gemma-2B language model. It is trained to be a versatile and broadly knowledgeable base model that is effective to transfer. It achieves strong performance on a wide variety of open-world tasks. We evaluate PaliGemma on almost 40 diverse tasks including standard VLM benchmarks, but also more specialized tasks such as remote-sensing and segmentation.
[ "['Lucas Beyer' 'Andreas Steiner' 'André Susano Pinto'\n 'Alexander Kolesnikov' 'Xiao Wang' 'Daniel Salz' 'Maxim Neumann'\n 'Ibrahim Alabdulmohsin' 'Michael Tschannen' 'Emanuele Bugliarello'\n 'Thomas Unterthiner' 'Daniel Keysers' 'Skanda Koppula' 'Fangyu Liu'\n 'Adam Grycner' 'Alexey Gritsenko' 'Neil Houlsby' 'Manoj Kumar'\n 'Keran Rong' 'Julian Eisenschlos' 'Rishabh Kabra' 'Matthias Bauer'\n 'Matko Bošnjak' 'Xi Chen' 'Matthias Minderer' 'Paul Voigtlaender'\n 'Ioana Bica' 'Ivana Balazevic' 'Joan Puigcerver' 'Pinelopi Papalampidi'\n 'Olivier Henaff' 'Xi Xiong' 'Radu Soricut' 'Jeremiah Harmsen'\n 'Xiaohua Zhai']" ]
null
null
2407.07737
null
null
http://arxiv.org/pdf/2407.07737v1
2024-07-10T15:07:58Z
2024-07-10T15:07:58Z
Fine-Tuning Large Language Models with User-Level Differential Privacy
We investigate practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP) in order to provably safeguard all the examples contributed by each user. We study two variants of DP-SGD with: (1) example-level sampling (ELS) and per-example gradient clipping, and (2) user-level sampling (ULS) and per-user gradient clipping. We derive a novel user-level DP accountant that allows us to compute provably tight privacy guarantees for ELS. Using this, we show that while ELS can outperform ULS in specific settings, ULS generally yields better results when each user has a diverse collection of examples. We validate our findings through experiments in synthetic mean estimation and LLM fine-tuning tasks under fixed compute budgets. We find that ULS is significantly better in settings where either (1) strong privacy guarantees are required, or (2) the compute budget is large. Notably, our focus on LLM-compatible training algorithms allows us to scale to models with hundreds of millions of parameters and datasets with hundreds of thousands of users.
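The example-level variant's core step can be sketched in a few lines: clip each example's gradient, sum, and add Gaussian noise (user-level sampling would instead clip each user's aggregated update). The shapes and constants below are illustrative, not the paper's settings.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.0, seed=0):
    """Clip each example's gradient to L2 norm <= clip_norm, sum, add
    Gaussian noise scaled by noise_mult * clip_norm, and average."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=per_example_grads.shape[1])
    return noisy_sum / len(per_example_grads)

grads = np.random.default_rng(1).normal(size=(32, 10))   # a batch of 32 examples
print(dp_sgd_step(grads).shape)                          # (10,)
```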
[ "['Zachary Charles' 'Arun Ganesh' 'Ryan McKenna' 'H. Brendan McMahan'\n 'Nicole Mitchell' 'Krishna Pillutla' 'Keith Rush']" ]
null
null
2407.07739
null
null
http://arxiv.org/pdf/2407.07739v1
2024-07-05T06:23:01Z
2024-07-05T06:23:01Z
UAV-assisted Unbiased Hierarchical Federated Learning: Performance and Convergence Analysis
The development of the sixth generation (6G) of wireless networks is bound to streamline the transition of computation and learning towards the edge of the network. Hierarchical federated learning (HFL) therefore becomes a key paradigm to distribute learning across edge devices to reach global intelligence. In HFL, each edge device trains a local model using its respective data and transmits the updated model parameters to an edge server for local aggregation. The edge server then transmits the locally aggregated parameters to a central server for global model aggregation. The unreliability of communication channels at the edge and backhaul links, however, remains a bottleneck in assessing the true benefit of HFL-empowered systems. To this end, this paper proposes an unbiased HFL algorithm for unmanned aerial vehicle (UAV)-assisted wireless networks that counteracts the impact of unreliable channels by adjusting the update weights during local and global aggregations at UAVs and terrestrial base stations (BS), respectively. To best characterize the unreliability of the channels involved in HFL, we adopt tools from stochastic geometry to determine the success probabilities of the local and global model parameter transmissions. Accounting for such metrics in the proposed HFL algorithm aims at removing the bias towards devices with better channel conditions in the context of the considered UAV-assisted network. The paper further examines the theoretical convergence guarantee of the proposed unbiased UAV-assisted HFL algorithm under adverse channel conditions. An additional benefit of the developed approach is that it allows for optimizing and designing the system parameters, e.g., the number of UAVs and their corresponding heights. The results particularly highlight the effectiveness of the proposed unbiased HFL scheme as compared to conventional FL and HFL algorithms.
[ "['Ruslan Zhagypar' 'Nour Kouzayha' 'Hesham ElSawy' 'Hayssam Dahrouj'\n 'Tareq Y. Al-Naffouri']" ]
null
null
2407.07742
null
null
http://arxiv.org/pdf/2407.07742v1
2024-06-29T02:35:39Z
2024-06-29T02:35:39Z
Science-Informed Deep Learning (ScIDL) With Applications to Wireless Communications
Given the extensive and growing capabilities offered by deep learning (DL), more researchers are turning to DL to address complex challenges in next-generation (xG) communications. However, despite its progress, DL also reveals several limitations that are becoming increasingly evident. One significant issue is its lack of interpretability, which is especially critical for safety-sensitive applications. Another significant consideration is that DL may not comply with the constraints set by physical laws or given security standards, which are essential for reliable DL. Additionally, DL models often struggle outside their training data distributions, which is known as poor generalization. Moreover, there is a scarcity of theoretical guidance on designing DL algorithms. These challenges have prompted the emergence of a burgeoning field known as science-informed DL (ScIDL). ScIDL aims to integrate existing scientific knowledge with DL techniques to develop more powerful algorithms. The core objective of this article is to provide a brief tutorial on ScIDL that illustrates its building blocks and distinguishes it from conventional DL. Furthermore, we discuss both recent applications of ScIDL and potential future research directions in the field of wireless communications.
[ "['Atefeh Termehchi' 'Ekram Hossain' 'Isaac Woungang']" ]
null
null
2407.07765
null
null
http://arxiv.org/pdf/2407.07765v1
2024-07-10T15:43:30Z
2024-07-10T15:43:30Z
Ramsey Theorems for Trees and a General 'Private Learning Implies Online Learning' Theorem
This work continues to investigate the link between differentially private (DP) learning and online learning. Alon, Livni, Malliaris, and Moran (2019) showed that for binary concept classes, DP learnability of a given class implies that it has a finite Littlestone dimension (equivalently, that it is online learnable). Their proof relies on a model-theoretic result by Hodges (1997), which demonstrates that any binary concept class with a large Littlestone dimension contains a large subclass of thresholds. In a follow-up work, Jung, Kim, and Tewari (2020) extended this proof to multiclass PAC learning with a bounded number of labels. Unfortunately, Hodges's result does not apply in other natural settings such as multiclass PAC learning with an unbounded label space, and PAC learning of partial concept classes. This naturally raises the question of whether DP learnability continues to imply online learnability in more general scenarios: indeed, Alon, Hanneke, Holzman, and Moran (2021) explicitly leave it as an open question in the context of partial concept classes, and the same question is open in the general multiclass setting. In this work, we give a positive answer to these questions, showing that for general classification tasks, DP learnability implies online learnability. Our proof reasons directly about Littlestone trees, without relying on thresholds. We achieve this by establishing several Ramsey-type theorems for trees, which might be of independent interest.
[ "['Simone Fioravanti' 'Steve Hanneke' 'Shay Moran' 'Hilla Schefler'\n 'Iska Tsubari']" ]
null
null
2407.07787
null
null
http://arxiv.org/pdf/2407.07787v1
2024-07-10T16:04:08Z
2024-07-10T16:04:08Z
Continuous Control with Coarse-to-fine Reinforcement Learning
Despite recent advances in improving the sample-efficiency of reinforcement learning (RL) algorithms, designing an RL algorithm that can be practically deployed in real-world environments remains a challenge. In this paper, we present Coarse-to-fine Reinforcement Learning (CRL), a framework that trains RL agents to zoom-into a continuous action space in a coarse-to-fine manner, enabling the use of stable, sample-efficient value-based RL algorithms for fine-grained continuous control tasks. Our key idea is to train agents that output actions by iterating the procedure of (i) discretizing the continuous action space into multiple intervals and (ii) selecting the interval with the highest Q-value to further discretize at the next level. We then introduce a concrete, value-based algorithm within the CRL framework called Coarse-to-fine Q-Network (CQN). Our experiments demonstrate that CQN significantly outperforms RL and behavior cloning baselines on 20 sparsely-rewarded RLBench manipulation tasks with a modest number of environment interactions and expert demonstrations. We also show that CQN robustly learns to solve real-world manipulation tasks within a few minutes of online training.
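To make the zoom-in procedure concrete, here is a minimal one-dimensional sketch of coarse-to-fine action selection; the number of levels and bins, the midpoint evaluation, and the toy Q-function are illustrative assumptions rather than CQN's actual network.

```python
import numpy as np

def coarse_to_fine_action(q_fn, low, high, levels=3, bins=5):
    """Pick a continuous action by recursive discretization: at each level,
    split the current interval into `bins` sub-intervals, evaluate Q at
    their midpoints, and zoom into the highest-valued interval."""
    for level in range(levels):
        edges = np.linspace(low, high, bins + 1)
        mids = (edges[:-1] + edges[1:]) / 2.0
        best = int(np.argmax([q_fn(m, level) for m in mids]))
        low, high = edges[best], edges[best + 1]
    return (low + high) / 2.0

# Toy Q-function peaked at a = 0.3, for illustration only.
print(coarse_to_fine_action(lambda a, level: -(a - 0.3) ** 2, -1.0, 1.0))
```

With 3 levels of 5 bins the agent distinguishes 125 actions while only ever comparing 5 Q-values at a time, which is what lets a value-based method handle a continuous action space.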
[ "['Younggyo Seo' 'Jafar Uruç' 'Stephen James']" ]
null
null
2407.07788
null
null
http://arxiv.org/pdf/2407.07788v2
2024-07-11T16:26:09Z
2024-07-10T16:04:18Z
BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark
We introduce BiGym, a new benchmark and learning environment for mobile bi-manual demo-driven robotic manipulation. BiGym features 40 diverse tasks set in home environments, ranging from simple target reaching to complex kitchen cleaning. To capture the real-world performance accurately, we provide human-collected demonstrations for each task, reflecting the diverse modalities found in real-world robot trajectories. BiGym supports a variety of observations, including proprioceptive data and visual inputs such as RGB, and depth from 3 camera views. To validate the usability of BiGym, we thoroughly benchmark the state-of-the-art imitation learning algorithms and demo-driven reinforcement learning algorithms within the environment and discuss the future opportunities.
[ "['Nikita Chernyadev' 'Nicholas Backshall' 'Xiao Ma' 'Yunfan Lu'\n 'Younggyo Seo' 'Stephen James']" ]
null
null
2407.07794
null
null
http://arxiv.org/pdf/2407.07794v1
2024-07-10T16:12:09Z
2024-07-10T16:12:09Z
Reinforcement Learning of Adaptive Acquisition Policies for Inverse Problems
A promising way to mitigate the expensive process of obtaining a high-dimensional signal is to acquire a limited number of low-dimensional measurements and solve an under-determined inverse problem by utilizing the structural prior about the signal. In this paper, we focus on adaptive acquisition schemes to save further the number of measurements. To this end, we propose a reinforcement learning-based approach that sequentially collects measurements to better recover the underlying signal by acquiring fewer measurements. Our approach applies to general inverse problems with continuous action spaces and jointly learns the recovery algorithm. Using insights obtained from theoretical analysis, we also provide a probabilistic design for our methods using variational formulation. We evaluate our approach on multiple datasets and with two measurement spaces (Gaussian, Radon). Our results confirm the benefits of adaptive strategies in low-acquisition horizon settings.
[ "['Gianluigi Silvestri' 'Fabio Valerio Massoli' 'Tribhuvanesh Orekondy'\n 'Afshin Abdi' 'Arash Behboodi']" ]
null
null
2407.07796
null
null
http://arxiv.org/pdf/2407.07796v2
2024-07-11T03:46:35Z
2024-07-10T16:14:34Z
Evaluating Large Language Models with Grid-Based Game Competitions: An Extensible LLM Benchmark and Leaderboard
We introduce a novel and extensible benchmark for large language models (LLMs) through grid-based games such as Tic-Tac-Toe, Connect Four, and Gomoku. The open-source game simulation code, available on GitHub, allows LLMs to compete and generates detailed data files in JSON, CSV, TXT, and PNG formats for leaderboard rankings and further analysis. We present the results of games among leading LLMs, including Claude 3.5 Sonnet and Claude 3 Sonnet by Anthropic, Gemini 1.5 Pro and Gemini 1.5 Flash by Google, GPT-4 Turbo and GPT-4o by OpenAI, and Llama3-70B by Meta. We also encourage submissions of results from other LLMs. In total, we simulated 2,310 matches (5 sessions for each pair among 7 LLMs and a random player) across three types of games, using three distinct prompt types: list, illustration, and image. The results revealed significant variations in LLM performance across different games and prompt types, with analysis covering win and disqualification rates, missed opportunity analysis, and invalid move analysis. The details of the leaderboard and result matrix data are available as open-access data on GitHub. This study enhances our understanding of LLMs' capabilities in playing games they were not specifically trained for, helping to assess their rule comprehension and strategic thinking. On the path to Artificial General Intelligence (AGI), this study lays the groundwork for future exploration into their utility in complex decision-making scenarios, illuminating their strategic thinking abilities and offering directions for further inquiry into the limits of LLMs within game-based frameworks.
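A bare-bones version of the match loop such a benchmark runs might look like the following; the `random_player` stands in for an LLM whose move would come from a prompted query, and all interfaces here are hypothetical rather than the project's published code.

```python
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def play_match(pick_x, pick_o):
    """One Tic-Tac-Toe session: each picker maps (board, symbol) to a cell
    index, and an illegal move disqualifies the mover, mirroring the
    benchmark's invalid-move accounting."""
    board = [" "] * 9
    players = [(pick_x, "X"), (pick_o, "O")]
    for turn in range(9):
        pick, sym = players[turn % 2]
        cell = pick(board, sym)
        if not (0 <= cell < 9) or board[cell] != " ":
            return f"{sym} disqualified"
        board[cell] = sym
        if any(all(board[i] == sym for i in line) for line in WINS):
            return f"{sym} wins"
    return "draw"

random_player = lambda board, sym: random.choice(
    [i for i, c in enumerate(board) if c == " "])
print(play_match(random_player, random_player))
```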
[ "['Oguzhan Topsakal' 'Colby Jacob Edell' 'Jackson Bailey Harper']" ]
null
null
2407.07801
null
null
http://arxiv.org/pdf/2407.07801v2
2024-07-11T02:38:14Z
2024-07-10T16:17:49Z
AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning
In recent years, advancements in representation learning and language models have propelled Automated Captioning (AC) to new heights, enabling the generation of human-level descriptions. Leveraging these advancements, we propose AVCap, a simple yet powerful baseline framework for audio-visual captioning. AVCap utilizes audio-visual features as text tokens, which has many advantages not only in performance but also in the extensibility and scalability of the model. AVCap is designed around three pivotal dimensions: the exploration of optimal audio-visual encoder architectures, the adaptation of pre-trained models according to the characteristics of generated text, and the investigation into the efficacy of modality fusion in captioning. Our method outperforms existing audio-visual captioning methods across all metrics, and the code is available at https://github.com/JongSuk1/AVCap
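The core design choice, treating projected audio-visual features as if they were text tokens, can be sketched as follows; the dimensions, vocabulary size, and module names are illustrative assumptions, not AVCap's actual configuration.

```python
import torch
import torch.nn as nn

class AVPrefixEmbedder(nn.Module):
    """Project audio and visual features into the text-embedding space and
    prepend them to caption-token embeddings, so a standard causal text
    decoder attends over them exactly like ordinary tokens."""
    def __init__(self, audio_dim=768, visual_dim=1024, embed_dim=512, vocab=30522):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.tok_embed = nn.Embedding(vocab, embed_dim)

    def forward(self, audio_feats, visual_feats, caption_ids):
        # audio_feats: (B, Ta, audio_dim); visual_feats: (B, Tv, visual_dim)
        prefix = torch.cat([self.audio_proj(audio_feats),
                            self.visual_proj(visual_feats)], dim=1)
        text = self.tok_embed(caption_ids)       # (B, Tt, embed_dim)
        return torch.cat([prefix, text], dim=1)  # sequence fed to the decoder
```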
[ "['Jongsuk Kim' 'Jiwon Shin' 'Junmo Kim']" ]
null
null
2407.07802
null
null
http://arxiv.org/pdf/2407.07802v1
2024-07-10T16:20:53Z
2024-07-10T16:20:53Z
ROSA: Random Subspace Adaptation for Efficient Fine-Tuning
Model training requires significantly more memory than inference. Parameter efficient fine-tuning (PEFT) methods provide a means of adapting large models to downstream tasks using less memory. However, existing methods such as adapters, prompt tuning or low-rank adaptation (LoRA) either introduce latency overhead at inference time or achieve subpar downstream performance compared with full fine-tuning. In this work we propose Random Subspace Adaptation (ROSA), a method that outperforms previous PEFT methods by a significant margin, while maintaining zero latency overhead at inference time. In contrast to previous methods, ROSA is able to adapt subspaces of arbitrarily large dimension, better approximating full fine-tuning. We demonstrate both theoretically and experimentally that this makes ROSA strictly more expressive than LoRA, without consuming additional memory during runtime. As PEFT methods are especially useful in the natural language processing domain, where models operate on scales that make full fine-tuning very expensive, we evaluate ROSA in two common NLP scenarios: natural language generation (NLG) and natural language understanding (NLU) with GPT-2 and RoBERTa, respectively. We show that on almost every GLUE task ROSA outperforms LoRA by a significant margin, while also outperforming LoRA on NLG tasks. Our code is available at https://github.com/rosa-paper/rosa
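One plausible reading of the mechanism, sketched below, assumes LoRA-style trainable factors inside a randomly drawn subspace that are periodically merged into the frozen weights and resampled, so successive subspaces accumulate into an adaptation of arbitrarily large dimension; this is an interpretation of the abstract, not the repository's code.

```python
import torch

def sample_rosa_factors(W, rank=8):
    """Expose a trainable random subspace of a frozen weight matrix W as
    factors (A, B); calling `merge` folds the learned update back into W,
    after which a fresh subspace can be sampled. Merging is why there is
    no extra latency at inference: W stays one dense matrix."""
    out_dim, in_dim = W.shape
    A = (0.01 * torch.randn(out_dim, rank)).requires_grad_()  # random directions
    B = torch.zeros(rank, in_dim, requires_grad=True)         # zero-init update
    def merge():
        with torch.no_grad():
            W.add_(A @ B)
    return A, B, merge
```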
[ "['Marawan Gamal Abdel Hameed' 'Aristides Milios' 'Siva Reddy'\n 'Guillaume Rabusseau']" ]
null
null
2407.07810
null
null
http://arxiv.org/pdf/2407.07810v1
2024-07-10T16:30:27Z
2024-07-10T16:30:27Z
Transformer Alignment in Large Language Models
Large Language Models (LLMs) have made significant strides in natural language processing, and a precise understanding of the internal mechanisms driving their success is essential. We regard LLMs as transforming embeddings via a discrete, coupled, nonlinear, dynamical system in high dimensions. This perspective motivates tracing the trajectories of individual tokens as they pass through transformer blocks, and linearizing the system along these trajectories through their Jacobian matrices. In our analysis of 38 openly available LLMs, we uncover the alignment of top left and right singular vectors of Residual Jacobians, as well as the emergence of linearity and layer-wise exponential growth. Notably, we discover that increased alignment $\textit{positively correlates}$ with model performance. Metrics evaluated post-training show significant improvement in comparison to measurements made with randomly initialized weights, highlighting the significant effects of training in transformers. These findings reveal a remarkable level of regularity that has previously been overlooked, reinforcing the dynamical interpretation and paving the way for deeper understanding and optimization of LLM architectures.
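The alignment statistic at the center of the analysis can be computed directly; the sketch below measures it for a single token through one residual block, with the toy block and dimension being illustrative assumptions.

```python
import torch

def residual_jacobian_alignment(block, x):
    """Alignment |<u1, v1>| between the top left and right singular vectors
    of a residual block's Jacobian, evaluated at one token embedding x."""
    J = torch.autograd.functional.jacobian(block, x)  # (d, d) for x of shape (d,)
    U, S, Vh = torch.linalg.svd(J)
    return (U[:, 0] @ Vh[0, :]).abs().item()

# Toy residual block; an actual study would wrap a pretrained transformer block.
d = 16
mlp = torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.GELU(), torch.nn.Linear(d, d))
print(residual_jacobian_alignment(lambda x: x + mlp(x), torch.randn(d)))
```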
[ "['Murdock Aubry' 'Haoming Meng' 'Anton Sugolov' 'Vardan Papyan']" ]
null
null
2407.07818
null
null
http://arxiv.org/pdf/2407.07818v1
2024-07-10T16:43:14Z
2024-07-10T16:43:14Z
The Misclassification Likelihood Matrix: Some Classes Are More Likely To Be Misclassified Than Others
This study introduces the Misclassification Likelihood Matrix (MLM) as a novel tool for quantifying the reliability of neural network predictions under distribution shifts. The MLM is obtained by leveraging softmax outputs and clustering techniques to measure the distances between the predictions of a trained neural network and class centroids. By analyzing these distances, the MLM provides a comprehensive view of the model's misclassification tendencies, enabling decision-makers to identify the most common and critical sources of errors. The MLM allows for the prioritization of model improvements and the establishment of decision thresholds based on acceptable risk levels. The approach is evaluated on the MNIST dataset using a Convolutional Neural Network (CNN) and a perturbed version of the dataset to simulate distribution shifts. The results demonstrate the effectiveness of the MLM in assessing the reliability of predictions and highlight its potential in enhancing the interpretability and risk mitigation capabilities of neural networks. The implications of this work extend beyond image classification, with ongoing applications in autonomous systems, such as self-driving cars, to improve the safety and reliability of decision-making in complex, real-world environments.
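The construction described, centroids from correct predictions and then distances from each class's samples to every centroid, reduces to a few lines; the Euclidean metric and the input handling are assumptions where the abstract leaves details open.

```python
import numpy as np

def misclassification_likelihood_matrix(softmax_outputs, labels, n_classes):
    """M[i, j] = mean distance from class-i samples to centroid j, where
    centroid j is the mean softmax output over correctly classified
    class-j samples; small off-diagonal entries flag likely confusions."""
    preds = softmax_outputs.argmax(axis=1)
    centroids = np.stack([
        softmax_outputs[(labels == c) & (preds == c)].mean(axis=0)
        for c in range(n_classes)])
    M = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        rows = softmax_outputs[labels == i]                     # (n_i, n_classes)
        M[i] = np.linalg.norm(rows[:, None, :] - centroids[None],
                              axis=2).mean(axis=0)
    return M
```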
[ "['Daniel Sikar' 'Artur Garcez' 'Robin Bloomfield' 'Tillman Weyde'\n 'Kaleem Peeroo' 'Naman Singh' 'Maeve Hutchinson' 'Mirela Reljan-Delaney']" ]
null
null
2407.07821
null
null
http://arxiv.org/pdf/2407.07821v1
2024-07-10T16:45:52Z
2024-07-10T16:45:52Z
When to Accept Automated Predictions and When to Defer to Human Judgment?
Ensuring the reliability and safety of automated decision-making is crucial. It is well-known that data distribution shifts in machine learning can produce unreliable outcomes. This paper proposes a new approach for measuring the reliability of predictions under distribution shifts. We analyze how the outputs of a trained neural network change using clustering to measure distances between outputs and class centroids. We propose this distance as a metric to evaluate the confidence of predictions under distribution shifts. We assign each prediction to a cluster with centroid representing the mean softmax output for all correct predictions of a given class. We then define a safety threshold for a class as the smallest distance from an incorrect prediction to the given class centroid. We evaluate the approach on the MNIST and CIFAR-10 datasets using a Convolutional Neural Network and a Vision Transformer, respectively. The results show that our approach is consistent across these data sets and network models, and indicate that the proposed metric can offer an efficient way of determining when automated predictions are acceptable and when they should be deferred to human operators given a distribution shift.
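The accept-or-defer rule follows directly from the definitions in the abstract; a compact sketch, assuming Euclidean distance in softmax space and precomputed centroids:

```python
import numpy as np

def safety_threshold(outputs, labels, preds, centroid, cls):
    """Per-class threshold: the smallest distance from an incorrect
    prediction of class `cls` to that class's centroid. A new prediction
    closer to the centroid than this has never been observed wrong and is
    accepted; anything at or beyond it is deferred to a human operator."""
    wrong = outputs[(preds == cls) & (labels != cls)]
    if len(wrong) == 0:
        return np.inf  # no observed errors for this class
    return np.linalg.norm(wrong - centroid, axis=1).min()
```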
[ "['Daniel Sikar' 'Artur Garcez' 'Tillman Weyde' 'Robin Bloomfield'\n 'Kaleem Peeroo']" ]
null
null
2407.07827
null
null
http://arxiv.org/pdf/2407.07827v1
2024-07-10T16:50:59Z
2024-07-10T16:50:59Z
Estimating the stability number of a random graph using convolutional neural networks
Graph combinatorial optimization problems are widely applicable and notoriously difficult to compute; for example, consider the traveling salesman or facility location problems. In this paper, we explore the feasibility of using convolutional neural networks (CNNs) on graph images to predict the cardinality of combinatorial properties of random graphs and networks. Specifically, we use image representations of modified adjacency matrices of random graphs as training samples for a CNN model to predict the stability number of random graphs, where the stability number is the cardinality of a maximum set of vertices, no two of which are adjacent. Our approach demonstrates the potential for applying deep learning in combinatorial optimization problems.
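The input encoding is the interesting step; below is a sketch of turning an adjacency matrix into a fixed-size image for a CNN regressor, where the padding scheme and the diagonal marking are illustrative guesses at what a "modified adjacency matrix" image could look like, not the paper's exact encoding.

```python
import numpy as np

def graph_to_image(adj, size=64):
    """Render a 0/1 adjacency matrix as a single-channel image: pad the
    (possibly smaller) matrix into a fixed frame so graphs of different
    orders share one CNN input shape, and mark vertices on the diagonal."""
    n = adj.shape[0]
    img = np.zeros((size, size), dtype=np.float32)
    img[:n, :n] = adj
    np.fill_diagonal(img[:n, :n], 0.5)
    return img[None]  # add a channel axis
```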
[ "['Randy Davila']" ]
null
null
2407.07829
null
null
http://arxiv.org/pdf/2407.07829v1
2024-07-10T16:51:32Z
2024-07-10T16:51:32Z
Disentangled Representation Learning through Geometry Preservation with the Gromov-Monge Gap
Learning disentangled representations in an unsupervised manner is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. While remarkably difficult to solve in general, recent works have shown that disentanglement is provably achievable under additional assumptions that can leverage geometrical constraints, such as local isometry. To use these insights, we propose a novel perspective on disentangled representation learning built on quadratic optimal transport. Specifically, we formulate the problem in the Gromov-Monge setting, which seeks isometric mappings between distributions supported on different spaces. We propose the Gromov-Monge-Gap (GMG), a regularizer that quantifies the geometry-preservation of an arbitrary push-forward map between two distributions supported on different spaces. We demonstrate the effectiveness of GMG regularization for disentanglement on four standard benchmarks. Moreover, we show that geometry preservation can even encourage unsupervised disentanglement without the standard reconstruction objective - making the underlying model decoder-free, and promising a more practically viable and scalable perspective on unsupervised disentanglement.
[ "['Théo Uscidda' 'Luca Eyring' 'Karsten Roth' 'Fabian Theis' 'Zeynep Akata'\n 'Marco Cuturi']" ]
null
null
2407.07848
null
null
http://arxiv.org/pdf/2407.07848v1
2024-07-10T17:10:10Z
2024-07-10T17:10:10Z
Uncovering Layer-Dependent Activation Sparsity Patterns in ReLU Transformers
Previous work has demonstrated that MLPs within ReLU Transformers exhibit high levels of sparsity, with many of their activations equal to zero for any given token. We build on that work to more deeply explore how token-level sparsity evolves over the course of training, and how it connects to broader sparsity patterns over the course of a sequence or batch, demonstrating that the different layers within small transformers exhibit distinctly layer-specific patterns on both of these fronts. In particular, we demonstrate that the first and last layer of the network have distinctive and in many ways inverted relationships to sparsity, and explore implications for the structure of feature representations being learned at different depths of the model. We additionally explore the phenomenon of ReLU dimensions "turning off", and show evidence suggesting that "neuron death" is being primarily driven by the dynamics of training, rather than simply occurring randomly or accidentally as a result of outliers.
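Token-level sparsity as used here is simply the fraction of exactly-zero activations; a one-liner, assuming the MLP activations have already been collected into a tensor:

```python
import torch

def token_sparsity(mlp_acts):
    """Fraction of ReLU activations that are exactly zero, per layer and
    per token. mlp_acts: (layers, tokens, hidden). Comparing the first and
    last rows of the result exposes layer-specific sparsity patterns."""
    return (mlp_acts == 0).float().mean(dim=-1)  # (layers, tokens)

print(token_sparsity(torch.relu(torch.randn(4, 8, 32))))
```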
[ "['Cody Wild' 'Jesper Anderson']" ]
null
null
2407.07852
null
null
http://arxiv.org/pdf/2407.07852v1
2024-07-10T17:13:17Z
2024-07-10T17:13:17Z
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
OpenDiLoCo is an open-source implementation and replication of the Distributed Low-Communication (DiLoCo) training method for large language models. We provide a reproducible implementation of the DiLoCo experiments, offering it within a scalable, decentralized training framework using the Hivemind library. We demonstrate its effectiveness by training a model across two continents and three countries, while maintaining 90-95% compute utilization. Additionally, we conduct ablation studies focusing on the algorithm's compute efficiency and scalability in the number of workers, and show that its gradients can be all-reduced using FP16 without any performance degradation. Furthermore, we scale OpenDiLoCo to 3x the size of the original work, demonstrating its effectiveness for billion parameter models.
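The synchronization step that makes DiLoCo low-communication can be sketched as follows; the optimizer wiring and parameter layout are simplifying assumptions, though the FP16 all-reduce of pseudo-gradients is the behavior the ablation reports.

```python
import torch
import torch.distributed as dist

def diloco_outer_step(global_params, local_params, outer_opt):
    """After a long phase of local inner steps on each worker, all-reduce
    the pseudo-gradient (global minus local parameters) in FP16, then let
    the outer optimizer update the shared global parameters."""
    for g, l in zip(global_params, local_params):
        delta = (g.detach() - l.detach()).half()  # FP16 on the wire
        dist.all_reduce(delta)                    # sum across workers
        g.grad = delta.float() / dist.get_world_size()
    outer_opt.step()   # e.g. SGD with Nesterov momentum over global_params
    outer_opt.zero_grad()
```

Because workers only communicate at these outer steps, the scheme tolerates the slow inter-continental links the authors train over.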
[ "['Sami Jaghouar' 'Jack Min Ong' 'Johannes Hagemann']" ]
null
null
2407.07858
null
null
http://arxiv.org/pdf/2407.07858v1
2024-07-10T17:20:59Z
2024-07-10T17:20:59Z
FACTS About Building Retrieval Augmented Generation-based Chatbots
Enterprise chatbots, powered by generative AI, are emerging as key applications to enhance employee productivity. Retrieval Augmented Generation (RAG), Large Language Models (LLMs), and orchestration frameworks like Langchain and Llamaindex are crucial for building these chatbots. However, creating effective enterprise chatbots is challenging and requires meticulous RAG pipeline engineering. This includes fine-tuning embeddings and LLMs, extracting documents from vector databases, rephrasing queries, reranking results, designing prompts, honoring document access controls, providing concise responses, including references, safeguarding personal information, and building orchestration agents. We present a framework for building RAG-based chatbots based on our experience with three NVIDIA chatbots: for IT/HR benefits, financial earnings, and general content. Our contributions are three-fold: introducing the FACTS framework (Freshness, Architectures, Cost, Testing, Security), presenting fifteen RAG pipeline control points, and providing empirical results on accuracy-latency tradeoffs between large and small LLMs. To the best of our knowledge, this is the first paper of its kind that provides a holistic view of the factors as well as solutions for building secure enterprise-grade chatbots.
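Stripped of the fifteen control points, the pipeline the paper engineers reduces to a retrieve-rerank-generate skeleton; every interface below (`retriever`, `reranker`, `llm`) is a hypothetical stand-in for a vector store, a cross-encoder, and a hosted model, not the FACTS framework's actual API.

```python
def answer(query, retriever, reranker, llm, k=20, top=5):
    """Minimal RAG loop: fetch candidates from a vector store, rerank them,
    and prompt the LLM to answer strictly from the retained context."""
    candidates = retriever.search(query, k=k)       # vector-DB lookup
    docs = reranker.rank(query, candidates)[:top]   # keep the best passages
    context = "\n\n".join(d.text for d in docs)
    prompt = (f"Answer using only the context below and cite sources.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return llm.generate(prompt), [d.source for d in docs]
```

Each control point in the paper (query rephrasing, access controls, reference formatting, and so on) slots in around one of these three calls.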
[ "['Rama Akkiraju' 'Anbang Xu' 'Deepak Bora' 'Tan Yu' 'Lu An' 'Vishal Seth'\n 'Aaditya Shukla' 'Pritam Gundecha' 'Hridhay Mehta' 'Ashwin Jha'\n 'Prithvi Raj' 'Abhinav Balasubramanian' 'Murali Maram' 'Guru Muthusamy'\n 'Shivakesh Reddy Annepally' 'Sidney Knowles' 'Min Du' 'Nick Burnett'\n 'Sean Javiya' 'Ashok Marannan' 'Mamta Kumari' 'Surbhi Jha'\n 'Ethan Dereszenski' 'Anupam Chakraborty' 'Subhash Ranjan' 'Amina Terfai'\n 'Anoop Surya' 'Tracey Mercer' 'Vinodh Kumar Thanigachalam' 'Tamar Bar'\n 'Sanjana Krishnan' 'Samy Kilaru' 'Jasmine Jaksic' 'Nave Algarici'\n 'Jacob Liberman' 'Joey Conway' 'Sonu Nayyar' 'Justin Boitano']" ]
null
null
2407.07868
null
null
http://arxiv.org/pdf/2407.07868v1
2024-07-10T17:32:05Z
2024-07-10T17:32:05Z
Green Screen Augmentation Enables Scene Generalisation in Robotic Manipulation
Generalising vision-based manipulation policies to novel environments remains a challenging area with limited exploration. Current practices involve collecting data in one location, training imitation learning or reinforcement learning policies with this data, and deploying the policy in the same location. However, this approach lacks scalability as it necessitates data collection in multiple locations for each task. This paper proposes a novel approach where data is collected in a location predominantly featuring green screens. We introduce Green-screen Augmentation (GreenAug), employing a chroma key algorithm to overlay background textures onto a green screen. Through extensive real-world empirical studies with over 850 training demonstrations and 8.2k evaluation episodes, we demonstrate that GreenAug surpasses no augmentation, standard computer vision augmentation, and prior generative augmentation methods in performance. While no algorithmic novelties are claimed, our paper advocates for a fundamental shift in data collection practices. We propose that real-world demonstrations in future research should utilise green screens, followed by the application of GreenAug. We believe GreenAug unlocks policy generalisation to visually distinct novel locations, addressing the current scene generalisation limitations in robot learning.
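Chroma keying itself is standard; a minimal OpenCV sketch of the augmentation follows, where the HSV hue band and the uniform texture replacement are illustrative defaults that would need tuning, and GreenAug's generative and masking variants are not shown.

```python
import cv2
import numpy as np

def green_screen_augment(frame_bgr, texture_bgr):
    """Replace green-screen pixels with a background texture: mask pixels
    in a green HSV band, then paste the texture through the mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))  # green band
    texture = cv2.resize(texture_bgr, frame_bgr.shape[1::-1])
    out = frame_bgr.copy()
    out[mask > 0] = texture[mask > 0]
    return out
```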
[ "['Eugene Teoh' 'Sumit Patidar' 'Xiao Ma' 'Stephen James']" ]
null
null
2407.07873
null
null
http://arxiv.org/pdf/2407.07873v1
2024-07-10T17:39:50Z
2024-07-10T17:39:50Z
Dynamical Measure Transport and Neural PDE Solvers for Sampling
The task of sampling from a probability density can be approached as transporting a tractable density function to the target, known as dynamical measure transport. In this work, we tackle it through a principled unified framework using deterministic or stochastic evolutions described by partial differential equations (PDEs). This framework incorporates prior trajectory-based sampling methods, such as diffusion models or Schrödinger bridges, without relying on the concept of time-reversals. Moreover, it allows us to propose novel numerical methods for solving the transport task and thus sampling from complicated targets without the need for the normalization constant or data samples. We employ physics-informed neural networks (PINNs) to approximate the respective PDE solutions, implying both conceptual and computational advantages. In particular, PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently, leading to significantly better mode coverage in the sampling task compared to alternative methods. Moreover, they can readily be fine-tuned with Gauss-Newton methods to achieve high accuracy in sampling.
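In the deterministic case, the transport constraint the PINN is trained to satisfy is the continuity equation; a standard statement of it follows, where $v_\theta$ denotes the learned velocity field and the boundary densities are the tractable prior and the target (the notation is generic, not necessarily the paper's).

```latex
% Continuity equation transporting a tractable prior \rho_0 to the target \rho_1;
% the PINN is penalized on the residual of the left-hand side.
\partial_t \rho_t(x) + \nabla \cdot \bigl( \rho_t(x)\, v_\theta(x, t) \bigr) = 0,
\qquad \rho_{t=0} = \rho_0, \quad \rho_{t=1} = \rho_1 .
```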
[ "['Jingtong Sun' 'Julius Berner' 'Lorenz Richter' 'Marius Zeinhofer'\n 'Johannes Müller' 'Kamyar Azizzadenesheli' 'Anima Anandkumar']" ]
null
null
2407.07874
null
null
http://arxiv.org/pdf/2407.07874v2
2024-07-11T16:18:40Z
2024-07-10T17:40:30Z
Toto: Time Series Optimized Transformer for Observability
This technical report describes the Time Series Optimized Transformer for Observability (Toto), a new state of the art foundation model for time series forecasting developed by Datadog. In addition to advancing the state of the art on generalized time series benchmarks in domains such as electricity and weather, this model is the first general-purpose time series forecasting foundation model to be specifically tuned for observability metrics. Toto was trained on a dataset of one trillion time series data points, the largest among all currently published time series foundation models. Alongside publicly available time series datasets, 75% of the data used to train Toto consists of fully anonymous numerical metric data points from the Datadog platform. In our experiments, Toto outperforms existing time series foundation models on observability data. It does this while also excelling at general-purpose forecasting tasks, achieving state-of-the-art zero-shot performance on multiple open benchmark datasets.
[ "['Ben Cohen' 'Emaad Khwaja' 'Kan Wang' 'Charles Masson' 'Elise Ramé'\n 'Youssef Doubli' 'Othmane Abou-Amal']" ]
null
null
2407.07875
null
null
http://arxiv.org/pdf/2407.07875v1
2024-07-10T17:41:10Z
2024-07-10T17:41:10Z
Generative Image as Action Models
Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to 'draw joint-actions' as targets on RGB images. These images are fed into a controller that maps the visual targets into a sequence of joint-positions. We study GENIMA on 25 RLBench and 9 real-world manipulation tasks. We find that, by lifting actions into image-space, internet pre-trained diffusion models can generate policies that outperform state-of-the-art visuomotor approaches, especially in robustness to scene perturbations and generalizing to novel objects. Our method is also competitive with 3D agents, despite lacking priors such as depth, keypoints, or motion-planners.
[ "['Mohit Shridhar' 'Yat Long Lo' 'Stephen James']" ]
null
null
2407.07880
null
null
http://arxiv.org/pdf/2407.07880v1
2024-07-10T17:48:25Z
2024-07-10T17:48:25Z
Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization
This study addresses the challenge of noise in training datasets for Direct Preference Optimization (DPO), a method for aligning Large Language Models (LLMs) with human preferences. We categorize noise into pointwise noise, which includes low-quality data points, and pairwise noise, which encompasses erroneous data pair associations that affect preference rankings. Utilizing Distributionally Robust Optimization (DRO), we enhance DPO's resilience to these types of noise. Our theoretical insights reveal that DPO inherently embeds DRO principles, conferring robustness to pointwise noise, with the regularization coefficient $\beta$ playing a critical role in its noise resistance. Extending this framework, we introduce Distributionally Robustifying DPO (Dr. DPO), which integrates pairwise robustness by optimizing against worst-case pairwise scenarios. The novel hyperparameter $\beta'$ in Dr. DPO allows for fine-tuned control over data pair reliability, providing a strategic balance between exploration and exploitation in noisy training environments. Empirical evaluations demonstrate that Dr. DPO substantially improves the quality of generated text and response accuracy in preference datasets, showcasing enhanced performance in both noisy and noise-free settings. The code is available at https://github.com/junkangwu/Dr_DPO.
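For reference, the pointwise DPO objective that Dr. DPO robustifies is short enough to state in code; the sketch below is the standard DPO loss with its $\beta$ margin scale, while the $\beta'$-controlled worst-case pairwise reweighting that defines Dr. DPO is omitted.

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO: maximize the implicit reward margin between chosen (w)
    and rejected (l) responses, measured against a frozen reference model;
    beta scales the margin and governs robustness to pointwise noise."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```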
[ "['Junkang Wu' 'Yuexiang Xie' 'Zhengyi Yang' 'Jiancan Wu' 'Jiawei Chen'\n 'Jinyang Gao' 'Bolin Ding' 'Xiang Wang' 'Xiangnan He']" ]
null
null
2407.07884
null
null
http://arxiv.org/pdf/2407.07884v1
2024-07-10T17:51:33Z
2024-07-10T17:51:33Z
Vegetable Peeling: A Case Study in Constrained Dexterous Manipulation
Recent studies have made significant progress in addressing dexterous manipulation problems, particularly in in-hand object reorientation. However, there are few existing works that explore the potential utilization of developed dexterous manipulation controllers for downstream tasks. In this study, we focus on constrained dexterous manipulation for food peeling. Food peeling presents various constraints on the reorientation controller, such as the requirement for the hand to securely hold the object after reorientation for peeling. We propose a simple system for learning a reorientation controller that facilitates the subsequent peeling task. Videos are available at: https://taochenshh.github.io/projects/veg-peeling.
[ "['Tao Chen' 'Eric Cousineau' 'Naveen Kuppuswamy' 'Pulkit Agrawal']" ]
null
null
2407.07885
null
null
http://arxiv.org/pdf/2407.07885v1
2024-07-10T17:52:30Z
2024-07-10T17:52:30Z
Learning In-Hand Translation Using Tactile Skin With Shear and Normal Force Sensing
Recent progress in reinforcement learning (RL) and tactile sensing has significantly advanced dexterous manipulation. However, these methods often utilize simplified tactile signals due to the gap between tactile simulation and the real world. We introduce a sensor model for tactile skin that enables zero-shot sim-to-real transfer of ternary shear and binary normal forces. Using this model, we develop an RL policy that leverages sliding contact for dexterous in-hand translation. We conduct extensive real-world experiments to assess how tactile sensing facilitates policy adaptation to various unseen object properties and robot hand orientations. We demonstrate that our 3-axis tactile policies consistently outperform baselines that use only shear forces, only normal forces, or only proprioception. Website: https://jessicayin.github.io/tactile-skin-rl/
[ "['Jessica Yin' 'Haozhi Qi' 'Jitendra Malik' 'James Pikul' 'Mark Yim'\n 'Tess Hellebrekers']" ]
null
null
2407.07889
null
null
http://arxiv.org/pdf/2407.07889v1
2024-07-10T17:57:04Z
2024-07-10T17:57:04Z
AdaptiGraph: Material-Adaptive Graph-Based Neural Dynamics for Robotic Manipulation
Predictive models are a crucial component of many robotic systems. Yet, constructing accurate predictive models for a variety of deformable objects, especially those with unknown physical properties, remains a significant challenge. This paper introduces AdaptiGraph, a learning-based dynamics modeling approach that enables robots to predict, adapt to, and control a wide array of challenging deformable materials with unknown physical properties. AdaptiGraph leverages the highly flexible graph-based neural dynamics (GBND) framework, which represents material bits as particles and employs a graph neural network (GNN) to predict particle motion. Its key innovation is a unified physical property-conditioned GBND model capable of predicting the motions of diverse materials with varying physical properties without retraining. Upon encountering new materials during online deployment, AdaptiGraph utilizes a physical property optimization process for a few-shot adaptation of the model, enhancing its fit to the observed interaction data. The adapted models can precisely simulate the dynamics and predict the motion of various deformable materials, such as ropes, granular media, rigid boxes, and cloth, while adapting to different physical properties, including stiffness, granular size, and center of pressure. On prediction and manipulation tasks involving a diverse set of real-world deformable objects, our method exhibits superior prediction accuracy and task proficiency over non-material-conditioned and non-adaptive models. The project page is available at https://robopil.github.io/adaptigraph/ .
[ "['Kaifeng Zhang' 'Baoyu Li' 'Kris Hauser' 'Yunzhu Li']" ]
null
null
2407.07890
null
null
http://arxiv.org/pdf/2407.07890v1
2024-07-10T17:57:58Z
2024-07-10T17:57:58Z
Training on the Test Task Confounds Evaluation and Emergence
We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices like training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of techniques to include task-relevant data in the pretraining stage of a language model. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for training on the test task by fine-tuning each model under comparison on the same task-relevant data before evaluation. We then show that instances of emergent behavior largely vanish once we adjust for training on the test task. This also applies to reported instances of emergent behavior that cannot be explained by the choice of evaluation metric. Our work promotes a new perspective on the evaluation of large language models with broad implications for benchmarking and the study of emergent capabilities.
[ "['Ricardo Dominguez-Olmedo' 'Florian E. Dorner' 'Moritz Hardt']" ]
null
null
2407.07895
null
null
http://arxiv.org/pdf/2407.07895v1
2024-07-10T17:59:43Z
2024-07-10T17:59:43Z
LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models
Visual instruction tuning has made considerable strides in enhancing the capabilities of Large Multimodal Models (LMMs). However, existing open LMMs largely focus on single-image tasks, and their application to multi-image scenarios remains less explored. Additionally, prior LMM research tackles different scenarios separately, making it impossible to generalize across scenarios with new emerging capabilities. To this end, we introduce LLaVA-NeXT-Interleave, which simultaneously tackles Multi-image, Multi-frame (video), Multi-view (3D), and Multi-patch (single-image) scenarios in LMMs. To enable these capabilities, we regard the interleaved data format as a general template and compile the M4-Instruct dataset with 1,177.6k samples, spanning 4 primary domains with 14 tasks and 41 datasets. We also curate the LLaVA-Interleave Bench to comprehensively evaluate the multi-image performance of LMMs. Through extensive experiments, LLaVA-NeXT-Interleave achieves leading results in multi-image, video, and 3D benchmarks, while maintaining the performance of single-image tasks. Besides, our model also exhibits several emerging capabilities, e.g., transferring tasks across different settings and modalities. Code is available at https://github.com/LLaVA-VL/LLaVA-NeXT
[ "['Feng Li' 'Renrui Zhang' 'Hao Zhang' 'Yuanhan Zhang' 'Bo Li' 'Wei Li'\n 'Zejun Ma' 'Chunyuan Li']" ]