categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
sequence |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2407.06206 | null | null | http://arxiv.org/pdf/2407.06206v1 | 2024-07-01T21:09:31Z | 2024-07-01T21:09:31Z | The Impact of an XAI-Augmented Approach on Binary Classification with
Scarce Data | Point-of-Care Ultrasound (POCUS) is the practice of clinicians conducting and interpreting ultrasound scans right at the patient's bedside. However, the expertise needed to interpret these images is considerable and may not always be present in emergency situations. This reality makes algorithms such as machine learning classifiers extremely valuable to augment human decisions. POCUS devices are becoming available at a reasonable cost in the size of a mobile phone. The challenge of turning POCUS devices into life-saving tools is that interpretation of ultrasound images requires specialist training and experience. Unfortunately, the difficulty to obtain positive training images represents an important obstacle to building efficient and accurate classifiers. Hence, the problem we try to investigate is how to explore strategies to increase accuracy of classifiers trained with scarce data. We hypothesize that training with a few data instances may not suffice for classifiers to generalize causing them to overfit. Our approach uses an Explainable AI-Augmented approach to help the algorithm learn more from less and potentially help the classifier better generalize. | [
"['Ximing Wen' 'Rosina O. Weber' 'Anik Sen' 'Darryl Hannan'\n 'Steven C. Nesbit' 'Vincent Chan' 'Alberto Goffi' 'Michael Morris'\n 'John C. Hunninghake' 'Nicholas E. Villalobos' 'Edward Kim'\n 'Christopher J. MacLellan']"
] |
null | null | 2407.06209 | null | null | http://arxiv.org/pdf/2407.06209v1 | 2024-07-03T16:39:32Z | 2024-07-03T16:39:32Z | Self-supervised Pretraining for Partial Differential Equations | In this work, we describe a novel approach to building a neural PDE solver leveraging recent advances in transformer based neural network architectures. Our model can provide solutions for different values of PDE parameters without any need for retraining the network. The training is carried out in a self-supervised manner, similar to pretraining approaches applied in language and vision tasks. We hypothesize that the model is in effect learning a family of operators (for multiple parameters) mapping the initial condition to the solution of the PDE at any future time step t. We compare this approach with the Fourier Neural Operator (FNO), and demonstrate that it can generalize over the space of PDE parameters, despite having a higher prediction error for individual parameter values compared to the FNO. We show that performance on a specific parameter can be improved by finetuning the model with very small amounts of data. We also demonstrate that the model scales with data as well as model size. | [
"['Varun Madhavan' 'Amal S Sebastian' 'Bharath Ramsundar'\n 'Venkatasubramanian Viswanathan']"
] |
null | null | 2407.06211 | null | null | http://arxiv.org/pdf/2407.06211v1 | 2024-07-03T17:13:04Z | 2024-07-03T17:13:04Z | Synthetic data: How could it be used for infectious disease research? | Over the last three to five years, it has become possible to generate machine learning synthetic data for healthcare-related uses. However, concerns have been raised about potential negative factors associated with the possibilities of artificial dataset generation. These include the potential misuse of generative artificial intelligence (AI) in fields such as cybercrime, the use of deepfakes and fake news to deceive or manipulate, and displacement of human jobs across various market sectors. Here, we consider both current and future positive advances and possibilities with synthetic datasets. Synthetic data offers significant benefits, particularly in data privacy, research, in balancing datasets and reducing bias in machine learning models. Generative AI is an artificial intelligence genre capable of creating text, images, video or other data using generative models. The recent explosion of interest in GenAI was heralded by the invention and speedy move to use of large language models (LLM). These computational models are able to achieve general-purpose language generation and other natural language processing tasks and are based on transformer architectures, which made an evolutionary leap from previous neural network architectures. Fuelled by the advent of improved GenAI techniques and wide scale usage, this is surely the time to consider how synthetic data can be used to advance infectious disease research. In this commentary we aim to create an overview of the current and future position of synthetic data in infectious disease research. | [
"['Styliani-Christina Fragkouli' 'Dhwani Solanki' 'Leyla J Castro'\n 'Fotis E Psomopoulos' 'Núria Queralt-Rosinach' 'Davide Cirillo'\n 'Lisa C Crossman']"
] |
null | null | 2407.06212 | null | null | http://arxiv.org/pdf/2407.06212v1 | 2024-07-04T08:02:34Z | 2024-07-04T08:02:34Z | Bias Correction in Machine Learning-based Classification of Rare Events | Online platform businesses can be identified by using web-scraped texts. This is a classification problem that combines elements of natural language processing and rare event detection. Because online platforms are rare, accurately identifying them with Machine Learning algorithms is challenging. Here, we describe the development of a Machine Learning-based text classification approach that reduces the number of false positives as much as possible. It greatly reduces the bias in the estimates obtained by using calibrated probabilities and ensembles. | [
"['Luuk Gubbels' 'Marco Puts' 'Piet Daas']"
] |
null | null | 2407.06216 | null | null | http://arxiv.org/pdf/2407.06216v2 | 2024-07-10T15:06:33Z | 2024-07-04T17:20:36Z | Digital twin with automatic disturbance detection for real-time
optimization of a semi-autogenous grinding (SAG) mill | This work describes the development and validation of a digital twin for a semi-autogenous grinding (SAG) mill controlled by an expert system. The digital twin consists of three modules emulating a closed-loop system: fuzzy logic for the expert control, a state-space model for regulatory control, and a recurrent neural network for the SAG mill process. The model was trained with 68 hours of data and validated with 8 hours of test data. It predicts the mill's behavior within a 2.5-minute horizon with a 30-second sampling time. The disturbance detection evaluates the need for retraining, and the digital twin shows promise for supervising the SAG mill with the expert control system. Future work will focus on integrating this digital twin into real-time optimization strategies with industrial validation. | [
"['Paulina Quintanilla' 'Francisco Fernández' 'Cristobal Mancilla'\n 'Matías Rojas' 'Mauricio Estrada' 'Daniel Navia']"
] |
null | null | 2407.06221 | null | null | http://arxiv.org/pdf/2407.06221v1 | 2024-07-05T07:24:49Z | 2024-07-05T07:24:49Z | Hybrid Machine Learning Approach For Real-Time Malicious Url Detection
Using Som-Rmo And Rbfn With Tabu Search Optimization | The proliferation of malicious URLs has become a significant threat to internet security, encompassing SPAM, phishing, malware, and defacement attacks. Traditional detection methods struggle to keep pace with the evolving nature of these threats. Detecting malicious URLs in real-time requires advanced techniques capable of handling large datasets and identifying novel attack patterns. The challenge lies in developing a robust model that combines efficient feature extraction with accurate classification. We propose a hybrid machine learning approach combining Self-Organizing Map based Radial Movement Optimization (SOM-RMO) for feature extraction and Radial Basis Function Network (RBFN) based Tabu Search for classification. SOM-RMO effectively reduces dimensionality and highlights significant features, while RBFN, optimized with Tabu Search, classifies URLs with high precision. The proposed model demonstrates superior performance in detecting various malicious URL attacks. On a benchmark dataset, our approach achieved an accuracy of 96.5%, precision of 95.2%, recall of 94.8%, and an F1-score of 95.0%, outperforming traditional methods significantly. | [
"['Swetha T' 'Seshaiah M' 'Hemalatha KL' 'ManjunathaKumar BH' 'Murthy SVN']"
] |
null | null | 2407.06226 | null | null | http://arxiv.org/pdf/2407.06226v1 | 2024-07-06T14:16:31Z | 2024-07-06T14:16:31Z | Quantum Machine Learning with Application to Progressive Supranuclear
Palsy Network Classification | Machine learning and quantum computing are being progressively explored to shed light on possible computational approaches to deal with hitherto unsolvable problems. Classical methods for machine learning are ubiquitous in pattern recognition, with support vector machines (SVMs) being a prominent technique for network classification. However, there are limitations to the successful resolution of such classification instances when the input feature space becomes large, and the successive evaluation of so-called kernel functions becomes computationally exorbitant. The use of principal component analysis (PCA) substantially minimizes the dimensionality of feature space thereby enabling computational speed-ups of supervised learning: the creation of a classifier. Further, the application of quantum-based learning to the PCA reduced input feature space might offer an exponential speedup with fewer parameters. The present learning model is evaluated on a real clinical application: the diagnosis of Progressive Supranuclear Palsy (PSP) disorder. The results suggest that quantum machine learning has led to noticeable advancement and outperforms classical frameworks. The optimized variational quantum classifier classifies the PSP dataset with 86% accuracy as compared to conventional SVM. The other technique, a quantum kernel estimator, approximates the kernel function on the quantum machine and optimizes a classical SVM. In particular, we have demonstrated the successful application of the present model on both a quantum simulator and real chips of the IBM quantum platform. | [
"['Papri Saha']"
] |
null | null | 2407.06237 | null | null | http://arxiv.org/pdf/2407.06237v1 | 2024-07-07T19:41:38Z | 2024-07-07T19:41:38Z | Discounted Pseudocosts in MILP | In this article, we introduce the concept of discounted pseudocosts, inspired by discounted total reward in reinforcement learning, and explore their application in mixed-integer linear programming (MILP). Traditional pseudocosts estimate changes in the objective function due to variable bound changes during the branch-and-bound process. By integrating reinforcement learning concepts, we propose a novel approach incorporating a forward-looking perspective into pseudocost estimation. We present the motivation behind discounted pseudocosts and discuss how they represent the anticipated reward for branching after one level of exploration in the MILP problem space. Initial experiments on MIPLIB 2017 benchmark instances demonstrate the potential of discounted pseudocosts to enhance branching strategies and accelerate the solution process for challenging MILP problems. | [
"['Krunal Kishor Patel']"
] |
null | null | 2407.06245 | null | null | http://arxiv.org/pdf/2407.06245v2 | 2024-07-13T22:48:44Z | 2024-07-08T13:07:50Z | ORAN-Bench-13K: An Open Source Benchmark for Assessing LLMs in Open
Radio Access Networks | Large Language Models (LLMs) can revolutionize how we deploy and operate Open Radio Access Networks (O-RAN) by enhancing network analytics, anomaly detection, and code generation and significantly increasing the efficiency and reliability of a plethora of O-RAN tasks. In this paper, we present ORAN-Bench-13K, the first comprehensive benchmark designed to evaluate the performance of Large Language Models (LLMs) within the context of O-RAN. Our benchmark consists of 13,952 meticulously curated multiple-choice questions generated from 116 O-RAN specification documents. We leverage a novel three-stage LLM framework, and the questions are categorized into three distinct difficulties to cover a wide spectrum of ORAN-related knowledge. We thoroughly evaluate the performance of several state-of-the-art LLMs, including Gemini, Chat-GPT, and Mistral. Additionally, we propose ORANSight, a Retrieval-Augmented Generation (RAG)-based pipeline that demonstrates superior performance on ORAN-Bench-13K compared to other tested closed-source models. Our findings indicate that current popular LLM models are not proficient in O-RAN, highlighting the need for specialized models. We observed a noticeable performance improvement when incorporating the RAG-based ORANSight pipeline, with a Macro Accuracy of 0.784 and a Weighted Accuracy of 0.776, which was on average 21.55% and 22.59% better than the other tested LLMs. | [
"['Pranshav Gajjar' 'Vijay K. Shah']"
] |
null | null | 2407.06286 | null | null | http://arxiv.org/pdf/2407.06286v1 | 2024-07-08T18:02:18Z | 2024-07-08T18:02:18Z | Characterization of topological structures in different neural network
architectures | One of the most crucial tasks in the future will be to understand what is going on in neural networks, as they will become even more powerful and widely deployed. This work aims to use TDA methods to analyze neural representations. We develop methods for analyzing representations from different architectures and check how one should use them to obtain valid results. Our findings indicate that removing outliers does not have much impact on the results and that we should compare representations with the same number of elements. We applied these methods for ResNet, VGG19, and ViT architectures and found substantial differences along with some similarities. Additionally, we determined that models with similar architecture tend to have a similar topology of representations and models with a larger number of layers change their topology more smoothly. Furthermore, we found that the topology of pre-trained and finetuned models starts to differ in the middle and final layers while remaining quite similar in the initial layers. These findings demonstrate the efficacy of TDA in the analysis of neural network behavior. | [
"['Paweł Świder']"
] |
null | null | 2407.06295 | null | null | http://arxiv.org/pdf/2407.06295v1 | 2024-07-08T18:05:11Z | 2024-07-08T18:05:11Z | Engineering morphogenesis of cell clusters with differentiable
programming | Understanding the rules underlying organismal development is a major unsolved problem in biology. Each cell in a developing organism responds to signals in its local environment by dividing, excreting, consuming, or reorganizing, yet how these individual actions coordinate over a macroscopic number of cells to grow complex structures with exquisite functionality is unknown. Here we use recent advances in automatic differentiation to discover local interaction rules and genetic networks that yield emergent, systems-level characteristics in a model of development. We consider a growing tissue with cellular interactions are mediated by morphogen diffusion, differential cell adhesion and mechanical stress. Each cell has an internal genetic network that it uses to make decisions based on its local environment. We show that one can simultaneously learn parameters governing the cell interactions and the genetic network for complex developmental scenarios, including the symmetry breaking of an embryo from an initial cell, the creation of emergent chemical gradients,homogenization of growth via mechanical stress, programmed growth into a prespecified shape, and the ability to repair from damage. When combined with recent experimental advances measuring spatio-temporal dynamics and gene expression of cells in a growing tissue, the methodology outlined here offers a promising path to unravelling the cellular basis of development. | [
"['Ramya Deshpande' 'Francesco Mottes' 'Ariana-Dalia Vlad'\n 'Michael P. Brenner' 'Alma dal Co']"
] |
null | null | 2407.06298 | null | null | http://arxiv.org/pdf/2407.06298v1 | 2024-07-08T18:07:33Z | 2024-07-08T18:07:33Z | Multi-Label Plant Species Classification with Self-Supervised Vision
Transformers | We present a transfer learning approach using a self-supervised Vision Transformer (DINOv2) for the PlantCLEF 2024 competition, focusing on the multi-label plant species classification. Our method leverages both base and fine-tuned DINOv2 models to extract generalized feature embeddings. We train classifiers to predict multiple plant species within a single image using these rich embeddings. To address the computational challenges of the large-scale dataset, we employ Spark for distributed data processing, ensuring efficient memory management and processing across a cluster of workers. Our data processing pipeline transforms images into grids of tiles, classifying each tile, and aggregating these predictions into a consolidated set of probabilities. Our results demonstrate the efficacy of combining transfer learning with advanced data processing techniques for multi-label image classification tasks. Our code is available at https://github.com/dsgt-kaggle-clef/plantclef-2024. | [
"['Murilo Gustineli' 'Anthony Miyaguchi' 'Ian Stalter']"
] |
null | null | 2407.06303 | null | null | http://arxiv.org/pdf/2407.06303v1 | 2024-07-08T18:12:29Z | 2024-07-08T18:12:29Z | Unsupervised Fault Detection using SAM with a Moving Window Approach | Automated f ault detection and monitoring in engineering are critical but frequently difficult owing to the necessity for collecting and labeling large amounts of defective samples . We present an unsupervised method that uses the high end Segment Anything Model (SAM) and a moving window approach. SAM has gained recognition in AI image segmentation communities for its accuracy and versatility. However, its performance can be inconsistent when dealing with certain unexpected shapes , such as shadows and subtle surface irregularities. This limitation raise s concerns about its applicability for fault detection in real world scenarios We aim to overcome these challenges without requiring fine tun ing or labeled data. Our technique divides pictures into smaller windows, which are subsequently processed using SAM. This increases the accuracy of fault identification by focusing on localized details. We compute the sizes of the segmented sections and then us e a clustering technique to discover consistent fault areas while filtering out noise. To further improve the method's robustness , we propose adding the Exponentially Weighted Moving Average (EWMA) technique for continuous monitoring in industrial settings, which would improve the method's capacity to trace faults over time. We compare our method to various well established methods u sing a real case study where our model achieve s 0.96 accuracy compared to 0. 8 5 for the second best method. W e also compare our method us ing two open source datasets where our model attains a consistent 0. 86 accuracy across the datasets compared to 0.53 and 0.54 for second best model s. | [
"['Ahmed Maged' 'Herman Shen']"
] |
null | null | 2407.06310 | null | null | http://arxiv.org/pdf/2407.06310v1 | 2024-07-08T18:20:24Z | 2024-07-08T18:20:24Z | Homogeneous Speaker Features for On-the-Fly Dysarthric and Elderly
Speaker Adaptation | The application of data-intensive automatic speech recognition (ASR) technologies to dysarthric and elderly adult speech is confronted by their mismatch against healthy and nonaged voices, data scarcity and large speaker-level variability. To this end, this paper proposes two novel data-efficient methods to learn homogeneous dysarthric and elderly speaker-level features for rapid, on-the-fly test-time adaptation of DNN/TDNN and Conformer ASR models. These include: 1) speaker-level variance-regularized spectral basis embedding (VR-SBE) features that exploit a special regularization term to enforce homogeneity of speaker features in adaptation; and 2) feature-based learning hidden unit contributions (f-LHUC) transforms that are conditioned on VR-SBE features. Experiments are conducted on four tasks across two languages: the English UASpeech and TORGO dysarthric speech datasets, the English DementiaBank Pitt and Cantonese JCCOCC MoCA elderly speech corpora. The proposed on-the-fly speaker adaptation techniques consistently outperform baseline iVector and xVector adaptation by statistically significant word or character error rate reductions up to 5.32% absolute (18.57% relative) and batch-mode LHUC speaker adaptation by 2.24% absolute (9.20% relative), while operating with real-time factors speeding up to 33.6 times against xVectors during adaptation. The efficacy of the proposed adaptation techniques is demonstrated in a comparison against current ASR technologies including SSL pre-trained systems on UASpeech, where our best system produces a state-of-the-art WER of 23.33%. Analyses show VR-SBE features and f-LHUC transforms are insensitive to speaker-level data quantity in testtime adaptation. T-SNE visualization reveals they have stronger speaker-level homogeneity than baseline iVectors, xVectors and batch-mode LHUC transforms. | [
"['Mengzhe Geng' 'Xurong Xie' 'Jiajun Deng' 'Zengrui Jin' 'Guinan Li'\n 'Tianzi Wang' 'Shujie Hu' 'Zhaoqing Li' 'Helen Meng' 'Xunying Liu']"
] |
null | null | 2407.06312 | null | null | http://arxiv.org/pdf/2407.06312v1 | 2024-07-08T18:24:48Z | 2024-07-08T18:24:48Z | Limits and Powers of Koopman Learning | Dynamical systems provide a comprehensive way to study complex and changing behaviors across various sciences. Many modern systems are too complicated to analyze directly or we do not have access to models, driving significant interest in learning methods. Koopman operators have emerged as a dominant approach because they allow the study of nonlinear dynamics using linear techniques by solving an infinite-dimensional spectral problem. However, current algorithms face challenges such as lack of convergence, hindering practical progress. This paper addresses a fundamental open question: textit{When can we robustly learn the spectral properties of Koopman operators from trajectory data of dynamical systems, and when can we not?} Understanding these boundaries is crucial for analysis, applications, and designing algorithms. We establish a foundational approach that combines computational analysis and ergodic theory, revealing the first fundamental barriers -- universal for any algorithm -- associated with system geometry and complexity, regardless of data quality and quantity. For instance, we demonstrate well-behaved smooth dynamical systems on tori where non-trivial eigenfunctions of the Koopman operator cannot be determined by any sequence of (even randomized) algorithms, even with unlimited training data. Additionally, we identify when learning is possible and introduce optimal algorithms with verification that overcome issues in standard methods. These results pave the way for a sharp classification theory of data-driven dynamical systems based on how many limits are needed to solve a problem. These limits characterize all previous methods, presenting a unified view. Our framework systematically determines when and how Koopman spectral properties can be learned. | [
"['Matthew J. Colbrook' 'Igor Mezić' 'Alexei Stepanenko']"
] |
null | null | 2407.06315 | null | null | http://arxiv.org/pdf/2407.06315v2 | 2024-07-11T11:11:03Z | 2024-07-08T18:31:19Z | Shedding More Light on Robust Classifiers under the lens of Energy-based
Models | By reinterpreting a robust discriminative classifier as Energy-based Model (EBM), we offer a new take on the dynamics of adversarial training (AT). Our analysis of the energy landscape during AT reveals that untargeted attacks generate adversarial images much more in-distribution (lower energy) than the original data from the point of view of the model. Conversely, we observe the opposite for targeted attacks. On the ground of our thorough analysis, we present new theoretical and practical results that show how interpreting AT energy dynamics unlocks a better understanding: (1) AT dynamic is governed by three phases and robust overfitting occurs in the third phase with a drastic divergence between natural and adversarial energies (2) by rewriting the loss of TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) in terms of energies, we show that TRADES implicitly alleviates overfitting by means of aligning the natural energy with the adversarial one (3) we empirically show that all recent state-of-the-art robust classifiers are smoothing the energy landscape and we reconcile a variety of studies about understanding AT and weighting the loss function under the umbrella of EBMs. Motivated by rigorous evidence, we propose Weighted Energy Adversarial Training (WEAT), a novel sample weighting scheme that yields robust accuracy matching the state-of-the-art on multiple benchmarks such as CIFAR-10 and SVHN and going beyond in CIFAR-100 and Tiny-ImageNet. We further show that robust classifiers vary in the intensity and quality of their generative capabilities, and offer a simple method to push this capability, reaching a remarkable Inception Score (IS) and FID using a robust classifier without training for generative modeling. The code to reproduce our results is available at http://github.com/OmnAI-Lab/Robust-Classifiers-under-the-lens-of-EBM/ . | [
"['Mujtaba Hussain Mirza' 'Maria Rosaria Briglia' 'Senad Beadini'\n 'Iacopo Masi']"
] |
null | null | 2407.06321 | null | null | http://arxiv.org/pdf/2407.06321v1 | 2024-07-08T18:38:11Z | 2024-07-08T18:38:11Z | Open Problem: Tight Bounds for Kernelized Multi-Armed Bandits with
Bernoulli Rewards | We consider Kernelized Bandits (KBs) to optimize a function $f : mathcal{X} rightarrow [0,1]$ belonging to the Reproducing Kernel Hilbert Space (RKHS) $mathcal{H}_k$. Mainstream works on kernelized bandits focus on a subgaussian noise model in which observations of the form $f(mathbf{x}_t)+epsilon_t$, being $epsilon_t$ a subgaussian noise, are available (Chowdhury and Gopalan, 2017). Differently, we focus on the case in which we observe realizations $y_t sim text{Ber}(f(mathbf{x}_t))$ sampled from a Bernoulli distribution with parameter $f(mathbf{x}_t)$. While the Bernoulli model has been investigated successfully in multi-armed bandits (Garivier and Capp'e, 2011), logistic bandits (Faury et al., 2022), bandits in metric spaces (Magureanu et al., 2014), it remains an open question whether tight results can be obtained for KBs. This paper aims to draw the attention of the online learning community to this open problem. | [
"['Marco Mussi' 'Simone Drago' 'Alberto Maria Metelli']"
] |
null | null | 2407.06322 | null | null | http://arxiv.org/pdf/2407.06322v1 | 2024-07-08T18:38:52Z | 2024-07-08T18:38:52Z | MagMax: Leveraging Model Merging for Seamless Continual Learning | This paper introduces a continual learning approach named MagMax, which utilizes model merging to enable large pre-trained models to continuously learn from new data without forgetting previously acquired knowledge. Distinct from traditional continual learning methods that aim to reduce forgetting during task training, MagMax combines sequential fine-tuning with a maximum magnitude weight selection for effective knowledge integration across tasks. Our initial contribution is an extensive examination of model merging techniques, revealing that simple approaches like weight averaging and random weight selection surprisingly hold up well in various continual learning contexts. More importantly, we present MagMax, a novel model-merging strategy that enables continual learning of large pre-trained models for successive tasks. Our thorough evaluation demonstrates the superiority of MagMax in various scenarios, including class- and domain-incremental learning settings. | [
"['Daniel Marczak' 'Bartłomiej Twardowski' 'Tomasz Trzciński'\n 'Sebastian Cygert']"
] |
null | null | 2407.06324 | null | null | http://arxiv.org/pdf/2407.06324v1 | 2024-07-08T18:41:01Z | 2024-07-08T18:41:01Z | B'MOJO: Hybrid State Space Realizations of Foundation Models with
Eidetic and Fading Memory | We describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a-priori unknown bound while making efficient use of finite resources for inference. Current architectures use such resources to represent data either eidetically over a finite span ("context" in Transformers), or fading over an infinite span (in State Space Models, or SSMs). Recent hybrid architectures have combined eidetic and fading memory, but with limitations that do not allow the designer or the learning process to seamlessly modulate the two, nor to extend the eidetic memory span. We leverage ideas from Stochastic Realization Theory to develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within an elementary composable module. The overall architecture can be used to implement models that can access short-term eidetic memory "in-context," permanent structural memory "in-weights," fading memory "in-state," and long-term eidetic memory "in-storage" by natively incorporating retrieval from an asynchronously updated memory. We show that Transformers, existing SSMs such as Mamba, and hybrid architectures such as Jamba are special cases of B'MOJO and describe a basic implementation, to be open sourced, that can be stacked and scaled efficiently in hardware. We test B'MOJO on transductive inference tasks, such as associative recall, where it outperforms existing SSMs and Hybrid models; as a baseline, we test ordinary language modeling where B'MOJO achieves perplexity comparable to similarly-sized Transformers and SSMs up to 1.4B parameters, while being up to 10% faster to train. Finally, we show that B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens, four-fold the length of the longest sequences seen during training. | [
"['Luca Zancato' 'Arjun Seshadri' 'Yonatan Dukler' 'Aditya Golatkar'\n 'Yantao Shen' 'Benjamin Bowman' 'Matthew Trager' 'Alessandro Achille'\n 'Stefano Soatto']"
] |
null | null | 2407.06325 | null | null | http://arxiv.org/pdf/2407.06325v1 | 2024-07-08T18:42:50Z | 2024-07-08T18:42:50Z | CONGO: Compressive Online Gradient Optimization with Application to
Microservices Management | We address the challenge of online convex optimization where the objective function's gradient exhibits sparsity, indicating that only a small number of dimensions possess non-zero gradients. Our aim is to leverage this sparsity to obtain useful estimates of the objective function's gradient even when the only information available is a limited number of function samples. Our motivation stems from distributed queueing systems like microservices-based applications, characterized by request-response workloads. Here, each request type proceeds through a sequence of microservices to produce a response, and the resource allocation across the collection of microservices is controlled to balance end-to-end latency with resource costs. While the number of microservices is substantial, the latency function primarily reacts to resource changes in a few, rendering the gradient sparse. Our proposed method, CONGO (Compressive Online Gradient Optimization), combines simultaneous perturbation with compressive sensing to estimate gradients. We establish analytical bounds on the requisite number of compressive sensing samples per iteration to maintain bounded bias of gradient estimates, ensuring sub-linear regret. By exploiting sparsity, we reduce the samples required per iteration to match the gradient's sparsity, rather than the problem's original dimensionality. Numerical experiments and real-world microservices benchmarks demonstrate CONGO's superiority over multiple stochastic gradient descent approaches, as it quickly converges to performance comparable to policies pre-trained with workload awareness. | [
"['Jeremy Carleton' 'Prathik Vijaykumar' 'Divyanshu Saxena'\n 'Dheeraj Narasimha' 'Srinivas Shakkottai' 'Aditya Akella']"
] |
null | null | 2407.06329 | null | null | http://arxiv.org/pdf/2407.06329v1 | 2024-07-08T18:47:59Z | 2024-07-08T18:47:59Z | Solving Multi-Model MDPs by Coordinate Ascent and Dynamic Programming | Multi-model Markov decision process (MMDP) is a promising framework for computing policies that are robust to parameter uncertainty in MDPs. MMDPs aim to find a policy that maximizes the expected return over a distribution of MDP models. Because MMDPs are NP-hard to solve, most methods resort to approximations. In this paper, we derive the policy gradient of MMDPs and propose CADP, which combines a coordinate ascent method and a dynamic programming algorithm for solving MMDPs. The main innovation of CADP compared with earlier algorithms is to take the coordinate ascent perspective to adjust model weights iteratively to guarantee monotone policy improvements to a local maximum. A theoretical analysis of CADP proves that it never performs worse than previous dynamic programming algorithms like WSU. Our numerical results indicate that CADP substantially outperforms existing methods on several benchmark problems. | [
"['Xihong Su' 'Marek Petrik']"
] |
null | null | 2407.06333 | null | null | http://arxiv.org/pdf/2407.06333v2 | 2024-07-10T09:43:58Z | 2024-07-08T18:55:57Z | A third-order finite difference weighted essentially non-oscillatory
scheme with shallow neural network | In this paper, we introduce the finite difference weighted essentially non-oscillatory (WENO) scheme based on the neural network for hyperbolic conservation laws. We employ the supervised learning and design two loss functions, one with the mean squared error and the other with the mean squared logarithmic error, where the WENO3-JS weights are computed as the labels. Each loss function consists of two components where the first component compares the difference between the weights from the neural network and WENO3-JS weights, while the second component matches the output weights of the neural network and the linear weights. The former of the loss function enforces the neural network to follow the WENO properties, implying that there is no need for the post-processing layer. Additionally the latter leads to better performance around discontinuities. As a neural network structure, we choose the shallow neural network (SNN) for computational efficiency with the Delta layer consisting of the normalized undivided differences. These constructed WENO3-SNN schemes show the outperformed results in one-dimensional examples and improved behavior in two-dimensional examples, compared with the simulations from WENO3-JS and WENO3-Z. | [
"['Kwanghyuk Park' 'Xinjuan Chen' 'Dongjin Lee' 'Jiaxi Gu' 'Jae-Hun Jung']"
] |
null | null | 2407.06343 | null | null | http://arxiv.org/pdf/2407.06343v1 | 2024-07-08T19:24:21Z | 2024-07-08T19:24:21Z | Novel Models for High-Dimensional Imaging: High-Resolution fMRI
Acceleration and Quantification | The goals of functional Magnetic Resonance Imaging (fMRI) include high spatial and temporal resolutions with a high signal-to-noise ratio (SNR). To simultaneously improve spatial and temporal resolutions and maintain the high SNR advantage of OSSI, we present novel pipelines for fast acquisition and high-resolution fMRI reconstruction and physics parameter quantification. We propose a patch-tensor low-rank model, a physics-based manifold model, and a voxel-wise attention network. With novel models for acquisition and reconstruction, we demonstrate that we can improve SNR and resolution simultaneously without compromising scan time. All the proposed models outperform other comparison approaches with higher resolution and more functional information. | [
"['Shouchang Guo']"
] |
null | null | 2407.06346 | null | null | http://arxiv.org/abs/2407.06346v1 | 2024-07-08T19:34:39Z | 2024-07-08T19:34:39Z | High-Dimensional Distributed Sparse Classification with Scalable
Communication-Efficient Global Updates | As the size of datasets used in statistical learning continues to grow, distributed training of models has attracted increasing attention. These methods partition the data and exploit parallelism to reduce memory and runtime, but suffer increasingly from communication costs as the data size or the number of iterations grows. Recent work on linear models has shown that a surrogate likelihood can be optimized locally to iteratively improve on an initial solution in a communication-efficient manner. However, existing versions of these methods experience multiple shortcomings as the data size becomes massive, including diverging updates and efficiently handling sparsity. In this work we develop solutions to these problems which enable us to learn a communication-efficient distributed logistic regression model even beyond millions of features. In our experiments we demonstrate a large improvement in accuracy over distributed algorithms with only a few distributed update steps needed, and similar or faster runtimes. Our code is available at url{https://github.com/FutureComputing4AI/ProxCSL}. | [
"['Fred Lu' 'Ryan R. Curtin' 'Edward Raff' 'Francis Ferraro' 'James Holt']"
] |
null | null | 2407.06372 | null | null | http://arxiv.org/pdf/2407.06372v1 | 2024-07-08T20:32:19Z | 2024-07-08T20:32:19Z | Non-Robust Features are Not Always Useful in One-Class Classification | The robustness of machine learning models has been questioned by the existence of adversarial examples. We examine the threat of adversarial examples in practical applications that require lightweight models for one-class classification. Building on Ilyas et al. (2019), we investigate the vulnerability of lightweight one-class classifiers to adversarial attacks and possible reasons for it. Our results show that lightweight one-class classifiers learn features that are not robust (e.g. texture) under stronger attacks. However, unlike in multi-class classification (Ilyas et al., 2019), these non-robust features are not always useful for the one-class task, suggesting that learning these unpredictive and non-robust features is an unwanted consequence of training. | [
"['Matthew Lau' 'Haoran Wang' 'Alec Helbling' 'Matthew Hul' 'ShengYun Peng'\n 'Martin Andreoni' 'Willian T. Lunardi' 'Wenke Lee']"
] |
null | null | 2407.06390 | null | null | http://arxiv.org/pdf/2407.06390v1 | 2024-07-08T21:03:15Z | 2024-07-08T21:03:15Z | JANET: Joint Adaptive predictioN-region Estimation for Time-series | Conformal prediction provides machine learning models with prediction sets that offer theoretical guarantees, but the underlying assumption of exchangeability limits its applicability to time series data. Furthermore, existing approaches struggle to handle multi-step ahead prediction tasks, where uncertainty estimates across multiple future time points are crucial. We propose JANET (Joint Adaptive predictioN-region Estimation for Time-series), a novel framework for constructing conformal prediction regions that are valid for both univariate and multivariate time series. JANET generalises the inductive conformal framework and efficiently produces joint prediction regions with controlled K-familywise error rates, enabling flexible adaptation to specific application needs. Our empirical evaluation demonstrates JANET's superior performance in multi-step prediction tasks across diverse time series datasets, highlighting its potential for reliable and interpretable uncertainty quantification in sequential data. | [
"['Eshant English' 'Eliot Wong-Toi' 'Matteo Fontana' 'Stephan Mandt'\n 'Padhraic Smyth' 'Christoph Lippert']"
] |
null | null | 2407.06411 | null | null | http://arxiv.org/pdf/2407.06411v1 | 2024-07-08T21:40:23Z | 2024-07-08T21:40:23Z | If You Don't Understand It, Don't Use It: Eliminating Trojans with
Filters Between Layers | Large language models (LLMs) sometimes exhibit dangerous unintended behaviors. Finding and fixing these is challenging because the attack surface is massive -- it is not tractable to exhaustively search for all possible inputs that may elicit such behavior. One specific and particularly challenging case is that if data-poisoning-injected trojans, since there is no way to know what they are to search for them. To our knowledge, there is no generally applicable method to unlearn unknown trojans injected during pre-training. This work seeks to provide a general purpose recipe (filters) and a specific implementation (LoRA) filters that work in practice on small to medium sized models. The focus is primarily empirical, though some perplexing behavior opens the door to the fundamental question of how LLMs store and process information. Not unexpectedly, we find that our filters work best on the residual stream and the latest layers. | [
"['Adriano Hernandez']"
] |
null | null | 2407.06418 | null | null | http://arxiv.org/pdf/2407.06418v1 | 2024-07-08T21:57:28Z | 2024-07-08T21:57:28Z | System stabilization with policy optimization on unstable latent
manifolds | Stability is a basic requirement when studying the behavior of dynamical systems. However, stabilizing dynamical systems via reinforcement learning is challenging because only little data can be collected over short time horizons before instabilities are triggered and data become meaningless. This work introduces a reinforcement learning approach that is formulated over latent manifolds of unstable dynamics so that stabilizing policies can be trained from few data samples. The unstable manifolds are minimal in the sense that they contain the lowest dimensional dynamics that are necessary for learning policies that guarantee stabilization. This is in stark contrast to generic latent manifolds that aim to approximate all -- stable and unstable -- system dynamics and thus are higher dimensional and often require higher amounts of data. Experiments demonstrate that the proposed approach stabilizes even complex physical systems from few data samples for which other methods that operate either directly in the system state space or on generic latent manifolds fail. | [
"['Steffen W. R. Werner' 'Benjamin Peherstorfer']"
] |
null | null | 2407.06438 | null | null | http://arxiv.org/pdf/2407.06438v1 | 2024-07-08T22:40:15Z | 2024-07-08T22:40:15Z | A Single Transformer for Scalable Vision-Language Modeling | We present SOLO, a single transformer for Scalable visiOn-Language mOdeling. Current large vision-language models (LVLMs) such as LLaVA mostly employ heterogeneous architectures that connect pre-trained visual encoders with large language models (LLMs) to facilitate visual recognition and complex reasoning. Although achieving remarkable performance with relatively lightweight training, we identify four primary scalability limitations: (1) The visual capacity is constrained by pre-trained visual encoders, which are typically an order of magnitude smaller than LLMs. (2) The heterogeneous architecture complicates the use of established hardware and software infrastructure. (3) Study of scaling laws on such architecture must consider three separate components - visual encoder, connector, and LLMs, which complicates the analysis. (4) The use of existing visual encoders typically requires following a pre-defined specification of image inputs pre-processing, for example, by reshaping inputs to fixed-resolution square images, which presents difficulties in processing and training on high-resolution images or those with unusual aspect ratio. A unified single Transformer architecture, like SOLO, effectively addresses these scalability concerns in LVLMs; however, its limited adoption in the modern context likely stems from the absence of reliable training recipes that balance both modalities and ensure stable training for billion-scale models. In this paper, we introduce the first open-source training recipe for developing SOLO, an open-source 7B LVLM using moderate academic resources. The training recipe involves initializing from LLMs, sequential pre-training on ImageNet and web-scale data, and instruction fine-tuning on our curated high-quality datasets. On extensive evaluation, SOLO demonstrates performance comparable to LLaVA-v1.5-7B, particularly excelling in visual mathematical reasoning. | [
"['Yangyi Chen' 'Xingyao Wang' 'Hao Peng' 'Heng Ji']"
] |
null | null | 2407.06447 | null | null | http://arxiv.org/pdf/2407.06447v1 | 2024-07-08T23:11:47Z | 2024-07-08T23:11:47Z | Geospatial Trajectory Generation via Efficient Abduction: Deployment for
Independent Testing | The ability to generate artificial human movement patterns while meeting location and time constraints is an important problem in the security community, particularly as it enables the study of the analog problem of detecting such patterns while maintaining privacy. We frame this problem as an instance of abduction guided by a novel parsimony function represented as an aggregate truth value over an annotated logic program. This approach has the added benefit of affording explainability to an analyst user. By showing that any subset of such a program can provide a lower bound on this parsimony requirement, we are able to abduce movement trajectories efficiently through an informed (i.e., A*) search. We describe how our implementation was enhanced with the application of multiple techniques in order to be scaled and integrated with a cloud-based software stack that included bottom-up rule learning, geolocated knowledge graph retrieval/management, and interfaces with government systems for independently conducted government-run tests for which we provide results. We also report on our own experiments showing that we not only provide exact results but also scale to very large scenarios and provide realistic agent trajectories that can go undetected by machine learning anomaly detectors. | [
"['Divyagna Bavikadi' 'Dyuman Aditya' 'Devendra Parkar' 'Paulo Shakarian'\n 'Graham Mueller' 'Chad Parvis' 'Gerardo I. Simari']"
] |
null | null | 2407.06459 | null | null | http://arxiv.org/pdf/2407.06459v1 | 2024-07-08T23:47:13Z | 2024-07-08T23:47:13Z | How Much Progress Did I Make? An Unexplored Human Feedback Signal for
Teaching Robots | Enhancing the expressiveness of human teaching is vital for both improving robots' learning from humans and the human-teaching-robot experience. In this work, we characterize and test a little-used teaching signal: textit{progress}, designed to represent the completion percentage of a task. We conducted two online studies with 76 crowd-sourced participants and one public space study with 40 non-expert participants to validate the capability of this progress signal. We find that progress indicates whether the task is successfully performed, reflects the degree of task completion, identifies unproductive but harmless behaviors, and is likely to be more consistent across participants. Furthermore, our results show that giving progress does not require extra workload and time. An additional contribution of our work is a dataset of 40 non-expert demonstrations from the public space study through an ice cream topping-adding task, which we observe to be multi-policy and sub-optimal, with sub-optimality not only from teleoperation errors but also from exploratory actions and attempts. The dataset is available at url{https://github.com/TeachingwithProgress/Non-Expert_Demonstrations}. | [
"['Hang Yu' 'Qidi Fang' 'Shijie Fang' 'Reuben M. Aronson'\n 'Elaine Schaertl Short']"
] |
null | null | 2407.06481 | null | null | http://arxiv.org/pdf/2407.06481v1 | 2024-07-09T01:08:21Z | 2024-07-09T01:08:21Z | Sinkhorn algorithms and linear programming solvers for optimal partial
transport problems | In this note, we generalize the classical optimal partial transport (OPT) problem by modifying the mass destruction/creation term to function-based terms, introducing what we term ``generalized optimal partial transport'' problems. We then discuss the dual formulation of these problems and the associated Sinkhorn solver. Finally, we explore how these new OPT problems relate to classical optimal transport (OT) problems and introduce a linear programming solver tailored for these generalized scenarios. | [
"['Yikun Bai']"
] |
null | null | 2407.06483 | null | null | http://arxiv.org/pdf/2407.06483v1 | 2024-07-09T01:17:44Z | 2024-07-09T01:17:44Z | Composable Interventions for Language Models | Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining. But despite a flood of new methods, different types of interventions are largely developing independently. In practice, multiple interventions must be applied sequentially to the same model, yet we lack standardized ways to study how interventions interact. We fill this gap by introducing composable interventions, a framework to study the effects of using multiple interventions on the same language models, featuring new metrics and a unified codebase. Using our framework, we conduct extensive experiments and compose popular methods from three emerging intervention categories -- Knowledge Editing, Model Compression, and Machine Unlearning. Our results from 310 different compositions uncover meaningful interactions: compression hinders editing and unlearning, composing interventions hinges on their order of application, and popular general-purpose metrics are inadequate for assessing composability. Taken together, our findings showcase clear gaps in composability, suggesting a need for new multi-objective interventions. All of our code is public: https://github.com/hartvigsen-group/composable-interventions. | [
"['Arinbjorn Kolbeinsson' \"Kyle O'Brien\" 'Tianjin Huang' 'Shanghua Gao'\n 'Shiwei Liu' 'Jonathan Richard Schwarz' 'Anurag Vaidya' 'Faisal Mahmood'\n 'Marinka Zitnik' 'Tianlong Chen' 'Thomas Hartvigsen']"
] |
null | null | 2407.06485 | null | null | http://arxiv.org/abs/2407.06485v1 | 2024-07-09T01:20:37Z | 2024-07-09T01:20:37Z | CrowdTransfer: Enabling Crowd Knowledge Transfer in AIoT Community | Artificial Intelligence of Things (AIoT) is an emerging frontier based on the deep fusion of Internet of Things (IoT) and Artificial Intelligence (AI) technologies. Although advanced deep learning techniques enhance the efficient data processing and intelligent analysis of complex IoT data, they still suffer from notable challenges when deployed to practical AIoT applications, such as constrained resources, and diverse task requirements. Knowledge transfer is an effective method to enhance learning performance by avoiding the exorbitant costs associated with data recollection and model retraining. Notably, although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances of various knowledge transfer techniques for AIoT field. This survey endeavors to introduce a new concept of knowledge transfer, referred to as Crowd Knowledge Transfer (CrowdTransfer), which aims to transfer prior knowledge learned from a crowd of agents to reduce the training cost and as well as improve the performance of the model in real-world complicated scenarios. Particularly, we present four transfer modes from the perspective of crowd intelligence, including derivation, sharing, evolution and fusion modes. Building upon conventional transfer learning methods, we further delve into advanced crowd knowledge transfer models from three perspectives for various AIoT applications. Furthermore, we explore some applications of AIoT areas, such as human activity recognition, urban computing, multi-robot system, and smart factory. Finally, we discuss the open issues and outline future research directions of knowledge transfer in AIoT community. | [
"['Yan Liu' 'Bin Guo' 'Nuo Li' 'Yasan Ding' 'Zhouyangzi Zhang' 'Zhiwen Yu']"
] |
null | null | 2407.06488 | null | null | http://arxiv.org/pdf/2407.06488v1 | 2024-07-09T01:27:35Z | 2024-07-09T01:27:35Z | Towards Understanding Multi-Task Learning (Generalization) of LLMs via
Detecting and Exploring Task-Specific Neurons | While large language models (LLMs) have demonstrated superior multi-task capabilities, understanding the learning mechanisms behind this is still a challenging problem. In this paper, we attempt to understand such mechanisms from the perspective of neurons. Specifically, we detect task-sensitive neurons in LLMs via gradient attribution on task-specific data. Through extensive deactivation and fine-tuning experiments, we demonstrate that the detected neurons are highly correlated with the given task, which we term as task-specific neurons. With these identified task-specific neurons, we delve into two common problems in multi-task learning and continuous learning: Generalization and Catastrophic Forgetting. We find that the overlap of task-specific neurons is strongly associated with generalization and specialization across tasks. Interestingly, at certain layers of LLMs, there is a high similarity in the parameters of different task-specific neurons, and such similarity is highly correlated with the generalization performance. Inspired by these findings, we propose a neuron-level continuous fine-tuning method that only fine-tunes the current task-specific neurons during continuous learning, and extensive experiments demonstrate the effectiveness of the proposed method. Our study provides insights into the interpretability of LLMs in multi-task learning. | [
"['Yongqi Leng' 'Deyi Xiong']"
] |
null | null | 2407.06494 | null | null | http://arxiv.org/pdf/2407.06494v1 | 2024-07-09T01:56:23Z | 2024-07-09T01:56:23Z | A Generative Approach to Control Complex Physical Systems | Controlling the evolution of complex physical systems is a fundamental task across science and engineering. Classical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of method to address the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and identify near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method in 1D Burgers' equation and 2D jellyfish movement control in a fluid environment. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern observed in the jellyfish, aligning with established findings in the field of fluid dynamics. | [
"['Long Wei' 'Peiyan Hu' 'Ruiqi Feng' 'Haodong Feng' 'Yixuan Du'\n 'Tao Zhang' 'Rui Wang' 'Yue Wang' 'Zhi-Ming Ma' 'Tailin Wu']"
] |
null | null | 2407.06496 | null | null | http://arxiv.org/pdf/2407.06496v1 | 2024-07-09T01:58:19Z | 2024-07-09T01:58:19Z | It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With
Non-Convex Loss | Differentially Private Stochastic Gradient Descent (DP-SGD) is a popular iterative algorithm used to train machine learning models while formally guaranteeing the privacy of users. However the privacy analysis of DP-SGD makes the unrealistic assumption that all intermediate iterates (aka internal state) of the algorithm are released since in practice, only the final trained model, i.e., the final iterate of the algorithm is released. In this hidden state setting, prior work has provided tighter analyses, albeit only when the loss function is constrained, e.g., strongly convex and smooth or linear. On the other hand, the privacy leakage observed empirically from hidden state DP-SGD, even when using non-convex loss functions suggest that there is in fact a gap between the theoretical privacy analysis and the privacy guarantees achieved in practice. Therefore, it remains an open question whether privacy amplification for DP-SGD is possible in the hidden state setting for general loss functions. Unfortunately, this work answers the aforementioned research question negatively. By carefully constructing a loss function for DP-SGD, we show that for specific loss functions, the final iterate of DP-SGD alone leaks as much information as the sequence of all iterates combined. Furthermore, we empirically verify this result by evaluating the privacy leakage from the final iterate of DP-SGD with our loss function and show that this matches the theoretical upper bound guaranteed by DP exactly. Therefore, we show that the current privacy analysis fo DP-SGD is tight for general loss functions and conclude that no privacy amplification is possible for DP-SGD in general for all (possibly non-convex) loss functions. | [
"['Meenatchi Sundaram Muthu Selva Annamalai']"
] |
null | null | 2407.06503 | null | null | http://arxiv.org/pdf/2407.06503v1 | 2024-07-09T02:11:12Z | 2024-07-09T02:11:12Z | Preference-Guided Reinforcement Learning for Efficient Exploration | In this paper, we investigate preference-based reinforcement learning (PbRL) that allows reinforcement learning (RL) agents to learn from human feedback. This is particularly valuable when defining a fine-grain reward function is not feasible. However, this approach is inefficient and impractical for promoting deep exploration in hard-exploration tasks with long horizons and sparse rewards. To tackle this issue, we introduce LOPE: Learning Online with trajectory Preference guidancE, an end-to-end preference-guided RL framework that enhances exploration efficiency in hard-exploration tasks. Our intuition is that LOPE directly adjusts the focus of online exploration by considering human feedback as guidance, avoiding learning a separate reward model from preferences. Specifically, LOPE includes a two-step sequential policy optimization process consisting of trust-region-based policy improvement and preference guidance steps. We reformulate preference guidance as a novel trajectory-wise state marginal matching problem that minimizes the maximum mean discrepancy distance between the preferred trajectories and the learned policy. Furthermore, we provide a theoretical analysis to characterize the performance improvement bound and evaluate the LOPE's effectiveness. When assessed in various challenging hard-exploration environments, LOPE outperforms several state-of-the-art methods regarding convergence rate and overall performance. The code used in this study is available at url{https://github.com/buaawgj/LOPE}. | [
"['Guojian Wang' 'Faguo Wu' 'Xiao Zhang' 'Tianyuan Chen' 'Xuyang Chen'\n 'Lin Zhao']"
] |
null | null | 2407.06507 | null | null | http://arxiv.org/pdf/2407.06507v1 | 2024-07-09T02:27:52Z | 2024-07-09T02:27:52Z | Economic span selection of bridge based on deep reinforcement learning | Deep Q-network algorithm is used to select economic span of bridge. Selection of bridge span has a significant impact on the total cost of bridge, and a reasonable selection of span can reduce engineering cost. Economic span of bridge is theoretically analyzed, and the theoretical solution formula of economic span is deduced. Construction process of bridge simulation environment is described in detail, including observation space, action space and reward function of the environment. Agent is constructed, convolutional neural network is used to approximate Q function,{epsilon} greedy policy is used for action selection, and experience replay is used for training. The test verifies that the agent can successfully learn optimal policy and realize economic span selection of bridge. This study provides a potential decision-making tool for bridge design. | [
"['Leye Zhang' 'Xiangxiang Tian' 'Chengli Zhang' 'Hongjun Zhang']"
] |
null | null | 2407.06518 | null | null | http://arxiv.org/pdf/2407.06518v1 | 2024-07-09T03:14:11Z | 2024-07-09T03:14:11Z | Graph Neural Networks and Deep Reinforcement Learning Based Resource
Allocation for V2X Communications | In the rapidly evolving landscape of Internet of Vehicles (IoV) technology, Cellular Vehicle-to-Everything (C-V2X) communication has attracted much attention due to its superior performance in coverage, latency, and throughput. Resource allocation within C-V2X is crucial for ensuring the transmission of safety information and meeting the stringent requirements for ultra-low latency and high reliability in Vehicle-to-Vehicle (V2V) communication. This paper proposes a method that integrates Graph Neural Networks (GNN) with Deep Reinforcement Learning (DRL) to address this challenge. By constructing a dynamic graph with communication links as nodes and employing the Graph Sample and Aggregation (GraphSAGE) model to adapt to changes in graph structure, the model aims to ensure a high success rate for V2V communication while minimizing interference on Vehicle-to-Infrastructure (V2I) links, thereby ensuring the successful transmission of V2V link information and maintaining high transmission rates for V2I links. The proposed method retains the global feature learning capabilities of GNN and supports distributed network deployment, allowing vehicles to extract low-dimensional features that include structural information from the graph network based on local observations and to make independent resource allocation decisions. Simulation results indicate that the introduction of GNN, with a modest increase in computational load, effectively enhances the decision-making quality of agents, demonstrating superiority to other methods. This study not only provides a theoretically efficient resource allocation strategy for V2V and V2I communications but also paves a new technical path for resource management in practical IoV environments. | [
"['Maoxin Ji' 'Qiong Wu' 'Pingyi Fan' 'Nan Cheng' 'Wen Chen'\n 'Jiangzhou Wang' 'Khaled B. Letaief']"
] |
null | null | 2407.06529 | null | null | http://arxiv.org/pdf/2407.06529v1 | 2024-07-09T03:59:06Z | 2024-07-09T03:59:06Z | Advanced Financial Fraud Detection Using GNN-CL Model | The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection by synergistically combining the advantages of graph neural networks (gnn), convolutional neural networks (cnn) and long short-term memory (LSTM) networks. This convergence enables multifaceted analysis of complex transaction patterns, improving detection accuracy and resilience against complex fraudulent activities. A key novelty of this paper is the use of multilayer perceptrons (MLPS) to estimate node similarity, effectively filtering out neighborhood noise that can lead to false positives. This intelligent purification mechanism ensures that only the most relevant information is considered, thereby improving the model's understanding of the network structure. Feature weakening often plagues graph-based models due to the dilution of key signals. In order to further address the challenge of feature weakening, GNN-CL adopts reinforcement learning strategies. By dynamically adjusting the weights assigned to central nodes, it reinforces the importance of these influential entities to retain important clues of fraud even in less informative data. Experimental evaluations on Yelp datasets show that the results highlight the superior performance of GNN-CL compared to existing methods. | [
"['Yu Cheng' 'Junjie Guo' 'Shiqing Long' 'You Wu' 'Mengfang Sun'\n 'Rong Zhang']"
] |
null | null | 2407.06533 | null | null | http://arxiv.org/pdf/2407.06533v1 | 2024-07-09T04:07:57Z | 2024-07-09T04:07:57Z | LETS-C: Leveraging Language Embedding for Time Series Classification | Recent advancements in language modeling have shown promising results when applied to time series data. In particular, fine-tuning pre-trained large language models (LLMs) for time series classification tasks has achieved state-of-the-art (SOTA) performance on standard benchmarks. However, these LLM-based models have a significant drawback due to the large model size, with the number of trainable parameters in the millions. In this paper, we propose an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning LLMs, we utilize a language embedding model to embed time series and then pair the embeddings with a simple classification head composed of convolutional neural networks (CNN) and multilayer perceptron (MLP). We conducted extensive experiments on well-established time series classification benchmark datasets. We demonstrated LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using only 14.5% of the trainable parameters on average compared to the SOTA model. Our findings suggest that leveraging language encoders to embed time series data, combined with a simple yet effective classification head, offers a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture. | [
"['Rachneet Kaur' 'Zhen Zeng' 'Tucker Balch' 'Manuela Veloso']"
] |
null | null | 2407.06543 | null | null | http://arxiv.org/abs/2407.06543v1 | 2024-07-09T04:38:44Z | 2024-07-09T04:38:44Z | DriftGAN: Using historical data for Unsupervised Recurring Drift
Detection | In real-world applications, input data distributions are rarely static over a period of time, a phenomenon known as concept drift. Such concept drifts degrade the model's prediction performance, and therefore we require methods to overcome these issues. The initial step is to identify concept drifts and have a training method in place to recover the model's performance. Most concept drift detection methods work on detecting concept drifts and signalling the requirement to retrain the model. However, in real-world cases, there could be concept drifts that recur over a period of time. In this paper, we present an unsupervised method based on Generative Adversarial Networks(GAN) to detect concept drifts and identify whether a specific concept drift occurred in the past. Our method reduces the time and data the model requires to get up to speed for recurring drifts. Our key results indicate that our proposed model can outperform the current state-of-the-art models in most datasets. We also test our method on a real-world use case from astrophysics, where we detect the bow shock and magnetopause crossings with better results than the existing methods in the domain. | [
"['Christofer Fellicious' 'Sahib Julka' 'Lorenz Wendlinger'\n 'Michael Granitzer']"
] |
null | null | 2407.06544 | null | null | http://arxiv.org/pdf/2407.06544v1 | 2024-07-09T04:51:22Z | 2024-07-09T04:51:22Z | Multiple Instance Verification | We explore multiple-instance verification, a problem setting where a query instance is verified against a bag of target instances with heterogeneous, unknown relevancy. We show that naive adaptations of attention-based multiple instance learning (MIL) methods and standard verification methods like Siamese neural networks are unsuitable for this setting: directly combining state-of-the-art (SOTA) MIL methods and Siamese networks is shown to be no better, and sometimes significantly worse, than a simple baseline model. Postulating that this may be caused by the failure of the representation of the target bag to incorporate the query instance, we introduce a new pooling approach named ``cross-attention pooling'' (CAP). Under the CAP framework, we propose two novel attention functions to address the challenge of distinguishing between highly similar instances in a target bag. Through empirical studies on three different verification tasks, we demonstrate that CAP outperforms adaptations of SOTA MIL methods and the baseline by substantial margins, in terms of both classification accuracy and quality of the explanations provided for the classifications. Ablation studies confirm the superior ability of the new attention functions to identify key instances. | [
"['Xin Xu' 'Eibe Frank' 'Geoffrey Holmes']"
] |
null | null | 2407.06549 | null | null | http://arxiv.org/pdf/2407.06549v1 | 2024-07-09T05:13:45Z | 2024-07-09T05:13:45Z | AutoTask: Task Aware Multi-Faceted Single Model for Multi-Task Ads
Relevance | Ads relevance models are crucial in determining the relevance between user search queries and ad offers, often framed as a classification problem. The complexity of modeling increases significantly with multiple ad types and varying scenarios that exhibit both similarities and differences. In this work, we introduce a novel multi-faceted attention model that performs task aware feature combination and cross task interaction modeling. Our technique formulates the feature combination problem as "language" modeling with auto-regressive attentions across both feature and task dimensions. Specifically, we introduce a new dimension of task ID encoding for task representations, thereby enabling precise relevance modeling across diverse ad scenarios with substantial improvement in generality capability for unseen tasks. We demonstrate that our model not only effectively handles the increased computational and maintenance demands as scenarios proliferate, but also outperforms generalized DNN models and even task-specific models across a spectrum of ad applications using a single unified model. | [
"['Shouchang Guo' 'Sonam Damani' 'Keng-hao Chang']"
] |
null | null | 2407.06608 | null | null | http://arxiv.org/pdf/2407.06608v1 | 2024-07-09T07:22:48Z | 2024-07-09T07:22:48Z | Iteratively Refined Image Reconstruction with Learned Attentive
Regularizers | We propose a regularization scheme for image reconstruction that leverages the power of deep learning while hinging on classic sparsity-promoting models. Many deep-learning-based models are hard to interpret and cumbersome to analyze theoretically. In contrast, our scheme is interpretable because it corresponds to the minimization of a series of convex problems. For each problem in the series, a mask is generated based on the previous solution to refine the regularization strength spatially. In this way, the model becomes progressively attentive to the image structure. For the underlying update operator, we prove the existence of a fixed point. As a special case, we investigate a mask generator for which the fixed-point iterations converge to a critical point of an explicit energy functional. In our experiments, we match the performance of state-of-the-art learned variational models for the solution of inverse problems. Additionally, we offer a promising balance between interpretability, theoretical guarantees, reliability, and performance. | [
"['Mehrsa Pourya' 'Sebastian Neumayer' 'Michael Unser']"
] |
null | null | 2407.06612 | null | null | http://arxiv.org/pdf/2407.06612v1 | 2024-07-09T07:36:18Z | 2024-07-09T07:36:18Z | AI-based Automatic Segmentation of Prostate on Multi-modality Images: A
Review | Prostate cancer represents a major threat to health. Early detection is vital in reducing the mortality rate among prostate cancer patients. One approach involves using multi-modality (CT, MRI, US, etc.) computer-aided diagnosis (CAD) systems for the prostate region. However, prostate segmentation is challenging due to imperfections in the images and the prostate's complex tissue structure. The advent of precision medicine and a significant increase in clinical capacity have spurred the need for various data-driven tasks in the field of medical imaging. Recently, numerous machine learning and data mining tools have been integrated into various medical areas, including image segmentation. This article proposes a new classification method that differentiates supervision types, either in number or kind, during the training phase. Subsequently, we conducted a survey on artificial intelligence (AI)-based automatic prostate segmentation methods, examining the advantages and limitations of each. Additionally, we introduce variants of evaluation metrics for the verification and performance assessment of the segmentation method and summarize the current challenges. Finally, future research directions and development trends are discussed, reflecting the outcomes of our literature survey, suggesting high-precision detection and treatment of prostate cancer as a promising avenue. | [
"['Rui Jin' 'Derun Li' 'Dehui Xiang' 'Lei Zhang' 'Hailing Zhou' 'Fei Shi'\n 'Weifang Zhu' 'Jing Cai' 'Tao Peng' 'Xinjian Chen']"
] |
null | null | 2407.06637 | null | null | http://arxiv.org/pdf/2407.06637v1 | 2024-07-09T08:05:14Z | 2024-07-09T08:05:14Z | Early Detection of Network Service Degradation: An Intra-Flow Approach | This research presents a novel method for predicting service degradation (SD) in computer networks by leveraging early flow features. Our approach focuses on the observable (O) segments of network flows, particularly analyzing Packet Inter-Arrival Time (PIAT) values and other derived metrics, to infer the behavior of non-observable (NO) segments. Through a comprehensive evaluation, we identify an optimal O/NO split threshold of 10 observed delay samples, balancing prediction accuracy and resource utilization. Evaluating models including Logistic Regression, XGBoost, and Multi-Layer Perceptron, we find XGBoost outperforms others, achieving an F1-score of 0.74, balanced accuracy of 0.84, and AUROC of 0.97. Our findings highlight the effectiveness of incorporating comprehensive early flow features and the potential of our method to offer a practical solution for monitoring network traffic in resource-constrained environments. This approach ensures enhanced user experience and network performance by preemptively addressing potential SD, providing the basis for a robust framework for maintaining high-quality network services. | [
"['Balint Bicski' 'Adrian Pekar']"
] |
null | null | 2407.06645 | null | null | http://arxiv.org/pdf/2407.06645v3 | 2024-07-11T03:06:45Z | 2024-07-09T08:14:29Z | Entropy Law: The Story Behind Data Compression and LLM Performance | Data is the cornerstone of large language models (LLMs), but not all data is useful for model learning. Carefully selected data can better elicit the capabilities of LLMs with much less computational overhead. Most methods concentrate on evaluating the quality of individual samples in data selection, while the combinatorial effects among samples are neglected. Even if each sample is of perfect quality, their combinations may be suboptimal in teaching LLMs due to their intrinsic homogeneity or contradiction. In this paper, we aim to uncover the underlying relationships between LLM performance and data selection. Inspired by the information compression nature of LLMs, we uncover an ``entropy law'' that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively. Through both theoretical deduction and empirical evaluation, we find that model performance is negatively correlated to the compression ratio of training data, which usually yields a lower training loss. Based on the findings of the entropy law, we propose a quite efficient and universal data selection method named textbf{ZIP} for training LLMs, which aim to prioritize data subsets exhibiting a low compression ratio. Based on a multi-stage algorithm that selects diverse data in a greedy manner, we can obtain a good data subset with satisfactory diversity. Extensive experiments have been conducted to validate the entropy law and the superiority of ZIP across different LLM backbones and alignment stages. We also present an interesting application of entropy law that can detect potential performance risks at the beginning of model training. | [
"['Mingjia Yin' 'Chuhan Wu' 'Yufei Wang' 'Hao Wang' 'Wei Guo'\n 'Yasheng Wang' 'Yong Liu' 'Ruiming Tang' 'Defu Lian' 'Enhong Chen']"
] |
null | null | 2407.06646 | null | null | http://arxiv.org/pdf/2407.06646v1 | 2024-07-09T08:17:06Z | 2024-07-09T08:17:06Z | Variational Learning ISTA | Compressed sensing combines the power of convex optimization techniques with a sparsity-inducing prior on the signal space to solve an underdetermined system of equations. For many problems, the sparsifying dictionary is not directly given, nor its existence can be assumed. Besides, the sensing matrix can change across different scenarios. Addressing these issues requires solving a sparse representation learning problem, namely dictionary learning, taking into account the epistemic uncertainty of the learned dictionaries and, finally, jointly learning sparse representations and reconstructions under varying sensing matrix conditions. We address both concerns by proposing a variant of the LISTA architecture. First, we introduce Augmented Dictionary Learning ISTA (A-DLISTA), which incorporates an augmentation module to adapt parameters to the current measurement setup. Then, we propose to learn a distribution over dictionaries via a variational approach, dubbed Variational Learning ISTA (VLISTA). VLISTA exploits A-DLISTA as the likelihood model and approximates a posterior distribution over the dictionaries as part of an unfolded LISTA-based recovery algorithm. As a result, VLISTA provides a probabilistic way to jointly learn the dictionary distribution and the reconstruction algorithm with varying sensing matrices. We provide theoretical and experimental support for our architecture and show that our model learns calibrated uncertainties. | [
"['Fabio Valerio Massoli' 'Christos Louizos' 'Arash Behboodi']"
] |
null | null | 2407.06682 | null | null | http://arxiv.org/pdf/2407.06682v1 | 2024-07-09T08:59:27Z | 2024-07-09T08:59:27Z | A Predictive Model Based on Transformer with Statistical Feature
Embedding in Manufacturing Sensor Dataset | In the manufacturing process, sensor data collected from equipment is crucial for building predictive models to manage processes and improve productivity. However, in the field, it is challenging to gather sufficient data to build robust models. This study proposes a novel predictive model based on the Transformer, utilizing statistical feature embedding and window positional encoding. Statistical features provide an effective representation of sensor data, and the embedding enables the Transformer to learn both time- and sensor-related information. Window positional encoding captures precise time details from the feature embedding. The model's performance is evaluated in two problems: fault detection and virtual metrology, showing superior results compared to baseline models. This improvement is attributed to the efficient use of parameters, which is particularly beneficial for sensor data that often has limited sample sizes. The results support the model's applicability across various manufacturing industries, demonstrating its potential for enhancing process management and yield. | [
"['Gyeong Taek Lee' 'Oh-Ran Kwon']"
] |
null | null | 2407.06683 | null | null | http://arxiv.org/pdf/2407.06683v1 | 2024-07-09T08:59:27Z | 2024-07-09T08:59:27Z | Accelerating Online Mapping and Behavior Prediction via Direct BEV
Feature Attention | Understanding road geometry is a critical component of the autonomous vehicle (AV) stack. While high-definition (HD) maps can readily provide such information, they suffer from high labeling and maintenance costs. Accordingly, many recent works have proposed methods for estimating HD maps online from sensor data. The vast majority of recent approaches encode multi-camera observations into an intermediate representation, e.g., a bird's eye view (BEV) grid, and produce vector map elements via a decoder. While this architecture is performant, it decimates much of the information encoded in the intermediate representation, preventing downstream tasks (e.g., behavior prediction) from leveraging them. In this work, we propose exposing the rich internal features of online map estimation methods and show how they enable more tightly integrating online mapping with trajectory forecasting. In doing so, we find that directly accessing internal BEV features yields up to 73% faster inference speeds and up to 29% more accurate predictions on the real-world nuScenes dataset. | [
"['Xunjiang Gu' 'Guanyu Song' 'Igor Gilitschenski' 'Marco Pavone'\n 'Boris Ivanovic']"
] |
null | null | 2407.06690 | null | null | http://arxiv.org/pdf/2407.06690v1 | 2024-07-09T09:06:44Z | 2024-07-09T09:06:44Z | Hierarchical Average-Reward Linearly-solvable Markov Decision Processes | We introduce a novel approach to hierarchical reinforcement learning for Linearly-solvable Markov Decision Processes (LMDPs) in the infinite-horizon average-reward setting. Unlike previous work, our approach allows learning low-level and high-level tasks simultaneously, without imposing limiting restrictions on the low-level tasks. Our method relies on partitions of the state space that create smaller subtasks that are easier to solve, and the equivalence between such partitions to learn more efficiently. We then exploit the compositionality of low-level tasks to exactly represent the value function of the high-level task. Experiments show that our approach can outperform flat average-reward reinforcement learning by one or several orders of magnitude. | [
"['Guillermo Infante' 'Anders Jonsson' 'Vicenç Gómez']"
] |
null | null | 2407.06697 | null | null | http://arxiv.org/pdf/2407.06697v1 | 2024-07-09T09:14:45Z | 2024-07-09T09:14:45Z | Certified Continual Learning for Neural Network Regression | On the one hand, there has been considerable progress on neural network verification in recent years, which makes certifying neural networks a possibility. On the other hand, neural networks in practice are often re-trained over time to cope with new data distribution or for solving different tasks (a.k.a. continual learning). Once re-trained, the verified correctness of the neural network is likely broken, particularly in the presence of the phenomenon known as catastrophic forgetting. In this work, we propose an approach called certified continual learning which improves existing continual learning methods by preserving, as long as possible, the established correctness properties of a verified network. Our approach is evaluated with multiple neural networks and on two different continual learning methods. The results show that our approach is efficient and the trained models preserve their certified correctness and often maintain high utility. | [
"['Long H. Pham' 'Jun Sun']"
] |
null | null | 2407.06698 | null | null | http://arxiv.org/pdf/2407.06698v1 | 2024-07-09T09:19:01Z | 2024-07-09T09:19:01Z | PSPU: Enhanced Positive and Unlabeled Learning by Leveraging Pseudo
Supervision | Positive and Unlabeled (PU) learning, a binary classification model trained with only positive and unlabeled data, generally suffers from overfitted risk estimation due to inconsistent data distributions. To address this, we introduce a pseudo-supervised PU learning framework (PSPU), in which we train the PU model first, use it to gather confident samples for the pseudo supervision, and then apply these supervision to correct the PU model's weights by leveraging non-PU objectives. We also incorporate an additional consistency loss to mitigate noisy sample effects. Our PSPU outperforms recent PU learning methods significantly on MNIST, CIFAR-10, CIFAR-100 in both balanced and imbalanced settings, and enjoys competitive performance on MVTecAD for industrial anomaly detection. | [
"['Chengjie Wang' 'Chengming Xu' 'Zhenye Gan' 'Jianlong Hu' 'Wenbing Zhu'\n 'Lizhuag Ma']"
] |
null | null | 2407.06703 | null | null | http://arxiv.org/pdf/2407.06703v1 | 2024-07-09T09:31:05Z | 2024-07-09T09:31:05Z | HERMES: Holographic Equivariant neuRal network model for Mutational
Effect and Stability prediction | Predicting the stability and fitness effects of amino acid mutations in proteins is a cornerstone of biological discovery and engineering. Various experimental techniques have been developed to measure mutational effects, providing us with extensive datasets across a diverse range of proteins. By training on these data, traditional computational modeling and more recent machine learning approaches have advanced significantly in predicting mutational effects. Here, we introduce HERMES, a 3D rotationally equivariant structure-based neural network model for mutational effect and stability prediction. Pre-trained to predict amino acid propensity from its surrounding 3D structure, HERMES can be fine-tuned for mutational effects using our open-source code. We present a suite of HERMES models, pre-trained with different strategies, and fine-tuned to predict the stability effect of mutations. Benchmarking against other models shows that HERMES often outperforms or matches their performance in predicting mutational effect on stability, binding, and fitness. HERMES offers versatile tools for evaluating mutational effects and can be fine-tuned for specific predictive objectives. | [
"['Gian Marco Visani' 'Michael N. Pun' 'William Galvin' 'Eric Daniel'\n 'Kevin Borisiak' 'Utheri Wagura' 'Armita Nourmohammad']"
] |
null | null | 2407.06704 | null | null | http://arxiv.org/pdf/2407.06704v1 | 2024-07-09T09:31:15Z | 2024-07-09T09:31:15Z | Self-supervised visual learning from interactions with objects | Self-supervised learning (SSL) has revolutionized visual representation learning, but has not achieved the robustness of human vision. A reason for this could be that SSL does not leverage all the data available to humans during learning. When learning about an object, humans often purposefully turn or move around objects and research suggests that these interactions can substantially enhance their learning. Here we explore whether such object-related actions can boost SSL. For this, we extract the actions performed to change from one ego-centric view of an object to another in four video datasets. We then introduce a new loss function to learn visual and action embeddings by aligning the performed action with the representations of two images extracted from the same clip. This permits the performed actions to structure the latent visual representation. Our experiments show that our method consistently outperforms previous methods on downstream category recognition. In our analysis, we find that the observed improvement is associated with a better viewpoint-wise alignment of different objects from the same category. Overall, our work demonstrates that embodied interactions with objects can improve SSL of object categories. | [
"['Arthur Aubret' 'Céline Teulière' 'Jochen Triesch']"
] |
null | null | 2407.06709 | null | null | http://arxiv.org/abs/2407.06709v1 | 2024-07-09T09:36:37Z | 2024-07-09T09:36:37Z | Top-K Pairwise Ranking: Bridging the Gap Among Ranking-Based Measures
for Multi-Label Classification | Multi-label ranking, which returns multiple top-ranked labels for each instance, has a wide range of applications for visual tasks. Due to its complicated setting, prior arts have proposed various measures to evaluate model performances. However, both theoretical analysis and empirical observations show that a model might perform inconsistently on different measures. To bridge this gap, this paper proposes a novel measure named Top-K Pairwise Ranking (TKPR), and a series of analyses show that TKPR is compatible with existing ranking-based measures. In light of this, we further establish an empirical surrogate risk minimization framework for TKPR. On one hand, the proposed framework enjoys convex surrogate losses with the theoretical support of Fisher consistency. On the other hand, we establish a sharp generalization bound for the proposed framework based on a novel technique named data-dependent contraction. Finally, empirical results on benchmark datasets validate the effectiveness of the proposed framework. | [
"['Zitai Wang' 'Qianqian Xu' 'Zhiyong Yang' 'Peisong Wen' 'Yuan He'\n 'Xiaochun Cao' 'Qingming Huang']"
] |
null | null | 2407.06712 | null | null | http://arxiv.org/pdf/2407.06712v1 | 2024-07-09T09:39:45Z | 2024-07-09T09:39:45Z | MDP Geometry, Normalization and Value Free Solvers | Markov Decision Process (MDP) is a common mathematical model for sequential decision-making problems. In this paper, we present a new geometric interpretation of MDP, which is useful for analyzing the dynamics of main MDP algorithms. Based on this interpretation, we demonstrate that MDPs can be split into equivalence classes with indistinguishable algorithm dynamics. The related normalization procedure allows for the design of a new class of MDP-solving algorithms that find optimal policies without computing policy values. | [
"['Arsenii Mustafin' 'Aleksei Pakharev' 'Alex Olshevsky'\n 'Ioannis Ch. Paschalidis']"
] |
null | null | 2407.06723 | null | null | http://arxiv.org/pdf/2407.06723v1 | 2024-07-09T09:55:04Z | 2024-07-09T09:55:04Z | Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting
Region Captions | Humans describe complex scenes with compositionality, using simple text descriptions enriched with links and relationships. While vision-language research has aimed to develop models with compositional understanding capabilities, this is not reflected yet in existing datasets which, for the most part, still use plain text to describe images. In this work, we propose a new annotation strategy, graph-based captioning (GBC) that describes an image using a labelled graph structure, with nodes of various types. The nodes in GBC are created using, in a first stage, object detection and dense captioning tools nested recursively to uncover and describe entity nodes, further linked together in a second stage by highlighting, using new types of nodes, compositions and relations among entities. Since all GBC nodes hold plain text descriptions, GBC retains the flexibility found in natural language, but can also encode hierarchical information in its edges. We demonstrate that GBC can be produced automatically, using off-the-shelf multimodal LLMs and open-vocabulary detection models, by building a new dataset, GBC10M, gathering GBC annotations for about 10M images of the CC12M dataset. We use GBC10M to showcase the wealth of node captions uncovered by GBC, as measured with CLIP training. We show that using GBC nodes' annotations -- notably those stored in composition and relation nodes -- results in significant performance boost on downstream models when compared to other dataset formats. To further explore the opportunities provided by GBC, we also propose a new attention mechanism that can leverage the entire GBC graph, with encouraging experimental results that show the extra benefits of incorporating the graph structure. Our datasets are released at url{https://huggingface.co/graph-based-captions}. | [
"['Yu-Guan Hsieh' 'Cheng-Yu Hsieh' 'Shih-Ying Yeh' 'Louis Béthune'\n 'Hadi Pour Ansari' 'Pavan Kumar Anasosalu Vasu' 'Chun-Liang Li'\n 'Ranjay Krishna' 'Oncel Tuzel' 'Marco Cuturi']"
] |
null | null | 2407.06740 | null | null | http://arxiv.org/pdf/2407.06740v1 | 2024-07-09T10:40:31Z | 2024-07-09T10:40:31Z | Positive-Unlabelled Learning for Improving Image-based Recommender
System Explainability | Among the existing approaches for visual-based Recommender System (RS) explainability, utilizing user-uploaded item images as efficient, trustable explanations is a promising option. However, current models following this paradigm assume that, for any user, all images uploaded by other users can be considered negative training examples (i.e. bad explanatory images), an inadvertedly naive labelling assumption that contradicts the rationale of the approach. This work proposes a new explainer training pipeline by leveraging Positive-Unlabelled (PU) Learning techniques to train image-based explainer with refined subsets of reliable negative examples for each user selected through a novel user-personalized, two-step, similarity-based PU Learning algorithm. Computational experiments show this PU-based approach outperforms the state-of-the-art non-PU method in six popular real-world datasets, proving that an improvement of visual-based RS explainability can be achieved by maximizing training data quality rather than increasing model complexity. | [
"['Álvaro Fernández-Campa-González' 'Jorge Paz-Ruza'\n 'Amparo Alonso-Betanzos' 'Bertha Guijarro-Berdiñas']"
] |
null | null | 2407.06756 | null | null | http://arxiv.org/pdf/2407.06756v1 | 2024-07-09T11:07:41Z | 2024-07-09T11:07:41Z | Frequency and Generalisation of Periodic Activation Functions in
Reinforcement Learning | Periodic activation functions, often referred to as learned Fourier features have been widely demonstrated to improve sample efficiency and stability in a variety of deep RL algorithms. Potentially incompatible hypotheses have been made about the source of these improvements. One is that periodic activations learn low frequency representations and as a result avoid overfitting to bootstrapped targets. Another is that periodic activations learn high frequency representations that are more expressive, allowing networks to quickly fit complex value functions. We analyse these claims empirically, finding that periodic representations consistently converge to high frequencies regardless of their initialisation frequency. We also find that while periodic activation functions improve sample efficiency, they exhibit worse generalization on states with added observation noise -- especially when compared to otherwise equivalent networks with ReLU activation functions. Finally, we show that weight decay regularization is able to partially offset the overfitting of periodic activation functions, delivering value functions that learn quickly while also generalizing. | [
"['Augustine N. Mavor-Parker' 'Matthew J. Sargent' 'Caswell Barry'\n 'Lewis Griffin' 'Clare Lyle']"
] |
null | null | 2407.06765 | null | null | http://arxiv.org/pdf/2407.06765v1 | 2024-07-09T11:20:01Z | 2024-07-09T11:20:01Z | A Generalization Bound for Nearly-Linear Networks | We consider nonlinear networks as perturbations of linear ones. Based on this approach, we present novel generalization bounds that become non-vacuous for networks that are close to being linear. The main advantage over the previous works which propose non-vacuous generalization bounds is that our bounds are a-priori: performing the actual training is not required for evaluating the bounds. To the best of our knowledge, they are the first non-vacuous generalization bounds for neural nets possessing this property. | [
"['Eugene Golikov']"
] |
null | null | 2407.06771 | null | null | http://arxiv.org/pdf/2407.06771v1 | 2024-07-09T11:40:46Z | 2024-07-09T11:40:46Z | Temporal Convolution Derived Multi-Layered Reservoir Computing | The prediction of time series is a challenging task relevant in such diverse applications as analyzing financial data, forecasting flow dynamics or understanding biological processes. Especially chaotic time series that depend on a long history pose an exceptionally difficult problem. While machine learning has shown to be a promising approach for predicting such time series, it either demands long training time and much training data when using deep recurrent neural networks. Alternative, when using a reservoir computing approach it comes with high uncertainty and typically a high number of random initializations and extensive hyper-parameter tuning when using a reservoir computing approach. In this paper, we focus on the reservoir computing approach and propose a new mapping of input data into the reservoir's state space. Furthermore, we incorporate this method in two novel network architectures increasing parallelizability, depth and predictive capabilities of the neural network while reducing the dependence on randomness. For the evaluation, we approximate a set of time series from the Mackey-Glass equation, inhabiting non-chaotic as well as chaotic behavior and compare our approaches in regard to their predictive capabilities to echo state networks and gated recurrent units. For the chaotic time series, we observe an error reduction of up to $85.45%$ and up to $87.90%$ in contrast to echo state networks and gated recurrent units respectively. Furthermore, we also observe tremendous improvements for non-chaotic time series of up to $99.99%$ in contrast to existing approaches. | [
"['Johannes Viehweg' 'Dominik Walther' 'Prof. Dr. -Ing. Patrick Mäder']"
] |
null | null | 2407.06783 | null | null | http://arxiv.org/pdf/2407.06783v1 | 2024-07-09T11:54:34Z | 2024-07-09T11:54:34Z | Convergence rates for Poisson learning to a Poisson equation with
measure data | In this paper we prove discrete to continuum convergence rates for Poisson Learning, a graph-based semi-supervised learning algorithm that is based on solving the graph Poisson equation with a source term consisting of a linear combination of Dirac deltas located at labeled points and carrying label information. The corresponding continuum equation is a Poisson equation with measure data in a Euclidean domain $Omega subset mathbb{R}^d$. The singular nature of these equations is challenging and requires an approach with several distinct parts: (1) We prove quantitative error estimates when convolving the measure data of a Poisson equation with (approximately) radial function supported on balls. (2) We use quantitative variational techniques to prove discrete to continuum convergence rates on random geometric graphs with bandwidth $varepsilon>0$ for bounded source terms. (3) We show how to regularize the graph Poisson equation via mollification with the graph heat kernel, and we study fine asymptotics of the heat kernel on random geometric graphs. Combining these three pillars we obtain $L^1$ convergence rates that scale, up to logarithmic factors, like $O(varepsilon^{frac{1}{d+2}})$ for general data distributions, and $O(varepsilon^{frac{2-sigma}{d+4}})$ for uniformly distributed data, where $sigma>0$. These rates are valid with high probability if $varepsilonggleft({log n}/{n}right)^q$ where $n$ denotes the number of vertices of the graph and $q approx frac{1}{3d}$. | [
"['Leon Bungert' 'Jeff Calder' 'Max Mihailescu' 'Kodjo Houssou'\n 'Amber Yuan']"
] |
null | null | 2407.06785 | null | null | http://arxiv.org/pdf/2407.06785v1 | 2024-07-09T11:54:49Z | 2024-07-09T11:54:49Z | Towards physics-informed neural networks for landslide prediction | For decades, solutions to regional scale landslide prediction have mostly relied on data-driven models, by definition, disconnected from the physics of the failure mechanism. The success and spread of such tools came from the ability to exploit proxy variables rather than explicit geotechnical ones, as the latter are prohibitive to acquire over broad landscapes. Our work implements a Physics Informed Neural Network (PINN) approach, thereby adding to a standard data-driven architecture, an intermediate constraint to solve for the permanent deformation typical of Newmark slope stability methods. This translates into a neural network tasked with explicitly retrieving geotechnical parameters from common proxy variables and then minimize a loss function with respect to the available coseismic landside inventory. The results are very promising, because our model not only produces excellent predictive performance in the form of standard susceptibility output, but in the process, also generates maps of the expected geotechnical properties at a regional scale. Such architecture is therefore framed to tackle coseismic landslide prediction, something that, if confirmed in other studies, could open up towards PINN-based near-real-time predictions. | [
"['Ashok Dahal' 'Luigi Lombardo']"
] |
null | null | 2407.06797 | null | null | http://arxiv.org/pdf/2407.06797v1 | 2024-07-09T12:09:21Z | 2024-07-09T12:09:21Z | ED-VAE: Entropy Decomposition of ELBO in Variational Autoencoders | Traditional Variational Autoencoders (VAEs) are constrained by the limitations of the Evidence Lower Bound (ELBO) formulation, particularly when utilizing simplistic, non-analytic, or unknown prior distributions. These limitations inhibit the VAE's ability to generate high-quality samples and provide clear, interpretable latent representations. This work introduces the Entropy Decomposed Variational Autoencoder (ED-VAE), a novel re-formulation of the ELBO that explicitly includes entropy and cross-entropy components. This reformulation significantly enhances model flexibility, allowing for the integration of complex and non-standard priors. By providing more detailed control over the encoding and regularization of latent spaces, ED-VAE not only improves interpretability but also effectively captures the complex interactions between latent variables and observed data, thus leading to better generative performance. | [
"['Fotios Lygerakis' 'Elmar Rueckert']"
] |
null | null | 2407.06800 | null | null | http://arxiv.org/pdf/2407.06800v1 | 2024-07-09T12:14:48Z | 2024-07-09T12:14:48Z | Learn and Don't Forget: Adding a New Language to ASR Foundation Models | Foundation ASR models often support many languages, e.g. 100 languages in Whisper. However, there has been limited work on integrating an additional, typically low-resource, language, while maintaining performance on the original language set. Fine-tuning, while simple, may degrade the accuracy of the original set. We compare three approaches that exploit adaptation parameters: soft language code tuning, train only the language code; soft prompt tuning, train prepended tokens; and LoRA where a small set of additional parameters are optimised. Elastic Weight Consolidation (EWC) offers an alternative compromise with the potential to maintain performance in specific target languages. Results show that direct fine-tuning yields the best performance for the new language but degrades existing language capabilities. EWC can address this issue for specific languages. If only adaptation parameters are used, the language capabilities are maintained but at the cost of performance in the new language. | [
"['Mengjie Qian' 'Siyuan Tang' 'Rao Ma' 'Kate M. Knill' 'Mark J. F. Gales']"
] |
null | null | 2407.06849 | null | null | http://arxiv.org/pdf/2407.06849v1 | 2024-07-09T13:32:33Z | 2024-07-09T13:32:33Z | TeVAE: A Variational Autoencoder Approach for Discrete Online Anomaly
Detection in Variable-state Multivariate Time-series Data | As attention to recorded data grows in the realm of automotive testing and manual evaluation reaches its limits, there is a growing need for automatic online anomaly detection. This real-world data is complex in many ways and requires the modelling of testee behaviour. To address this, we propose a temporal variational autoencoder (TeVAE) that can detect anomalies with minimal false positives when trained on unlabelled data. Our approach also avoids the bypass phenomenon and introduces a new method to remap individual windows to a continuous time series. Furthermore, we propose metrics to evaluate the detection delay and root-cause capability of our approach and present results from experiments on a real-world industrial data set. When properly configured, TeVAE flags anomalies only 6% of the time wrongly and detects 65% of anomalies present. It also has the potential to perform well with a smaller training and validation subset but requires a more sophisticated threshold estimation method. | [
"['Lucas Correia' 'Jan-Christoph Goos' 'Philipp Klein' 'Thomas Bäck'\n 'Anna V. Kononova']"
] |
null | null | 2407.06855 | null | null | http://arxiv.org/pdf/2407.06855v1 | 2024-07-09T13:42:14Z | 2024-07-09T13:42:14Z | Performance Evaluation of Knowledge Graph Embedding Approaches under
Non-adversarial Attacks | Knowledge Graph Embedding (KGE) transforms a discrete Knowledge Graph (KG) into a continuous vector space facilitating its use in various AI-driven applications like Semantic Search, Question Answering, or Recommenders. While KGE approaches are effective in these applications, most existing approaches assume that all information in the given KG is correct. This enables attackers to influence the output of these approaches, e.g., by perturbing the input. Consequently, the robustness of such KGE approaches has to be addressed. Recent work focused on adversarial attacks. However, non-adversarial attacks on all attack surfaces of these approaches have not been thoroughly examined. We close this gap by evaluating the impact of non-adversarial attacks on the performance of 5 state-of-the-art KGE algorithms on 5 datasets with respect to attacks on 3 attack surfaces-graph, parameter, and label perturbation. Our evaluation results suggest that label perturbation has a strong effect on the KGE performance, followed by parameter perturbation with a moderate and graph with a low effect. | [
"['Sourabh Kapoor' 'Arnab Sharma' 'Michael Röder' 'Caglar Demir'\n 'Axel-Cyrille Ngonga Ngomo']"
] |
null | null | 2407.06862 | null | null | http://arxiv.org/pdf/2407.06862v1 | 2024-07-09T13:50:32Z | 2024-07-09T13:50:32Z | Trust and Resilience in Federated Learning Through Smart Contracts
Enabled Decentralized Systems | In this paper, we present a study of a Federated Learning (FL) system, based on the use of decentralized architectures to ensure trust and increase reliability. The system is based on the idea that the FL collaborators upload the (ciphered) model parameters on the Inter-Planetary File System (IPFS) and interact with a dedicated smart contract to track their behavior. Thank to this smart contract, the phases of parameter updates are managed efficiently, thereby strengthening data security. We have carried out an experimental study that exploits two different methods of weight aggregation, i.e., a classic averaging scheme and a federated proximal aggregation. The results confirm the feasibility of the proposal. | [
"['Lorenzo Cassano' \"Jacopo D'Abramo\" 'Siraj Munir' 'Stefano Ferretti']"
] |
null | null | 2407.06868 | null | null | http://arxiv.org/pdf/2407.06868v1 | 2024-07-09T13:56:59Z | 2024-07-09T13:56:59Z | Energy Efficient Fair STAR-RIS for Mobile Users | In this work, we propose a method to improve the energy efficiency and fairness of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RIS) for mobile users, ensuring reduced power consumption while maintaining reliable communication. To achieve this, we introduce a new parameter known as the subsurface assignment variable, which determines the number of STAR-RIS elements allocated to each user. We then formulate a novel optimization problem by concurrently optimizing the phase shifts of the STAR-RIS and subsurface assignment variable. We leverage the deep reinforcement learning (DRL) technique to address this optimization problem. The DRL model predicts the phase shifts of the STAR-RIS and efficiently allocates elements of STAR-RIS to the users. Additionally, we incorporate a penalty term in the DRL model to facilitate intelligent deactivation of STAR-RIS elements when not in use to enhance energy efficiency. Through extensive experiments, we show that the proposed method can achieve fairly high and nearly equal data rates for all users in both the transmission and reflection spaces in an energy-efficient manner. | [
"['Ashok S. Kumar' 'Nancy Nayak' 'Sheetal Kalyani' 'Himal A. Suraweera']"
] |
null | null | 2407.06886 | null | null | http://arxiv.org/pdf/2407.06886v2 | 2024-07-12T01:48:00Z | 2024-07-09T14:14:47Z | Aligning Cyber Space with Physical World: A Comprehensive Survey on
Embodied AI | Embodied Artificial Intelligence (Embodied AI) is crucial for achieving Artificial General Intelligence (AGI) and serves as a foundation for various applications that bridge cyberspace and the physical world. Recently, the emergence of Multi-modal Large Models (MLMs) and World Models (WMs) have attracted significant attention due to their remarkable perception, interaction, and reasoning capabilities, making them a promising architecture for the brain of embodied agents. However, there is no comprehensive survey for Embodied AI in the era of MLMs. In this survey, we give a comprehensive exploration of the latest advancements in Embodied AI. Our analysis firstly navigates through the forefront of representative works of embodied robots and simulators, to fully understand the research focuses and their limitations. Then, we analyze four main research targets: 1) embodied perception, 2) embodied interaction, 3) embodied agent, and 4) sim-to-real adaptation, covering the state-of-the-art methods, essential paradigms, and comprehensive datasets. Additionally, we explore the complexities of MLMs in virtual and real embodied agents, highlighting their significance in facilitating interactions in dynamic digital and physical environments. Finally, we summarize the challenges and limitations of embodied AI and discuss their potential future directions. We hope this survey will serve as a foundational reference for the research community and inspire continued innovation. The associated project can be found at https://github.com/HCPLab-SYSU/Embodied_AI_Paper_List. | [
"['Yang Liu' 'Weixing Chen' 'Yongjie Bai' 'Jingzhou Luo' 'Xinshuai Song'\n 'Kaixuan Jiang' 'Zhida Li' 'Ganlong Zhao' 'Junyi Lin' 'Guanbin Li'\n 'Wen Gao' 'Liang Lin']"
] |
null | null | 2407.06888 | null | null | http://arxiv.org/pdf/2407.06888v1 | 2024-07-09T14:18:30Z | 2024-07-09T14:18:30Z | A Complete Set of Quadratic Constraints For Repeated ReLU | This paper derives a complete set of quadratic constraints (QCs) for the repeated ReLU. The complete set of QCs is described by a collection of $2^{n_v}$ matrix copositivity conditions where $n_v$ is the dimension of the repeated ReLU. We also show that only two functions satisfy all QCs in our complete set: the repeated ReLU and a repeated "flipped" ReLU. Thus our complete set of QCs bounds the repeated ReLU as tight as possible up to the sign invariance inherent in quadratic forms. We derive a similar complete set of incremental QCs for repeated ReLU, which can potentially lead to less conservative Lipschitz bounds for ReLU networks than the standard LipSDP approach. Finally, we illustrate the use of the complete set of QCs to assess stability and performance for recurrent neural networks with ReLU activation functions. The stability/performance condition combines Lyapunov/dissipativity theory with the QCs for repeated ReLU. A numerical implementation is given and demonstrated via a simple example. | [
"['Sahel Vahedi Noori' 'Bin Hu' 'Geir Dullerud' 'Peter Seiler']"
] |
null | null | 2407.06902 | null | null | http://arxiv.org/pdf/2407.06902v1 | 2024-07-09T14:34:40Z | 2024-07-09T14:34:40Z | Learning From Crowdsourced Noisy Labels: A Signal Processing Perspective | One of the primary catalysts fueling advances in artificial intelligence (AI) and machine learning (ML) is the availability of massive, curated datasets. A commonly used technique to curate such massive datasets is crowdsourcing, where data are dispatched to multiple annotators. The annotator-produced labels are then fused to serve downstream learning and inference tasks. This annotation process often creates noisy labels due to various reasons, such as the limited expertise, or unreliability of annotators, among others. Therefore, a core objective in crowdsourcing is to develop methods that effectively mitigate the negative impact of such label noise on learning tasks. This feature article introduces advances in learning from noisy crowdsourced labels. The focus is on key crowdsourcing models and their methodological treatments, from classical statistical models to recent deep learning-based approaches, emphasizing analytical insights and algorithmic developments. In particular, this article reviews the connections between signal processing (SP) theory and methods, such as identifiability of tensor and nonnegative matrix factorization, and novel, principled solutions of longstanding challenges in crowdsourcing -- showing how SP perspectives drive the advancements of this field. Furthermore, this article touches upon emerging topics that are critical for developing cutting-edge AI/ML systems, such as crowdsourcing in reinforcement learning with human feedback (RLHF) and direct preference optimization (DPO) that are key techniques for fine-tuning large language models (LLMs). | [
"['Shahana Ibrahim' 'Panagiotis A. Traganitis' 'Xiao Fu'\n 'Georgios B. Giannakis']"
] |
null | null | 2407.06909 | null | null | http://arxiv.org/pdf/2407.06909v1 | 2024-07-09T14:45:47Z | 2024-07-09T14:45:47Z | Intercepting Unauthorized Aerial Robots in Controlled Airspace Using
Reinforcement Learning | The proliferation of unmanned aerial vehicles (UAVs) in controlled airspace presents significant risks, including potential collisions, disruptions to air traffic, and security threats. Ensuring the safe and efficient operation of airspace, particularly in urban environments and near critical infrastructure, necessitates effective methods to intercept unauthorized or non-cooperative UAVs. This work addresses the critical need for robust, adaptive systems capable of managing such threats through the use of Reinforcement Learning (RL). We present a novel approach utilizing RL to train fixed-wing UAV pursuer agents for intercepting dynamic evader targets. Our methodology explores both model-based and model-free RL algorithms, specifically DreamerV3, Truncated Quantile Critics (TQC), and Soft Actor-Critic (SAC). The training and evaluation of these algorithms were conducted under diverse scenarios, including unseen evasion strategies and environmental perturbations. Our approach leverages high-fidelity flight dynamics simulations to create realistic training environments. This research underscores the importance of developing intelligent, adaptive control systems for UAV interception, significantly contributing to the advancement of secure and efficient airspace management. It demonstrates the potential of RL to train systems capable of autonomously achieving these critical tasks. | [
"['Francisco Giral' 'Ignacio Gómez' 'Soledad Le Clainche']"
] |
null | null | 2407.06910 | null | null | http://arxiv.org/pdf/2407.06910v1 | 2024-07-09T14:46:09Z | 2024-07-09T14:46:09Z | Fine-grained large-scale content recommendations for MSX sellers | One of the most critical tasks of Microsoft sellers is to meticulously track and nurture potential business opportunities through proactive engagement and tailored solutions. Recommender systems play a central role to help sellers achieve their goals. In this paper, we present a content recommendation model which surfaces various types of content (technical documentation, comparison with competitor products, customer success stories etc.) that sellers can share with their customers or use for their own self-learning. The model operates at the opportunity level which is the lowest possible granularity and the most relevant one for sellers. It is based on semantic matching between metadata from the contents and carefully selected attributes of the opportunities. Considering the volume of seller-managed opportunities in organizations such as Microsoft, we show how to perform efficient semantic matching over a very large number of opportunity-content combinations. The main challenge is to ensure that the top-5 relevant contents for each opportunity are recommended out of a total of $approx 40,000$ published contents. We achieve this target through an extensive comparison of different model architectures and feature selection. Finally, we further examine the quality of the recommendations in a quantitative manner using a combination of human domain experts as well as by using the recently proposed "LLM as a judge" framework. | [
"['Manpreet Singh' 'Ravdeep Pasricha' 'Ravi Prasad Kondapalli' 'Kiran R'\n 'Nitish Singh' 'Akshita Agarwalla' 'Manoj R' 'Manish Prabhakar'\n 'Laurent Boué']"
] |
null | null | 2407.06935 | null | null | http://arxiv.org/pdf/2407.06935v1 | 2024-07-09T15:10:59Z | 2024-07-09T15:10:59Z | Bayesian Federated Learning with Hamiltonian Monte Carlo: Algorithm and
Theory | This work introduces a novel and efficient Bayesian federated learning algorithm, namely, the Federated Averaging stochastic Hamiltonian Monte Carlo (FA-HMC), for parameter estimation and uncertainty quantification. We establish rigorous convergence guarantees of FA-HMC on non-iid distributed data sets, under the strong convexity and Hessian smoothness assumptions. Our analysis investigates the effects of parameter space dimension, noise on gradients and momentum, and the frequency of communication (between the central node and local nodes) on the convergence and communication costs of FA-HMC. Beyond that, we establish the tightness of our analysis by showing that the convergence rate cannot be improved even for continuous FA-HMC process. Moreover, extensive empirical studies demonstrate that FA-HMC outperforms the existing Federated Averaging-Langevin Monte Carlo (FA-LD) algorithm. | [
"['Jiajun Liang' 'Qian Zhang' 'Wei Deng' 'Qifan Song' 'Guang Lin']"
] |
null | null | 2407.06946 | null | null | http://arxiv.org/pdf/2407.06946v1 | 2024-07-09T15:23:28Z | 2024-07-09T15:23:28Z | Self-Recognition in Language Models | A rapidly growing number of applications rely on a small set of closed-source language models (LMs). This dependency might introduce novel security risks if LMs develop self-recognition capabilities. Inspired by human identity verification methods, we propose a novel approach for assessing self-recognition in LMs using model-generated "security questions". Our test can be externally administered to keep track of frontier models as it does not require access to internal model parameters or output probabilities. We use our test to examine self-recognition in ten of the most capable open- and closed-source LMs currently publicly available. Our extensive experiments found no empirical evidence of general or consistent self-recognition in any examined LM. Instead, our results suggest that given a set of alternatives, LMs seek to pick the "best" answer, regardless of its origin. Moreover, we find indications that preferences about which models produce the best answers are consistent across LMs. We additionally uncover novel insights on position bias considerations for LMs in multiple-choice settings. | [
"['Tim R. Davidson' 'Viacheslav Surkov' 'Veniamin Veselovsky'\n 'Giuseppe Russo' 'Robert West' 'Caglar Gulcehre']"
] |
null | null | 2407.06979 | null | null | http://arxiv.org/pdf/2407.06979v1 | 2024-07-09T15:54:06Z | 2024-07-09T15:54:06Z | Can virtual staining for high-throughput screening generalize? | The large volume and variety of imaging data from high-throughput screening (HTS) in the pharmaceutical industry present an excellent resource for training virtual staining models. However, the potential of models trained under one set of experimental conditions to generalize to other conditions remains underexplored. This study systematically investigates whether data from three cell types (lung, ovarian, and breast) and two phenotypes (toxic and non-toxic conditions) commonly found in HTS can effectively train virtual staining models to generalize across three typical HTS distribution shifts: unseen phenotypes, unseen cell types, and the combination of both. Utilizing a dataset of 772,416 paired bright-field, cytoplasm, nuclei, and DNA-damage stain images, we evaluate the generalization capabilities of models across pixel-based, instance-wise, and biological-feature-based levels. Our findings indicate that training virtual nuclei and cytoplasm models on non-toxic condition samples not only generalizes to toxic condition samples but leads to improved performance across all evaluation levels compared to training on toxic condition samples. Generalization to unseen cell types shows variability depending on the cell type; models trained on ovarian or lung cell samples often perform well under other conditions, while those trained on breast cell samples consistently show poor generalization. Generalization to unseen cell types and phenotypes shows good generalization across all levels of evaluation compared to addressing unseen cell types alone. This study represents the first large-scale, data-centric analysis of the generalization capability of virtual staining models trained on diverse HTS datasets, providing valuable strategies for experimental training data generation. | [
"['Samuel Tonks' 'Cuong Nguyer' 'Steve Hood' 'Ryan Musso' 'Ceridwen Hopely'\n 'Steve Titus' 'Minh Doan' 'Iain Styles' 'Alexander Krull']"
] |
null | null | 2407.06992 | null | null | http://arxiv.org/pdf/2407.06992v1 | 2024-07-09T16:07:01Z | 2024-07-09T16:07:01Z | Robust Neural Information Retrieval: An Adversarial and
Out-of-distribution Perspective | Recent advances in neural information retrieval (IR) models have significantly enhanced their effectiveness over various IR tasks. The robustness of these models, essential for ensuring their reliability in practice, has also garnered significant attention. With a wide array of research on robust IR being proposed, we believe it is the opportune moment to consolidate the current status, glean insights from existing methodologies, and lay the groundwork for future development. We view the robustness of IR to be a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios and performance variance. With a focus on adversarial and OOD robustness, we dissect robustness solutions for dense retrieval models (DRMs) and neural ranking models (NRMs), respectively, recognizing them as pivotal components of the neural IR pipeline. We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models. To the best of our knowledge, this is the first comprehensive survey on the robustness of neural IR models, and we will also be giving our first tutorial presentation at SIGIR 2024 url{https://sigir2024-robust-information-retrieval.github.io}. Along with the organization of existing work, we introduce a Benchmark for robust IR (BestIR), a heterogeneous evaluation benchmark for robust neural information retrieval, which is publicly available at url{https://github.com/Davion-Liu/BestIR}. We hope that this study provides useful clues for future research on the robustness of IR models and helps to develop trustworthy search engines url{https://github.com/Davion-Liu/Awesome-Robustness-in-Information-Retrieval}. | [
"['Yu-An Liu' 'Ruqing Zhang' 'Jiafeng Guo' 'Maarten de Rijke' 'Yixing Fan'\n 'Xueqi Cheng']"
] |
null | null | 2407.06998 | null | null | http://arxiv.org/pdf/2407.06998v1 | 2024-07-09T16:12:44Z | 2024-07-09T16:12:44Z | Changepoint Detection in Highly-Attributed Dynamic Graphs | Detecting anomalous behavior in dynamic networks remains a constant challenge. This problem is further exacerbated when the underlying topology of these networks is affected by individual highly-dimensional node attributes. We address this issue by tracking a network's modularity as a proxy of its community structure. We leverage Graph Neural Networks (GNNs) to estimate each snapshot's modularity. GNNs can account for both network structure and high-dimensional node attributes, providing a comprehensive approach for estimating network statistics. Our method is validated through simulations that demonstrate its ability to detect changes in highly-attributed networks by analyzing shifts in modularity. Moreover, we find our method is able to detect a real-world event within the #Iran Twitter reply network, where each node has high-dimensional textual attributes. | [
"['Emiliano Penaloza' 'Nathaniel Stevens']"
] |
null | null | 2407.07000 | null | null | http://arxiv.org/pdf/2407.07000v1 | 2024-07-09T16:13:26Z | 2024-07-09T16:13:26Z | Metron: Holistic Performance Evaluation Framework for LLM Inference
Systems | Serving large language models (LLMs) in production can incur substantial costs, which has prompted recent advances in inference system optimizations. Today, these systems are evaluated against conventional latency and throughput metrics (eg. TTFT, TBT, Normalised Latency and TPOT). However, these metrics fail to fully capture the nuances of LLM inference, leading to an incomplete assessment of user-facing performance crucial for real-time applications such as chat and translation. In this paper, we first identify the pitfalls of current performance metrics in evaluating LLM inference systems. We then propose Metron, a comprehensive performance evaluation framework that includes fluidity-index -- a novel metric designed to reflect the intricacies of the LLM inference process and its impact on real-time user experience. Finally, we evaluate various existing open-source platforms and model-as-a-service offerings using Metron, discussing their strengths and weaknesses. Metron is available at https://github.com/project-metron/metron. | [
"['Amey Agrawal' 'Anmol Agarwal' 'Nitin Kedia' 'Jayashree Mohan'\n 'Souvik Kundu' 'Nipun Kwatra' 'Ramachandran Ramjee' 'Alexey Tumanov']"
] |
null | null | 2407.07004 | null | null | http://arxiv.org/pdf/2407.07004v1 | 2024-07-09T16:17:16Z | 2024-07-09T16:17:16Z | Empirical analysis of Biding Precedent efficiency in the Brazilian
Supreme Court via Similar Case Retrieval | Binding precedents (S'umulas Vinculantes) constitute a juridical instrument unique to the Brazilian legal system and whose objectives include the protection of the Federal Supreme Court against repetitive demands. Studies of the effectiveness of these instruments in decreasing the Court's exposure to similar cases, however, indicate that they tend to fail in such a direction, with some of the binding precedents seemingly creating new demands. We empirically assess the legal impact of five binding precedents, 11, 14, 17, 26 and 37, at the highest court level through their effects on the legal subjects they address. This analysis is only possible through the comparison of the Court's ruling about the precedents' themes before they are created, which means that these decisions should be detected through techniques of Similar Case Retrieval. The contributions of this article are therefore twofold: on the mathematical side, we compare the uses of different methods of Natural Language Processing -- TF-IDF, LSTM, BERT, and regex -- for Similar Case Retrieval, whereas on the legal side, we contrast the inefficiency of these binding precedents with a set of hypotheses that may justify their repeated usage. We observe that the deep learning models performed significantly worse in the specific Similar Case Retrieval task and that the reasons for binding precedents to fail in responding to repetitive demand are heterogeneous and case-dependent, making it impossible to single out a specific cause. | [
"['Raphaël Tinarrage' 'Henrique Ennes' 'Lucas E. Resck' 'Lucas T. Gomes'\n 'Jean R. Ponciano' 'Jorge Poco']"
] |
null | null | 2407.07018 | null | null | http://arxiv.org/pdf/2407.07018v1 | 2024-07-09T16:38:48Z | 2024-07-09T16:38:48Z | End-To-End Causal Effect Estimation from Unstructured Natural Language
Data | Knowing the effect of an intervention is critical for human decision-making, but current approaches for causal effect estimation rely on manual data collection and structuring, regardless of the causal assumptions. This increases both the cost and time-to-completion for studies. We show how large, diverse observational text data can be mined with large language models (LLMs) to produce inexpensive causal effect estimates under appropriate causal assumptions. We introduce NATURAL, a novel family of causal effect estimators built with LLMs that operate over datasets of unstructured text. Our estimators use LLM conditional distributions (over variables of interest, given the text data) to assist in the computation of classical estimators of causal effect. We overcome a number of technical challenges to realize this idea, such as automating data curation and using LLMs to impute missing information. We prepare six (two synthetic and four real) observational datasets, paired with corresponding ground truth in the form of randomized trials, which we used to systematically evaluate each step of our pipeline. NATURAL estimators demonstrate remarkable performance, yielding causal effect estimates that fall within 3 percentage points of their ground truth counterparts, including on real-world Phase 3/4 clinical trials. Our results suggest that unstructured text data is a rich source of causal effect information, and NATURAL is a first step towards an automated pipeline to tap this resource. | [
"['Nikita Dhawan' 'Leonardo Cotta' 'Karen Ullrich' 'Rahul G. Krishnan'\n 'Chris J. Maddison']"
] |
null | null | 2407.07054 | null | null | http://arxiv.org/pdf/2407.07054v1 | 2024-07-09T17:20:49Z | 2024-07-09T17:20:49Z | A Differentially Private Blockchain-Based Approach for Vertical
Federated Learning | We present the Differentially Private Blockchain-Based Vertical Federal Learning (DP-BBVFL) algorithm that provides verifiability and privacy guarantees for decentralized applications. DP-BBVFL uses a smart contract to aggregate the feature representations, i.e., the embeddings, from clients transparently. We apply local differential privacy to provide privacy for embeddings stored on a blockchain, hence protecting the original data. We provide the first prototype application of differential privacy with blockchain for vertical federated learning. Our experiments with medical data show that DP-BBVFL achieves high accuracy with a tradeoff in training time due to on-chain aggregation. This innovative fusion of differential privacy and blockchain technology in DP-BBVFL could herald a new era of collaborative and trustworthy machine learning applications across several decentralized application domains. | [
"['Linh Tran' 'Sanjay Chari' 'Md. Saikat Islam Khan' 'Aaron Zachariah'\n 'Stacy Patterson' 'Oshani Seneviratne']"
] |
null | null | 2407.07055 | null | null | http://arxiv.org/pdf/2407.07055v1 | 2024-07-09T17:21:49Z | 2024-07-09T17:21:49Z | Multicell-Fold: geometric learning in folding multicellular life | During developmental processes such as embryogenesis, how a group of cells fold into specific structures, is a central question in biology that defines how living organisms form. Establishing tissue-level morphology critically relies on how every single cell decides to position itself relative to its neighboring cells. Despite its importance, it remains a major challenge to understand and predict the behavior of every cell within the living tissue over time during such intricate processes. To tackle this question, we propose a geometric deep learning model that can predict multicellular folding and embryogenesis, accurately capturing the highly convoluted spatial interactions among cells. We demonstrate that multicellular data can be represented with both granular and foam-like physical pictures through a unified graph data structure, considering both cellular interactions and cell junction networks. We successfully use our model to achieve two important tasks, interpretable 4-D morphological sequence alignment, and predicting local cell rearrangements before they occur at single-cell resolution. Furthermore, using an activation map and ablation studies, we demonstrate that cell geometries and cell junction networks together regulate local cell rearrangement which is critical for embryo morphogenesis. This approach provides a novel paradigm to study morphogenesis, highlighting a unified data structure and harnessing the power of geometric deep learning to accurately model the mechanisms and behaviors of cells during development. It offers a pathway toward creating a unified dynamic morphological atlas for a variety of developmental processes such as embryogenesis. | [
"['Haiqian Yang' 'Anh Q. Nguyen' 'Dapeng Bi' 'Markus J. Buehler' 'Ming Guo']"
] |
null | null | 2407.07059 | null | null | http://arxiv.org/pdf/2407.07059v1 | 2024-07-09T17:31:47Z | 2024-07-09T17:31:47Z | Differentiable Optimization of Similarity Scores Between Models and
Brains | What metrics should guide the development of more realistic models of the brain? One proposal is to quantify the similarity between models and brains using methods such as linear regression, Centered Kernel Alignment (CKA), and angular Procrustes distance. To better understand the limitations of these similarity measures we analyze neural activity recorded in five experiments on nonhuman primates, and optimize synthetic datasets to become more similar to these neural recordings. How similar can these synthetic datasets be to neural activity while failing to encode task relevant variables? We find that some measures like linear regression and CKA, differ from angular Procrustes, and yield high similarity scores even when task relevant variables cannot be linearly decoded from the synthetic datasets. Synthetic datasets optimized to maximize similarity scores initially learn the first principal component of the target dataset, but angular Procrustes captures higher variance dimensions much earlier than methods like linear regression and CKA. We show in both theory and simulations how these scores change when different principal components are perturbed. And finally, we jointly optimize multiple similarity scores to find their allowed ranges, and show that a high angular Procrustes similarity, for example, implies a high CKA score, but not the converse. | [
"['Nathan Cloos' 'Moufan Li' 'Markus Siegel' 'Scott L. Brincat'\n 'Earl K. Miller' 'Guangyu Robert Yang' 'Christopher J. Cueva']"
] |
null | null | 2407.07064 | null | null | http://arxiv.org/pdf/2407.07064v1 | 2024-07-09T17:38:03Z | 2024-07-09T17:38:03Z | Prompting Techniques for Secure Code Generation: A Systematic
Investigation | Large Language Models (LLMs) are gaining momentum in software development with prompt-driven programming enabling developers to create code from natural language (NL) instructions. However, studies have questioned their ability to produce secure code and, thereby, the quality of prompt-generated software. Alongside, various prompting techniques that carefully tailor prompts have emerged to elicit optimal responses from LLMs. Still, the interplay between such prompting strategies and secure code generation remains under-explored and calls for further investigations. OBJECTIVE: In this study, we investigate the impact of different prompting techniques on the security of code generated from NL instructions by LLMs. METHOD: First we perform a systematic literature review to identify the existing prompting techniques that can be used for code generation tasks. A subset of these techniques are evaluated on GPT-3, GPT-3.5, and GPT-4 models for secure code generation. For this, we used an existing dataset consisting of 150 NL security-relevant code-generation prompts. RESULTS: Our work (i) classifies potential prompting techniques for code generation (ii) adapts and evaluates a subset of the identified techniques for secure code generation tasks and (iii) observes a reduction in security weaknesses across the tested LLMs, especially after using an existing technique called Recursive Criticism and Improvement (RCI), contributing valuable insights to the ongoing discourse on LLM-generated code security. | [
"['Catherine Tony' 'Nicolás E. Díaz Ferreyra' 'Markus Mutas' 'Salem Dhiff'\n 'Riccardo Scandariato']"
] |
null | null | 2407.07066 | null | null | http://arxiv.org/pdf/2407.07066v2 | 2024-07-10T01:37:05Z | 2024-07-09T17:42:26Z | Explainable Hyperdimensional Computing for Balancing Privacy and
Transparency in Additive Manufacturing Monitoring | In-situ sensing, in conjunction with learning models, presents a unique opportunity to address persistent defect issues in Additive Manufacturing (AM) processes. However, this integration introduces significant data privacy concerns, such as data leakage, sensor data compromise, and model inversion attacks, revealing critical details about part design, material composition, and machine parameters. Differential Privacy (DP) models, which inject noise into data under mathematical guarantees, offer a nuanced balance between data utility and privacy by obscuring traces of sensing data. However, the introduction of noise into learning models, often functioning as black boxes, complicates the prediction of how specific noise levels impact model accuracy. This study introduces the Differential Privacy-HyperDimensional computing (DP-HD) framework, leveraging the explainability of the vector symbolic paradigm to predict the noise impact on the accuracy of in-situ monitoring, safeguarding sensitive data while maintaining operational efficiency. Experimental results on real-world high-speed melt pool data of AM for detecting overhang anomalies demonstrate that DP-HD achieves superior operational efficiency, prediction accuracy, and robust privacy protection, outperforming state-of-the-art Machine Learning (ML) models. For example, when implementing the same level of privacy protection (with a privacy budget set at 1), our model achieved an accuracy of 94.43%, surpassing the performance of traditional models such as ResNet50 (52.30%), GoogLeNet (23.85%), AlexNet (55.78%), DenseNet201 (69.13%), and EfficientNet B2 (40.81%). Notably, DP-HD maintains high performance under substantial noise additions designed to enhance privacy, unlike current models that suffer significant accuracy declines under high privacy constraints. | [
"['Fardin Jalil Piran' 'Prathyush P. Poduval' 'Hamza Errahmouni Barkam'\n 'Mohsen Imani' 'Farhad Imani']"
] |
null | null | 2407.07071 | null | null | http://arxiv.org/pdf/2407.07071v1 | 2024-07-09T17:44:34Z | 2024-07-09T17:44:34Z | Lookback Lens: Detecting and Mitigating Contextual Hallucinations in
Large Language Models Using Only Attention Maps | When asked to summarize articles or answer questions given a passage, large language models (LLMs) can hallucinate details and respond with unsubstantiated answers that are inaccurate with respect to the input context. This paper describes a simple approach for detecting such contextual hallucinations. We hypothesize that contextual hallucinations are related to the extent to which an LLM attends to information in the provided context versus its own generations. Based on this intuition, we propose a simple hallucination detection model whose input features are given by the ratio of attention weights on the context versus newly generated tokens (for each attention head). We find that a linear classifier based on these lookback ratio features is as effective as a richer detector that utilizes the entire hidden states of an LLM or a text-based entailment model. The lookback ratio-based detector -- Lookback Lens -- is found to transfer across tasks and even models, allowing a detector that is trained on a 7B model to be applied (without retraining) to a larger 13B model. We further apply this detector to mitigate contextual hallucinations, and find that a simple classifier-guided decoding approach is able to reduce the amount of hallucination, for example by 9.6% in the XSum summarization task. | [
"['Yung-Sung Chuang' 'Linlu Qiu' 'Cheng-Yu Hsieh' 'Ranjay Krishna'\n 'Yoon Kim' 'James Glass']"
] |
null | null | 2407.07082 | null | null | http://arxiv.org/pdf/2407.07082v1 | 2024-07-09T17:55:23Z | 2024-07-09T17:55:23Z | Can Learned Optimization Make Reinforcement Learning Less Difficult? | While reinforcement learning (RL) holds great potential for decision making in the real world, it suffers from a number of unique difficulties which often need specific consideration. In particular: it is highly non-stationary; suffers from high degrees of plasticity loss; and requires exploration to prevent premature convergence to local optima and maximize return. In this paper, we consider whether learned optimization can help overcome these problems. Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties. We show that our parameterization is flexible enough to enable meta-learning in diverse learning contexts, including the ability to use stochasticity for exploration. Our experiments demonstrate that when meta-trained on single and small sets of environments, OPEN outperforms or equals traditionally used optimizers. Furthermore, OPEN shows strong generalization across a distribution of environments and a range of agent architectures. | [
"['Alexander David Goldie' 'Chris Lu' 'Matthew Thomas Jackson'\n 'Shimon Whiteson' 'Jakob Nicolaus Foerster']"
] |
null | null | 2407.07084 | null | null | http://arxiv.org/pdf/2407.07084v1 | 2024-07-09T17:56:29Z | 2024-07-09T17:56:29Z | Stabilized Proximal-Point Methods for Federated Optimization | In developing efficient optimization algorithms, it is crucial to account for communication constraints -- a significant challenge in modern federated learning settings. The best-known communication complexity among non-accelerated algorithms is achieved by DANE, a distributed proximal-point algorithm that solves local subproblems in each iteration and that can exploit second-order similarity among individual functions. However, to achieve such communication efficiency, the accuracy requirement for solving the local subproblems is slightly sub-optimal. Inspired by the hybrid projection-proximal point method, in this work, we i) propose a novel distributed algorithm S-DANE. This method adopts a more stabilized prox-center in the proximal step compared with DANE, and matches its deterministic communication complexity. Moreover, the accuracy condition of the subproblem is milder, leading to enhanced local computation efficiency. Furthermore, it supports partial client participation and arbitrary stochastic local solvers, making it more attractive in practice. We further ii) accelerate S-DANE, and show that the resulting algorithm achieves the best-known communication complexity among all existing methods for distributed convex optimization, with the same improved local computation efficiency as S-DANE. | [
"['Xiaowen Jiang' 'Anton Rodomanov' 'Sebastian U. Stich']"
] |
null | null | 2407.07087 | null | null | http://arxiv.org/pdf/2407.07087v1 | 2024-07-09T17:58:18Z | 2024-07-09T17:58:18Z | CopyBench: Measuring Literal and Non-Literal Reproduction of
Copyright-Protected Text in Language Model Generation | Evaluating the degree of reproduction of copyright-protected content by language models (LMs) is of significant interest to the AI and legal communities. Although both literal and non-literal similarities are considered by courts when assessing the degree of reproduction, prior research has focused only on literal similarities. To bridge this gap, we introduce CopyBench, a benchmark designed to measure both literal and non-literal copying in LM generations. Using copyrighted fiction books as text sources, we provide automatic evaluation protocols to assess literal and non-literal copying, balanced against the model utility in terms of the ability to recall facts from the copyrighted works and generate fluent completions. We find that, although literal copying is relatively rare, two types of non-literal copying -- event copying and character copying -- occur even in models as small as 7B parameters. Larger models demonstrate significantly more copying, with literal copying rates increasing from 0.2% to 10.5% and non-literal copying from 2.3% to 6.9% when comparing Llama3-8B and 70B models, respectively. We further evaluate the effectiveness of current strategies for mitigating copying and show that (1) training-time alignment can reduce literal copying but may increase non-literal copying, and (2) current inference-time mitigation methods primarily reduce literal but not non-literal copying. | [
"['Tong Chen' 'Akari Asai' 'Niloofar Mireshghallah' 'Sewon Min'\n 'James Grimmelmann' 'Yejin Choi' 'Hannaneh Hajishirzi' 'Luke Zettlemoyer'\n 'Pang Wei Koh']"
] |
null | null | 2407.07089 | null | null | http://arxiv.org/pdf/2407.07089v1 | 2024-07-09T17:59:17Z | 2024-07-09T17:59:17Z | Fine-Tuning Linear Layers Only Is a Simple yet Effective Way for Task
Arithmetic | Task arithmetic has recently emerged as a cost-effective and scalable approach to edit pre-trained models directly in weight space, by adding the fine-tuned weights of different tasks. The performance has been further improved by a linear property which is illustrated by weight disentanglement. Yet, conventional linearization methods (e.g., NTK linearization) not only double the time and training cost but also have a disadvantage on single-task performance. We propose a simple yet effective and efficient method that only fine-tunes linear layers, which improves weight disentanglement and efficiency simultaneously. Specifically, our study reveals that only fine-tuning the linear layers in the attention modules makes the whole model occur in a linear regime, significantly improving weight disentanglement. To further understand how our method improves the disentanglement of task arithmetic, we present a comprehensive study of task arithmetic by differentiating the role of representation model and task-specific model. In particular, we find that the representation model plays an important role in improving weight disentanglement whereas the task-specific models such as the classification heads can degenerate the weight disentanglement performance. Overall, our work uncovers novel insights into the fundamental mechanisms of task arithmetic and offers a more reliable and effective approach to editing pre-trained models. | [
"['Ruochen Jin' 'Bojian Hou' 'Jiancong Xiao' 'Weijie Su' 'Li Shen']"
] |
null | null | 2407.07093 | null | null | http://arxiv.org/pdf/2407.07093v1 | 2024-07-09T17:59:48Z | 2024-07-09T17:59:48Z | FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive
Distillation | This work presents a Fully BInarized Large Language Model (FBI-LLM), demonstrating for the first time how to train a large-scale binary language model from scratch (not the partial binary or ternary LLM like BitNet b1.58) to match the performance of its full-precision counterparts (e.g., FP16 or BF16) in transformer-based LLMs. It achieves this by employing an autoregressive distillation (AD) loss with maintaining equivalent model dimensions (130M, 1.3B, 7B) and training data volume as regular LLM pretraining, while delivering competitive results in terms of perplexity and task-specific effectiveness. Intriguingly, by analyzing the training trajectory, we find that the pretrained weight is not necessary for training binarized LLMs from scratch. This research encourages a new computational framework and may facilitate the future design of specialized hardware tailored for fully 1-bit LLMs. We make all models, code, and training dataset fully accessible and transparent to support further research (Code: https://github.com/LiqunMa/FBI-LLM. Model: https://huggingface.co/LiqunMa/). | [
"['Liqun Ma' 'Mingjie Sun' 'Zhiqiang Shen']"
] |
null | null | 2407.07096 | null | null | http://arxiv.org/pdf/2407.07096v1 | 2024-06-06T15:32:37Z | 2024-06-06T15:32:37Z | Spectral Toolkit of Algorithms for Graphs: Technical Report (2) | Spectral Toolkit of Algorithms for Graphs (STAG) is an open-source library for efficient graph algorithms. This technical report presents the newly implemented component on locality sensitive hashing, kernel density estimation, and fast spectral clustering. The report includes a user's guide to the newly implemented algorithms, experiments and demonstrations of the new functionality, and several technical considerations behind our development. | [
"['Peter Macgregor' 'He Sun']"
] |
null | null | 2407.07099 | null | null | http://arxiv.org/pdf/2407.07099v1 | 2024-06-18T07:46:13Z | 2024-06-18T07:46:13Z | Nash CoT: Multi-Path Inference with Preference Equilibrium | Chain-of-thought (CoT) prompting has emerged as a powerful technique for enhancing the reasoning capabilities of Large Language Models (LLMs) on complex problems. Among CoT-related studies, self-consistency (Multi-path inference with answer filtering through voting) involves generating multiple reasoning paths using the CoT framework and then selecting the most frequently produced outputs standing out as a concise yet competitive approach. While self-consistency has indeed led to the improvements in LLM inference, the use of multi-path inference also escalates deployment costs. Therefore, maintaining the performance benefits of self-consistency inherited from multi-path inference while reducing the inference costs holds significant value. In this research, we conceptualize language decoding as a preference consensus game, constructing a bi-player gaming system within each local path, and introduce Nash Chain-of-Thought (Nash CoT). Specifically, for a given question, we leverage LLM to autonomously select the contextually relevant template and generate outputs guided by this template, aiming to reach Nash Equilibrium alongside normal generation in each path. This approach allows us to achieve comparable or improved performance compared to self-consistency while using fewer inference paths on various inference tasks, including Arabic reasoning, Commonsense Question answering, and Symbolic inference. | [
"['Ziqi Zhang' 'Cunxiang Wang' 'Xiong Xiao' 'Yue Zhang' 'Donglin Wang']"
] |
null | null | 2407.07110 | null | null | http://arxiv.org/pdf/2407.07110v1 | 2024-06-26T02:24:13Z | 2024-06-26T02:24:13Z | Foundation Models for Electrocardiograms | Foundation models, enhanced by self-supervised learning (SSL) techniques, represent a cutting-edge frontier in biomedical signal analysis, particularly for electrocardiograms (ECGs), crucial for cardiac health monitoring and diagnosis. This study conducts a comprehensive analysis of foundation models for ECGs by employing and refining innovative SSL methodologies - namely, generative and contrastive learning - on a vast dataset of over 1.1 million ECG samples. By customizing these methods to align with the intricate characteristics of ECG signals, our research has successfully developed foundation models that significantly elevate the precision and reliability of cardiac diagnostics. These models are adept at representing the complex, subtle nuances of ECG data, thus markedly enhancing diagnostic capabilities. The results underscore the substantial potential of SSL-enhanced foundation models in clinical settings and pave the way for extensive future investigations into their scalable applications across a broader spectrum of medical diagnostics. This work sets a benchmark in the ECG field, demonstrating the profound impact of tailored, data-driven model training on the efficacy and accuracy of medical diagnostics. | [
"['Junho Song' 'Jong-Hwan Jang' 'Byeong Tak Lee' 'DongGyun Hong'\n 'Joon-myoung Kwon' 'Yong-Yeon Jo']"
] |
null | null | 2407.07111 | null | null | http://arxiv.org/pdf/2407.07111v1 | 2024-06-26T04:58:39Z | 2024-06-26T04:58:39Z | Diffusion Model-Based Video Editing: A Survey | The rapid development of diffusion models (DMs) has significantly advanced image and video applications, making "what you want is what you see" a reality. Among these, video editing has gained substantial attention and seen a swift rise in research activity, necessitating a comprehensive and systematic review of the existing literature. This paper reviews diffusion model-based video editing techniques, including theoretical foundations and practical applications. We begin by overviewing the mathematical formulation and image domain's key methods. Subsequently, we categorize video editing approaches by the inherent connections of their core technologies, depicting evolutionary trajectory. This paper also dives into novel applications, including point-based editing and pose-guided human video editing. Additionally, we present a comprehensive comparison using our newly introduced V2VBench. Building on the progress achieved to date, the paper concludes with ongoing challenges and potential directions for future research. | [
"['Wenhao Sun' 'Rong-Cheng Tu' 'Jingyi Liao' 'Dacheng Tao']"
] |