Schema:
  categories: string
  doi: string
  id: string
  year: float64
  venue: string
  link: string
  updated: string
  published: string
  title: string
  abstract: string
  authors: sequence

The categories, doi, year, and venue fields are null for every record below, so each record lists only id, link, updated, published, title, abstract, and authors.

id: 2407.07896
link: http://arxiv.org/pdf/2407.07896v1
updated: 2024-07-10T17:59:55Z
published: 2024-07-10T17:59:55Z
Pentagonal Photonic Crystal Mirrors: Scalable Lightsails with Enhanced Acceleration via Neural Topology Optimization
The Starshot Breakthrough Initiative aims to send one-gram microchip probes to Alpha Centauri within 20 years, using gram-scale lightsails propelled by laser-based radiation pressure to velocities nearing a fifth of light speed. This mission requires lightsail materials that challenge the fundamentals of nanotechnology, demanding innovations in optics, materials science, and structural engineering. Unlike the microchip payload, which must be minimized in every dimension, such lightsails need meter-scale dimensions with nanoscale thickness and billions of nanoscale holes to enhance reflectivity and reduce mass. Our study employs neural topology optimization, revealing a novel pentagonal lattice-based photonic crystal (PhC) reflector. The optimized designs shorten acceleration times, thereby lowering launch costs significantly. Crucially, these designs also enable lightsail material fabrication with orders-of-magnitude reduction in costs. We have fabricated a 60 x 60 mm$^2$, 200 nm-thick, single-layer reflector perforated with over a billion nanoscale features, the highest-aspect-ratio nanophotonic element to date. We achieve this with a nearly 9,000-fold cost reduction per m$^2$. Starshot lightsails will have several stringent requirements but will ultimately be driven by the cost to build at scale. Here we highlight challenges and possible solutions in developing lightsail materials, showcasing the potential of scaling nanophotonics for cost-effective next-generation space exploration.
[ "['L. Norder' 'S. Yin' 'M. J. de Jong' 'F. Stallone' 'H. Aydogmus'\n 'P. M. Sberna' 'M. A. Bessa' 'R. A. Norte']" ]
null
null
2407.07912
null
null
http://arxiv.org/pdf/2407.07912v1
2024-07-03T13:35:23Z
2024-07-03T13:35:23Z
ITEM: Improving Training and Evaluation of Message-Passing based GNNs for top-k recommendation
Graph Neural Networks (GNNs), especially message-passing-based models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models due to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g., NDCG@k and Recall@k, they remain largely trained with proxy losses, e.g., the BPR loss. In this work, we explore the use of ranking loss functions to directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to facilitate end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored for ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation tasks with an inductive user-centric protocol, providing a more accurate reflection of real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures, while also exhibiting faster training, demonstrating the potential of ranking loss functions for improving GNN training in collaborative filtering tasks.
[ "['Yannis Karmim' 'Elias Ramzi' \"Raphaël Fournier-S'niehotta\"\n 'Nicolas Thome']" ]

id: 2407.07916
link: http://arxiv.org/pdf/2407.07916v1
updated: 2024-07-05T20:35:57Z
published: 2024-07-05T20:35:57Z
Benchmarking GNNs Using Lightning Network Data
The Bitcoin Lightning Network is a layer 2 protocol designed to facilitate fast and inexpensive Bitcoin transactions. It operates by establishing channels between users, where Bitcoin is locked and transactions are conducted off-chain until the channels are closed, with only the initial and final transactions recorded on the blockchain. Routing transactions through intermediary nodes is crucial for users without direct channels, allowing these routing nodes to collect fees for their services. Nodes announce their channels to the network, forming a graph with channels as edges. In this paper, we analyze the graph structure of the Lightning Network and investigate the statistical relationships between node properties using machine learning, particularly Graph Neural Networks (GNNs). We formulate a series of tasks to explore these relationships and provide benchmarks for GNN architectures, demonstrating how topological and neighbor information enhances performance. Our evaluation of several models reveals the effectiveness of GNNs in these tasks and highlights the insights gained from their application.
[ "['Rainer Feichtinger' 'Florian Grötschla' 'Lioba Heimbach'\n 'Roger Wattenhofer']" ]
null
null
2407.07917
null
null
http://arxiv.org/pdf/2407.07917v1
2024-07-05T22:03:13Z
2024-07-05T22:03:13Z
Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape
Despite the promise of Federated Learning (FL) for privacy-preserving model training on distributed data, it remains susceptible to backdoor attacks. These attacks manipulate models by embedding triggers (specific input patterns) in the training data, forcing misclassification as predefined classes during deployment. Traditional single-trigger attacks and recent work on cooperative multiple-trigger attacks, where clients collaborate, highlight limitations in attack realism due to coordination requirements. We investigate a more alarming scenario: non-cooperative multiple-trigger attacks. Here, independent adversaries introduce distinct triggers targeting unique classes. These parallel attacks exploit FL's decentralized nature, making detection difficult. Our experiments demonstrate the alarming vulnerability of FL to such attacks, where individual backdoors can be successfully learned without impacting the main task. This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape; while our focus is on empirical analysis, we believe it can guide backdoor research toward more realistic settings. The code is available at https://anonymous.4open.science/r/nba-980F/.
[ "['Tuan Nguyen' 'Dung Thuy Nguyen' 'Khoa D Doan' 'Kok-Seng Wong']" ]

id: 2407.07918
link: http://arxiv.org/pdf/2407.07918v1
updated: 2024-07-07T12:41:40Z
published: 2024-07-07T12:41:40Z
Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach
Machine learning has been successfully applied in developing malware detection systems, with a primary focus on accuracy and increasing attention to reducing computational overhead and improving model interpretability. However, an important question remains underexplored: how well can machine learning-based models detect entirely new forms of malware not present in the training data? In this study, we present a machine learning-based system for detecting obfuscated malware that is not only highly accurate, lightweight, and interpretable, but also capable of successfully adapting to new types of malware attacks. Our system is capable of detecting 15 malware subtypes despite being exclusively trained on one malware subtype, namely the Transponder from the Spyware family. This system was built after training 15 distinct random forest-based models, each on a different malware subtype from the CIC-MalMem-2022 dataset. These models were evaluated against the entire range of malware subtypes, including all unseen malware subtypes. To maintain the system's streamlined nature, training was confined to the top five most important features, which also enhanced interpretability. The Transponder-focused model exhibited high accuracy, exceeding 99.8%, with an average processing speed of 5.7 microseconds per file. We also illustrate how the Shapley additive explanations technique can facilitate the interpretation of the model predictions. Our research contributes to advancing malware detection methodologies, demonstrating the feasibility of detecting obfuscated malware by exclusively training a model on a single or a few carefully selected malware subtypes and applying it to detect unseen subtypes.
[ "['Oladipo A. Madamidola' 'Felix Ngobigha' 'Adnane Ez-zizi']" ]

id: 2407.07921
link: http://arxiv.org/pdf/2407.07921v1
updated: 2024-07-08T04:14:19Z
published: 2024-07-08T04:14:19Z
A Trustworthy AIoT-enabled Localization System via Federated Learning and Blockchain
There is a significant demand for indoor localization technology in smart buildings, and the most promising solution in this field is using RF sensors and fingerprinting-based methods that employ machine learning models trained on crowd-sourced user data gathered from IoT devices. However, this raises security and privacy issues in practice. Some researchers propose to use federated learning to partially overcome privacy problems, but security concerns remain, e.g., single-point failure and malicious attacks. In this paper, we propose a framework named DFLoc to achieve precise 3D localization while addressing these two security concerns. In particular, we design a specialized blockchain to decentralize the framework by distributing tasks such as model distribution and aggregation, which in most previous works are handled by a central server, to all clients, thereby addressing the issue of single-point failure and enabling a reliable and accurate indoor localization system. Moreover, we introduce an updated model verification mechanism within the blockchain to alleviate the concern of malicious node attacks. Experimental results substantiate the framework's capacity to deliver accurate 3D location predictions and its superior resistance to the impacts of single-point failure and malicious attacks when compared to conventional centralized federated learning systems.
[ "['Junfei Wang' 'He Huang' 'Jingze Feng' 'Steven Wong' 'Lihua Xie'\n 'Jianfei Yang']" ]
null
null
2407.07922
null
null
http://arxiv.org/pdf/2407.07922v1
2024-07-08T11:51:15Z
2024-07-08T11:51:15Z
Vulnerability Detection in Smart Contracts: A Comprehensive Survey
In the growing field of blockchain technology, smart contracts exist as transformative digital agreements that execute transactions autonomously in decentralised networks. However, these contracts face challenges in the form of security vulnerabilities, posing significant financial and operational risks. While traditional methods to detect and mitigate vulnerabilities in smart contracts are limited due to a lack of comprehensiveness and effectiveness, integrating advanced machine learning technologies presents an attractive approach to increasing effective vulnerability countermeasures. We endeavour to fill an important gap in the existing literature by conducting a rigorous systematic review, exploring the intersection between machine learning and smart contracts. Specifically, the study examines the potential of machine learning techniques to improve the detection and mitigation of vulnerabilities in smart contracts. We analysed 88 articles published between 2018 and 2023 from the following databases: IEEE, ACM, ScienceDirect, Scopus, and Google Scholar. The findings reveal that classical machine learning techniques, including KNN, RF, DT, XG-Boost, and SVM, outperform static tools in vulnerability detection. Moreover, multi-model approaches integrating deep learning and classical machine learning show significant improvements in precision and recall, while hybrid models employing various techniques achieve near-perfect performance in vulnerability detection accuracy. By integrating state-of-the-art solutions, this work synthesises current methods, thoroughly investigates research gaps, and suggests directions for future studies. The insights gathered from this study are intended to serve as a seminal reference for academics, industry experts, and bodies interested in leveraging machine learning to enhance smart contract security.
[ "['Christopher De Baets' 'Basem Suleiman' 'Armin Chitizadeh' 'Imran Razzak']" ]
null
null
2407.07924
null
null
http://arxiv.org/pdf/2407.07924v1
2024-07-09T07:11:10Z
2024-07-09T07:11:10Z
Solving General Natural-Language-Description Optimization Problems with Large Language Models
Optimization problems seek the best solution to an objective under a set of constraints and have been widely investigated in real-world applications. Modeling and solving optimization problems in a specific domain typically require a combination of domain knowledge, mathematical skills, and programming ability, making it difficult for general users and even domain professionals. In this paper, we propose a novel framework called OptLLM that augments LLMs with external solvers. Specifically, OptLLM accepts user queries in natural language, converts them into mathematical formulations and programming code, and calls the solvers to calculate the results for decision-making. In addition, OptLLM supports multi-round dialogues to gradually refine the modeling and solving of optimization problems. To illustrate the effectiveness of OptLLM, we provide tutorials on three typical optimization applications and conduct experiments on both prompt-based GPT models and a fine-tuned Qwen model using a large-scale self-developed optimization dataset. Experimental results show that OptLLM works with various LLMs and that the fine-tuned model achieves an accuracy boost compared to the prompt-based models. Some features of the OptLLM framework have been available for trial since June 2023 (https://opt.alibabacloud.com/chat or https://opt.aliyun.com/chat).
[ "['Jihai Zhang' 'Wei Wang' 'Siyan Guo' 'Li Wang' 'Fangquan Lin'\n 'Cheng Yang' 'Wotao Yin']" ]
null
null
2407.07926
null
null
http://arxiv.org/pdf/2407.07926v1
2024-07-09T14:48:43Z
2024-07-09T14:48:43Z
Synthetic Data: Revisiting the Privacy-Utility Trade-off
Synthetic data has been considered a better privacy-preserving alternative to traditionally sanitized data across various applications. However, a recent article challenges this notion, stating that synthetic data does not provide a better trade-off between privacy and utility than traditional anonymization techniques, and that it leads to unpredictable utility loss and highly unpredictable privacy gain. The article also claims to have identified a breach in the differential privacy guarantees provided by PATEGAN and PrivBayes. When a study claims to refute or invalidate prior findings, it is crucial to verify and validate the study. In our work, we analyzed the implementation of the privacy game described in the article and found that it operated in a highly specialized and constrained environment, which limits the applicability of its findings to general cases. Our exploration also revealed that the game did not satisfy a crucial precondition concerning data distributions, which contributed to the perceived violation of the differential privacy guarantees offered by PATEGAN and PrivBayes. We also conducted a privacy-utility trade-off analysis in a more general and unconstrained environment. Our experimentation demonstrated that synthetic data achieves a more favorable privacy-utility trade-off compared to the provided implementation of k-anonymization, thereby reaffirming earlier conclusions.
[ "['Fatima Jahan Sarmin' 'Atiquer Rahman Sarkar' 'Yang Wang'\n 'Noman Mohammed']" ]
null
null
2407.07930
null
null
http://arxiv.org/pdf/2407.07930v1
2024-07-10T07:22:15Z
2024-07-10T07:22:15Z
Token-Mol 1.0: Tokenized drug design with large language model
Significant interest has recently arisen in leveraging sequence-based large language models (LLMs) for drug design. However, most current applications of LLMs in drug discovery lack the ability to comprehend three-dimensional (3D) structures, thereby limiting their effectiveness in tasks that explicitly involve molecular conformations. In this study, we introduce Token-Mol, a token-only 3D drug design model. This model encodes all molecular information, including 2D and 3D structures as well as molecular property data, into tokens, which transforms classification and regression tasks in drug discovery into probabilistic prediction problems, thereby enabling learning through a unified paradigm. Token-Mol is built on the transformer decoder architecture and trained using random causal masking techniques. Additionally, we propose the Gaussian cross-entropy (GCE) loss function to overcome the challenges in regression tasks, significantly enhancing the capacity of LLMs to learn continuous numerical values. Through a combination of fine-tuning and reinforcement learning (RL), Token-Mol achieves performance comparable to or surpassing existing task-specific methods across various downstream tasks, including pocket-based molecular generation, conformation generation, and molecular property prediction. Compared to existing molecular pre-trained models, Token-Mol exhibits superior proficiency in handling a wider range of downstream tasks essential for drug design. Notably, our approach improves regression task accuracy by approximately 30% compared to similar token-only methods. Token-Mol overcomes the precision limitations of token-only models and has the potential to integrate seamlessly with general models such as ChatGPT, paving the way for the development of a universal artificial intelligence drug design model that facilitates rapid and high-quality drug design by experts.
[ "['Jike Wang' 'Rui Qin' 'Mingyang Wang' 'Meijing Fang' 'Yangyang Zhang'\n 'Yuchen Zhu' 'Qun Su' 'Qiaolin Gou' 'Chao Shen' 'Odin Zhang'\n 'Zhenxing Wu' 'Dejun Jiang' 'Xujun Zhang' 'Huifeng Zhao' 'Xiaozhe Wan'\n 'Zhourui Wu' 'Liwei Liu' 'Yu Kang' 'Chang-Yu Hsieh' 'Tingjun Hou']" ]

id: 2407.07931
link: http://arxiv.org/pdf/2407.07931v1
updated: 2024-07-10T07:22:30Z
published: 2024-07-10T07:22:30Z
Search, Examine and Early-Termination: Fake News Detection with Annotation-Free Evidences
Pioneering research recognizes evidence as a crucial element in fake news detection, apart from patterns. Existing evidence-aware methods either require laborious pre-processing procedures to ensure relevant and high-quality evidence data, or incorporate the entire spectrum of available evidence in all news cases, regardless of the quality and quantity of the retrieved data. In this paper, we propose an approach named SEE that retrieves useful information from web-searched, annotation-free evidence with an early-termination mechanism. The proposed SEE is constructed from three main phases: Searching online materials using the news as a query and directly using their titles as evidence without any annotating or filtering procedure, sequentially Examining the news alongside each piece of evidence via attention mechanisms to produce new hidden states with retrieved information, and allowing Early-termination within the examining loop by assessing whether there is adequate confidence for producing a correct prediction. We have conducted extensive experiments on datasets with unprocessed evidence, i.e., Weibo21 and GossipCop, and with pre-processed evidence, namely Snopes and PolitiFact. The experimental results demonstrate that the proposed method outperforms state-of-the-art approaches.
[ "['Yuzhou Yang' 'Yangming Zhou' 'Qichao Ying' 'Zhenxing Qian'\n 'Xinpeng Zhang']" ]
null
null
2407.07933
null
null
http://arxiv.org/pdf/2407.07933v2
2024-07-12T15:15:58Z
2024-07-10T12:58:30Z
Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments
We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. To address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by expert knowledge or by assuming that the causal model is a one-directional MR model. As such, in this paper, we first theoretically investigate the identification of the bi-directional MR from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and outcome). Moreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest. We theoretically demonstrate the correctness of the proposed algorithm. Experimental results show the effectiveness of our method for estimating causal effects in bi-directional MR.
[ "['Feng Xie' 'Zhen Yao' 'Lin Xie' 'Yan Zeng' 'Zhi Geng']" ]
null
null
2407.07972
null
null
http://arxiv.org/pdf/2407.07972v1
2024-07-10T18:11:40Z
2024-07-10T18:11:40Z
Deconstructing What Makes a Good Optimizer for Language Models
Training language models becomes increasingly expensive with scale, prompting numerous attempts to improve optimization efficiency. Despite these efforts, the Adam optimizer remains the most widely used, due to a prevailing view that it is the most effective approach. We aim to compare several optimization algorithms, including SGD, Adafactor, Adam, and Lion, in the context of autoregressive language modeling across a range of model sizes, hyperparameters, and architecture variants. Our findings indicate that, except for SGD, these algorithms all perform comparably both in their optimal performance and also in terms of how they fare across a wide range of hyperparameter choices. Our results suggest to practitioners that the choice of optimizer can be guided by practical considerations like memory constraints and ease of implementation, as no single algorithm emerged as a clear winner in terms of performance or stability to hyperparameter misspecification. Given our findings, we further dissect these approaches, examining two simplified versions of Adam: a) signed momentum (Signum) which we see recovers both the performance and hyperparameter stability of Adam and b) Adalayer, a layerwise variant of Adam which we introduce to study Adam's preconditioning. Examining Adalayer leads us to the conclusion that the largest impact of Adam's preconditioning is restricted to the last layer and LayerNorm parameters, and, perhaps surprisingly, the remaining layers can be trained with SGD.
[ "['Rosie Zhao' 'Depen Morwani' 'David Brandfonbrener' 'Nikhil Vyas'\n 'Sham Kakade']" ]

id: 2407.07982
link: http://arxiv.org/pdf/2407.07982v1
updated: 2024-07-10T18:29:22Z
published: 2024-07-10T18:29:22Z
Automating Weak Label Generation for Data Programming with Clinicians in the Loop
Large Deep Neural Networks (DNNs) are often data hungry and need high-quality labeled data in copious amounts for learning to converge. This is a challenge in the field of medicine since high quality labeled data is often scarce. Data programming has been the ray of hope in this regard, since it allows us to label unlabeled data using multiple weak labeling functions. Such functions are often supplied by a domain expert. Data-programming can combine multiple weak labeling functions and suggest labels better than simple majority voting over the different functions. However, it is not straightforward to express such weak labeling functions, especially in high-dimensional settings such as images and time-series data. What we propose in this paper is a way to bypass this issue, using distance functions. In high-dimensional spaces, it is easier to find meaningful distance metrics which can generalize across different labeling tasks. We propose an algorithm that queries an expert for labels of a few representative samples of the dataset. These samples are carefully chosen by the algorithm to capture the distribution of the dataset. The labels assigned by the expert on the representative subset induce a labeling on the full dataset, thereby generating weak labels to be used in the data programming pipeline. In our medical time series case study, labeling a subset of 50 to 130 out of 3,265 samples showed 17-28% improvement in accuracy and 13-28% improvement in F1 over the baseline using clinician-defined labeling functions. In our medical image case study, labeling a subset of about 50 to 120 images from 6,293 unlabeled medical images using our approach showed significant improvement over the baseline method, Snuba, with an increase of approximately 5-15% in accuracy and 12-19% in F1 score.
[ "['Jean Park' 'Sydney Pugh' 'Kaustubh Sridhar' 'Mengyu Liu' 'Navish Yarna'\n 'Ramneet Kaur' 'Souradeep Dutta' 'Elena Bernardis' 'Oleg Sokolsky'\n 'Insup Lee']" ]

id: 2407.07997
link: http://arxiv.org/pdf/2407.07997v1
updated: 2024-07-10T19:02:11Z
published: 2024-07-10T19:02:11Z
ICD Codes are Insufficient to Create Datasets for Machine Learning: An Evaluation Using All of Us Data for Coccidioidomycosis and Myocardial Infarction
In medicine, machine learning (ML) datasets are often built using the International Classification of Diseases (ICD) codes. As new models are being developed, there is a need for larger datasets. However, ICD codes are intended for billing. We aim to determine how suitable ICD codes are for creating datasets to train ML models. We focused on a rare and common disease using the All of Us database. First, we compared the patient cohort created using ICD codes for Valley fever (coccidioidomycosis, CM) with that identified via serological confirmation. Second, we compared two similarly created patient cohorts for myocardial infarction (MI) patients. We identified significant discrepancies between these two groups, and the patient overlap was small. The CM cohort had 811 patients in the ICD-10 group, 619 patients in the positive-serology group, and 24 with both. The MI cohort had 14,875 patients in the ICD-10 group, 23,598 in the MI laboratory-confirmed group, and 6,531 in both. Demographics, rates of disease symptoms, and other clinical data varied across our case study cohorts.
[ "['Abigail E. Whitlock' 'Gondy Leroy' 'Fariba M. Donovan'\n 'John N. Galgiani']" ]

id: 2407.07998
link: http://arxiv.org/pdf/2407.07998v1
updated: 2024-07-10T19:02:19Z
published: 2024-07-10T19:02:19Z
What's the score? Automated Denoising Score Matching for Nonlinear Diffusions
Reversing a diffusion process by learning its score is at the heart of diffusion-based generative modeling and of methods for estimating properties of scientific systems. The diffusion processes that are tractable center on linear processes with a Gaussian stationary distribution. This limits the kinds of models that can be built to those that target a Gaussian prior, or more generally limits the kinds of problems that can be generically solved to those that have conditionally linear score functions. In this work, we introduce a family of tractable denoising score matching objectives, called local-DSM, built using local increments of the diffusion process. We show how local-DSM, melded with Taylor expansions, enables automated training and score estimation with nonlinear diffusion processes. To demonstrate these ideas, we use automated-DSM to train generative models using non-Gaussian priors on challenging low-dimensional distributions and the CIFAR10 image dataset. Additionally, we use automated-DSM to learn the scores for nonlinear processes studied in statistical physics.
[ "['Raghav Singhal' 'Mark Goldstein' 'Rajesh Ranganath']" ]
null
null
2407.08003
null
null
http://arxiv.org/pdf/2407.08003v1
2024-07-10T19:17:23Z
2024-07-10T19:17:23Z
Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data
Amyotrophic Lateral Sclerosis (ALS) is characterized as a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options in the realm of medical interventions and therapies. The disease showcases a diverse range of onset patterns and progression trajectories, emphasizing the critical importance of early detection of functional decline to enable tailored care strategies and timely therapeutic interventions. The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app. This data is used to construct various machine learning models specifically designed to forecast the advancement of the ALS Functional Rating Scale-Revised (ALSFRS-R) score, leveraging the dataset provided by the organizers. In our analysis, multiple predictive models were evaluated to determine their efficacy in handling ALS sensor data. The temporal aspect of the sensor data was compressed and amalgamated using statistical methods, thereby augmenting the interpretability and applicability of the gathered information for predictive modeling objectives. The models that demonstrated optimal performance were a naive baseline and ElasticNet regression. The naive model achieved a Mean Absolute Error (MAE) of 0.20 and a Root Mean Square Error (RMSE) of 0.49, slightly outperforming the ElasticNet model, which recorded an MAE of 0.22 and an RMSE of 0.50. Our comparative analysis suggests that while the naive approach yielded marginally better predictive accuracy, the ElasticNet model provides a robust framework for understanding feature contributions.
[ "['Ritesh Mehta' 'Aleksandar Pramov' 'Shashank Verma']" ]

id: 2407.08010
link: http://arxiv.org/pdf/2407.08010v1
updated: 2024-07-10T19:35:44Z
published: 2024-07-10T19:35:44Z
A New Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction
This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO) for multi-step time series prediction. Differing from the traditional six-layer IT2FNN, a nine-layer network is developed to improve prediction accuracy, uncertainty handling, and model interpretability. First, a new co-antecedent layer and a modified consequent layer are devised to improve the interpretability of the fuzzy model for multi-step predictions. Second, a new transformation layer is designed to address the potential issues of vanishing rule firing strength caused by high-dimensional inputs. Third, a new link layer is proposed to build temporal connections between multi-step predictions. Furthermore, a two-stage self-organizing mechanism is developed to automatically generate the fuzzy rules, in which the first stage is used to create the rule base from scratch and perform the initial optimization, while the second stage fine-tunes all network parameters. Finally, various simulations are carried out on chaotic and microgrid time series prediction problems, demonstrating the superiority of our approach in terms of prediction accuracy, uncertainty handling, and model interpretability.
[ "['Fulong Yao' 'Wanqing Zhao' 'Matthew Forshaw' 'Yang Song']" ]
null
null
2407.08022
null
null
http://arxiv.org/pdf/2407.08022v1
2024-07-10T20:00:22Z
2024-07-10T20:00:22Z
Deep Reinforcement Learning for Sequential Combinatorial Auctions
Revenue-optimal auction design is a challenging problem with significant theoretical and practical implications. Sequential auction mechanisms, known for their simplicity and strong strategyproofness guarantees, are often limited by theoretical results that are largely existential, except for certain restrictive settings. Although traditional reinforcement learning methods such as Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) are applicable in this domain, they struggle with computational demands and convergence issues when dealing with large and continuous action spaces. In light of this, and recognizing that transitions can be modeled differentiably in our settings, we propose a new reinforcement learning framework tailored for sequential combinatorial auctions that leverages first-order gradients. Our extensive evaluations show that our approach achieves significant improvement in revenue over both analytical baselines and standard reinforcement learning algorithms. Furthermore, we scale our approach to scenarios involving up to 50 agents and 50 items, demonstrating its applicability in complex, real-world auction settings. As such, this work advances the computational tools available for auction design and contributes to bridging the gap between theoretical results and practical implementations in sequential auction design.
[ "['Sai Srivatsa Ravindranath' 'Zhe Feng' 'Di Wang' 'Manzil Zaheer'\n 'Aranyak Mehta' 'David C. Parkes']" ]
null
null
2407.08029
null
null
http://arxiv.org/pdf/2407.08029v1
2024-07-10T20:11:51Z
2024-07-10T20:11:51Z
A Critical Review of Causal Reasoning Benchmarks for Large Language Models
Numerous benchmarks aim to evaluate the capabilities of Large Language Models (LLMs) for causal inference and reasoning. However, many of them can likely be solved through the retrieval of domain knowledge, questioning whether they achieve their purpose. In this review, we present a comprehensive overview of LLM benchmarks for causality. We highlight how recent benchmarks move towards a more thorough definition of causal reasoning by incorporating interventional or counterfactual reasoning. We derive a set of criteria that a useful benchmark or set of benchmarks should aim to satisfy. We hope this work will pave the way towards a general framework for the assessment of causal understanding in LLMs and the design of novel benchmarks.
[ "['Linying Yang' 'Vik Shirvaikar' 'Oscar Clivio' 'Fabian Falck']" ]
null
null
2407.08044
null
null
http://arxiv.org/pdf/2407.08044v1
2024-07-10T20:52:18Z
2024-07-10T20:52:18Z
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization
Low-Rank Adaptation (LoRA), as a representative Parameter-Efficient Fine-Tuning (PEFT) method, significantly enhances training efficiency by updating only a small portion of the weights in Large Language Models (LLMs). Recently, weight-only quantization techniques have also been applied to LoRA methods to reduce the memory footprint of fine-tuning. However, applying weight-activation quantization to the LoRA pipeline is under-explored, and we observe substantial performance degradation primarily due to the presence of activation outliers. In this work, we propose RoLoRA, the first LoRA-based scheme for effective weight-activation quantization. RoLoRA utilizes rotation for outlier elimination and proposes rotation-aware fine-tuning to preserve the outlier-free characteristics in rotated LLMs. Experimental results show RoLoRA consistently improves low-bit LoRA convergence and post-training quantization robustness in weight-activation settings. We evaluate RoLoRA across LLaMA2-7B/13B and LLaMA3-8B models, achieving up to 29.5% absolute accuracy gain for 4-bit weight-activation quantized LLaMA2-13B on commonsense reasoning tasks compared to the LoRA baseline. We further demonstrate its effectiveness on Large Multimodal Models (LLaVA-1.5-7B). Codes are available at https://github.com/HuangOwen/RoLoRA
[ "['Xijie Huang' 'Zechun Liu' 'Shih-Yang Liu' 'Kwang-Ting Cheng']" ]

id: 2407.08047
link: http://arxiv.org/pdf/2407.08047v2
updated: 2024-07-15T02:31:15Z
published: 2024-07-10T20:58:53Z
Spatial-Temporal Attention Model for Traffic State Estimation with Sparse Internet of Vehicles
The growing number of connected vehicles offers an opportunity to leverage internet of vehicles (IoV) data for traffic state estimation (TSE) which plays a crucial role in intelligent transportation systems (ITS). By utilizing only a portion of IoV data instead of the entire dataset, the significant overheads associated with collecting and processing large amounts of data can be avoided. In this paper, we introduce a novel framework that utilizes sparse IoV data to achieve cost-effective TSE. Particularly, we propose a novel spatial-temporal attention model called the convolutional retentive network (CRNet) to improve the TSE accuracy by mining spatial-temporal traffic state correlations. The model employs the convolutional neural network (CNN) for spatial correlation aggregation and the retentive network (RetNet) based on the attention mechanism to extract temporal correlations. Extensive simulations on a real-world IoV dataset validate the advantage of the proposed TSE approach in achieving accurate TSE using sparse IoV data, demonstrating its cost effectiveness and practicality for real-world applications.
[ "['Jianzhe Xue' 'Dongcheng Yuan' 'Yu Sun' 'Tianqi Zhang' 'Wenchao Xu'\n 'Haibo Zhou' 'Xuemin' 'Shen']" ]
null
null
2407.08056
null
null
http://arxiv.org/pdf/2407.08056v1
2024-07-10T21:25:51Z
2024-07-10T21:25:51Z
Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences
Dealing with multi-task trade-offs during inference can be addressed via Pareto Front Learning (PFL) methods that parameterize the Pareto Front with a single model, contrary to traditional Multi-Task Learning (MTL) approaches that optimize for a single trade-off which has to be decided prior to training. However, recent PFL methodologies suffer from limited scalability, slow convergence and excessive memory requirements compared to MTL approaches while exhibiting inconsistent mappings from preference space to objective space. In this paper, we introduce PaLoRA, a novel parameter-efficient method that augments the original model with task-specific low-rank adapters and continuously parameterizes the Pareto Front in their convex hull. Our approach dedicates the original model and the adapters towards learning general and task-specific features, respectively. Additionally, we propose a deterministic sampling schedule of preference vectors that reinforces this division of labor, enabling faster convergence and scalability to real world networks. Our experimental results show that PaLoRA outperforms MTL and PFL baselines across various datasets, scales to large networks and provides a continuous parameterization of the Pareto Front, reducing the memory overhead $23.8-31.7$ times compared with competing PFL baselines in scene understanding benchmarks.
[ "['Nikolaos Dimitriadis' 'Pascal Frossard' 'Francois Fleuret']" ]

id: 2407.08064
link: http://arxiv.org/pdf/2407.08064v1
updated: 2024-07-10T21:54:12Z
published: 2024-07-10T21:54:12Z
TinyGraph: Joint Feature and Node Condensation for Graph Neural Networks
Training graph neural networks (GNNs) on large-scale graphs can be challenging due to the high computational expense caused by the massive number of nodes and high-dimensional nodal features. Existing graph condensation studies tackle this problem only by reducing the number of nodes in the graph. However, the resulting condensed graph data can still be cumbersome. Specifically, although the nodes of the Citeseer dataset are reduced to 0.9% (30 nodes) in training, the number of features is 3,703, severely exceeding the training sample magnitude. Faced with this challenge, we study the problem of joint condensation for both features and nodes in large-scale graphs. This task is challenging mainly due to 1) the intertwined nature of the node features and the graph structure calls for the feature condensation solver to be structure-aware; and 2) the difficulty of keeping useful information in the condensed graph. To address these challenges, we propose a novel framework TinyGraph, to condense features and nodes simultaneously in graphs. Specifically, we cast the problem as matching the gradients of GNN weights trained on the condensed graph and the gradients obtained from training over the original graph, where the feature condensation is achieved by a trainable function. The condensed graph obtained by minimizing the matching loss along the training trajectory can henceforth retain critical information in the original graph. Extensive experiments were carried out to demonstrate the effectiveness of the proposed TinyGraph. For example, a GNN trained with TinyGraph retains 98.5% and 97.5% of the original test accuracy on the Cora and Citeseer datasets, respectively, while significantly reducing the number of nodes by 97.4% and 98.2%, and the number of features by 90.0% on both datasets.
[ "['Yezi Liu' 'Yanning Shen']" ]
null
null
2407.08065
null
null
http://arxiv.org/pdf/2407.08065v1
2024-07-10T21:55:44Z
2024-07-10T21:55:44Z
Towards Interpretable Foundation Models of Robot Behavior: A Task Specific Policy Generation Approach
Foundation models are a promising path toward general-purpose and user-friendly robots. The prevalent approach involves training a generalist policy that, like a reinforcement learning policy, uses observations to output actions. Although this approach has seen much success, several concerns arise when considering deployment and end-user interaction with these systems. In particular, the lack of modularity between tasks means that when model weights are updated (e.g., when a user provides feedback), the behavior in other, unrelated tasks may be affected. This can negatively impact the system's interpretability and usability. We present an alternative approach to the design of robot foundation models, Diffusion for Policy Parameters (DPP), which generates stand-alone, task-specific policies. Since these policies are detached from the foundation model, they are updated only when a user wants, either through feedback or personalization, allowing them to gain a high degree of familiarity with that policy. We demonstrate a proof-of-concept of DPP in simulation then discuss its limitations and the future of interpretable foundation models.
[ "['Isaac Sheidlower' 'Reuben Aronson' 'Elaine Schaertl Short']" ]
null
null
2407.08073
null
null
http://arxiv.org/pdf/2407.08073v1
2024-07-10T22:26:45Z
2024-07-10T22:26:45Z
NDST: Neural Driving Style Transfer for Human-Like Vision-Based Autonomous Driving
Autonomous Vehicles (AV) and Advanced Driver Assistant Systems (ADAS) prioritize safety over comfort. The intertwining factors of safety and comfort emerge as pivotal elements in ensuring the effectiveness of Autonomous Driving (AD). Users often experience discomfort when AV or ADAS drive the vehicle on their behalf. Providing a personalized human-like AD experience, tailored to match users' unique driving styles while adhering to safety prerequisites, presents a significant opportunity to boost the acceptance of AVs. This paper proposes a novel approach, Neural Driving Style Transfer (NDST), inspired by Neural Style Transfer (NST), to address this issue. NDST integrates a Personalized Block (PB) into the conventional Baseline Driving Model (BDM), allowing for the transfer of a user's unique driving style while adhering to safety parameters. The PB serves as a self-configuring system, learning and adapting to an individual's driving behavior without requiring modifications to the BDM. This approach enables the personalization of AV models, aligning the driving style more closely with user preferences while ensuring baseline safety critical actuation. Two contrasting driving styles (Style A and Style B) were used to validate the proposed NDST methodology, demonstrating its efficacy in transferring personal driving styles to the AV system. Our work highlights the potential of NDST to enhance user comfort in AVs by providing a personalized and familiar driving experience. The findings affirm the feasibility of integrating NDST into existing AV frameworks to bridge the gap between safety and individualized driving styles, promoting wider acceptance and improved user experiences.
[ "['Donghyun Kim' 'Aws Khalil' 'Haewoon Nam' 'Jaerock Kwon']" ]
null
null
2407.08074
null
null
http://arxiv.org/abs/2407.08074v1
2024-07-10T22:28:13Z
2024-07-10T22:28:13Z
Smooth Like Butter: Evaluating Multi-Lattice Transitions in Property-Augmented Latent Spaces
Additive manufacturing has revolutionized structural optimization by enhancing component strength and reducing material requirements. One approach used to achieve these improvements is the application of multi-lattice structures, where the macro-scale performance relies on the detailed design of mesostructural lattice elements. Many current approaches to designing such structures use data-driven design to generate multi-lattice transition regions, making use of machine learning models that are informed solely by the geometry of the mesostructures. However, it remains unclear if the integration of mechanical properties into the dataset used to train such machine learning models would be beneficial beyond using geometric data alone. To address this issue, this work implements and evaluates a hybrid geometry/property Variational Autoencoder (VAE) for generating multi-lattice transition regions. In our study, we found that hybrid VAEs demonstrate enhanced performance in maintaining stiffness continuity through transition regions, indicating their suitability for design tasks requiring smooth mechanical properties.
[ "['Martha Baldwin' 'Nicholas A. Meisel' 'Christopher McComb']" ]
null
null
2407.08086
null
null
http://arxiv.org/pdf/2407.08086v1
2024-07-10T23:09:23Z
2024-07-10T23:09:23Z
The GeometricKernels Package: Heat and Matérn Kernels for Geometric Learning on Manifolds, Meshes, and Graphs
Kernels are a fundamental technical primitive in machine learning. In recent years, kernel-based methods such as Gaussian processes are becoming increasingly important in applications where quantifying uncertainty is of key interest. In settings that involve structured data defined on graphs, meshes, manifolds, or other related spaces, defining kernels with good uncertainty-quantification behavior, and computing their value numerically, is less straightforward than in the Euclidean setting. To address this difficulty, we present GeometricKernels, a software package which implements the geometric analogs of the classical Euclidean squared exponential (also known as heat) and Matérn kernels, which are widely used in settings where uncertainty is of key interest. As a byproduct, we obtain the ability to compute Fourier-feature-type expansions, which are widely used in their own right, on a wide set of geometric spaces. Our implementation supports automatic differentiation in every major current framework simultaneously via a backend-agnostic design. In this companion paper to the package and its documentation, we outline the capabilities of the package and present an illustrated example of its interface. We also include a brief overview of the theory the package is built upon and provide some historic context in the appendix.
[ "['Peter Mostowsky' 'Vincent Dutordoir' 'Iskander Azangulov'\n 'Noémie Jaquier' 'Michael John Hutchinson' 'Aditya Ravuri' 'Leonel Rozo'\n 'Alexander Terenin' 'Viacheslav Borovitskiy']" ]

id: 2407.08094
link: http://arxiv.org/pdf/2407.08094v2
updated: 2024-07-14T14:38:16Z
published: 2024-07-10T23:45:20Z
Density Estimation via Binless Multidimensional Integration
We introduce the Binless Multidimensional Thermodynamic Integration (BMTI) method for nonparametric, robust, and data-efficient density estimation. BMTI estimates the logarithm of the density by initially computing log-density differences between neighbouring data points. Subsequently, such differences are integrated, weighted by their associated uncertainties, using a maximum-likelihood formulation. This procedure can be seen as an extension to a multidimensional setting of the thermodynamic integration, a technique developed in statistical physics. The method leverages the manifold hypothesis, estimating quantities within the intrinsic data manifold without defining an explicit coordinate map. It does not rely on any binning or space partitioning, but rather on the construction of a neighbourhood graph based on an adaptive bandwidth selection procedure. BMTI mitigates the limitations commonly associated with traditional nonparametric density estimators, effectively reconstructing smooth profiles even in high-dimensional embedding spaces. The method is tested on a variety of complex synthetic high-dimensional datasets, where it is shown to outperform traditional estimators, and is benchmarked on realistic datasets from the chemical physics literature.
[ "['Matteo Carli' 'Alex Rodriguez' 'Alessandro Laio' 'Aldo Glielmo']" ]
null
null
2407.08100
null
null
http://arxiv.org/pdf/2407.08100v1
2024-07-11T00:10:35Z
2024-07-11T00:10:35Z
Non-convergence of Adam and other adaptive stochastic gradient descent optimization methods for non-vanishing learning rates
Deep learning algorithms - typically consisting of a class of deep neural networks trained by a stochastic gradient descent (SGD) optimization method - are nowadays the key ingredients in many artificial intelligence (AI) systems and have revolutionized our ways of working and living in modern societies. For example, SGD methods are used to train powerful large language models (LLMs) such as versions of ChatGPT and Gemini, SGD methods are employed to create successful generative-AI-based text-to-image creation models such as Midjourney, DALL-E, and Stable Diffusion, and SGD methods are also used to train DNNs to approximately solve scientific models such as partial differential equation (PDE) models from physics and biology and optimal control and stopping problems from engineering. It is known that the plain vanilla standard SGD method fails to converge, even for several convex optimization problems, if the learning rates are bounded away from zero. However, in many practically relevant training scenarios, it is often not the plain vanilla standard SGD method but instead adaptive SGD methods, such as the RMSprop and Adam optimizers, in which the learning rates are modified adaptively during the training process, that are employed. This naturally raises the question of whether such adaptive optimizers converge in the situation of non-vanishing learning rates. In this work we answer this question negatively by proving that adaptive SGD methods such as the popular Adam optimizer fail to converge to any possible random limit point if the learning rates are asymptotically bounded away from zero. In our proof of this non-convergence result we establish suitable pathwise a priori bounds for a class of accelerated and adaptive SGD methods, which are also of independent interest.
[ "['Steffen Dereich' 'Robin Graeber' 'Arnulf Jentzen']" ]

id: 2407.08107
link: http://arxiv.org/pdf/2407.08107v1
updated: 2024-07-11T00:51:32Z
published: 2024-07-11T00:51:32Z
Advanced Meta-Ensemble Machine Learning Models for Early and Accurate Sepsis Prediction to Improve Patient Outcomes
Sepsis, a critical condition from the body's response to infection, poses a major global health crisis affecting all age groups. Timely detection and intervention are crucial for reducing healthcare expenses and improving patient outcomes. This paper examines the limitations of traditional sepsis screening tools like Systemic Inflammatory Response Syndrome, Modified Early Warning Score, and Quick Sequential Organ Failure Assessment, highlighting the need for advanced approaches. We propose using machine learning techniques - Random Forest, Extreme Gradient Boosting, and Decision Tree models - to predict sepsis onset. Our study evaluates these models individually and in a combined meta-ensemble approach using key metrics such as Accuracy, Precision, Recall, F1 score, and Area Under the Receiver Operating Characteristic Curve. Results show that the meta-ensemble model outperforms individual models, achieving an AUC-ROC score of 0.96, indicating superior predictive accuracy for early sepsis detection. The Random Forest model also performs well with an AUC-ROC score of 0.95, while Extreme Gradient Boosting and Decision Tree models score 0.94 and 0.90, respectively.
[ "['MohammadAmin Ansari Khoushabar' 'Parviz Ghafariasl']" ]

id: 2407.08108
link: http://arxiv.org/pdf/2407.08108v1
updated: 2024-07-11T00:54:56Z
published: 2024-07-11T00:54:56Z
CADC: Encoding User-Item Interactions for Compressing Recommendation Model Training Data
Deep learning recommendation models (DLRMs) are at the heart of the current e-commerce industry. However, the amount of training data used to train these large models is growing exponentially, leading to substantial training hurdles. The training dataset contains two primary types of information: content-based information (features of users and items) and collaborative information (interactions between users and items). One approach to reducing the training dataset is to remove user-item interactions, but that significantly diminishes collaborative information, which is crucial for maintaining accuracy due to its inclusion of interaction histories; this loss profoundly impacts DLRM performance. This paper makes the important observation that if one can capture the user-item interaction history to enrich the user and item embeddings, then the interaction history can be compressed without losing model accuracy. Thus, this work, Collaborative Aware Data Compression (CADC), takes a two-step approach to training dataset compression. In the first step, we use matrix factorization of the user-item interaction matrix to create a novel embedding representation for both the users and items. Once the user and item embeddings are enriched by the interaction history information, the approach applies uniform random sampling of the training dataset to drastically reduce the training dataset size while minimizing model accuracy drop. The source code of CADC is available at https://anonymous.4open.science/r/DSS-RM-8C1D/README.md.
[ "['Hossein Entezari Zarch' 'Abdulla Alshabanah' 'Chaoyi Jiang'\n 'Murali Annavaram']" ]
null
null
2407.08109
null
null
http://arxiv.org/pdf/2407.08109v1
2024-07-11T01:03:02Z
2024-07-11T01:03:02Z
Urban Waterlogging Detection: A Challenging Benchmark and Large-Small Model Co-Adapter
Urban waterlogging poses a major risk to public safety and infrastructure. Conventional methods using water-level sensors require high maintenance yet hardly achieve full coverage. Recent advances employ surveillance camera imagery and deep learning for detection, yet these struggle amid scarce data and adverse environmental conditions. In this paper, we establish a challenging Urban Waterlogging Benchmark (UW-Bench) under diverse adverse conditions to advance real-world applications. We propose a Large-Small Model co-adapter paradigm (LSM-adapter), which harnesses the substantial generic segmentation potential of a large model and the specific task-directed guidance of a small model. Specifically, a Triple-S Prompt Adapter module alongside a Dynamic Prompt Combiner is proposed to generate and then merge multiple prompts for mask decoder adaptation. Meanwhile, a Histogram Equalization Adapter module is designed to infuse image-specific information for image encoder adaptation. Results and analysis show the challenge and superiority of our developed benchmark and algorithm. Project page: https://github.com/zhang-chenxu/LSM-Adapter
[ "['Suqi Song' 'Chenxu Zhang' 'Peng Zhang' 'Pengkun Li' 'Fenglong Song'\n 'Lei Zhang']" ]
null
null
2407.08112
null
null
http://arxiv.org/pdf/2407.08112v1
2024-07-11T01:08:39Z
2024-07-11T01:08:39Z
How Well Can a Long Sequence Model Model Long Sequences? Comparing Architectural Inductive Biases on Long-Context Abilities
Long sequences occur in abundance within real-world scenarios, hence properly modelling them opens numerous downstream use-cases. Deep neural networks, however, have often struggled with these for a variety of reasons. Recent advances, both in system engineering and in model design, have enabled the scaling up of models that are purported to support extended context lengths. In particular, the state-space and linear recurrent neural network families of models can hypothetically extend to infinite sequence length. However, is this too good to be true? We conduct an evaluation to show that while such claims may be sound theoretically, there remain large practical gaps that are empirically observed. In particular, recurrent models still suffer in the same settings as long-context LLMs with attention. We further show that different inductive biases have inconsistent extrapolation capabilities, highlighting the need to further study such paradigms and investigate why long-context models seemingly fail to behave as one might expect.
[ "['Jerry Huang']" ]
null
null
2407.08125
null
null
http://arxiv.org/pdf/2407.08125v1
2024-07-11T01:56:31Z
2024-07-11T01:56:31Z
Real-Time Summarization of Twitter
In this paper, we describe our approaches to TREC Real-Time Summarization of Twitter. We focus on the real-time push notification scenario, which requires a system to monitor the stream of sampled tweets and return the tweets that are relevant and novel to given interest profiles. Dirichlet scores, with smoothing and with very little smoothing (the baseline), are employed to classify whether a tweet is relevant to a given interest profile. Using metrics including Mean Average Precision (MAP), cumulative gain (CG) and discounted cumulative gain (DCG), the experiments indicate that our approach performs well. It is also desirable to remove redundant tweets from the push queue. Due to the precision limit, we only describe that algorithm in this paper.
[ "['Yixin Jin' 'Meiqi Wang' 'Meng Li' 'Wenjing Zhou' 'Yi Shen' 'Hao Liu']" ]
null
null
2407.08134
null
null
http://arxiv.org/pdf/2407.08134v1
2024-07-11T02:15:21Z
2024-07-11T02:15:21Z
Highway Networks for Improved Surface Reconstruction: The Role of Residuals and Weight Updates
Surface reconstruction from point clouds is a fundamental challenge in computer graphics and medical imaging. In this paper, we explore the application of advanced neural network architectures for the accurate and efficient reconstruction of surfaces from data points. We introduce a novel variant of the Highway network (Hw) called Square-Highway (SqrHw) within the context of multilayer perceptrons and investigate its performance alongside plain neural networks and a simplified Hw in various numerical examples. These examples include the reconstruction of simple and complex surfaces, such as spheres, human hands, and intricate models like the Stanford Bunny. We analyze the impact of factors such as the number of hidden layers, interior and exterior points, and data distribution on surface reconstruction quality. Our results show that the proposed SqrHw architecture outperforms other neural network configurations, achieving faster convergence and higher-quality surface reconstructions. Additionally, we demonstrate the SqrHw's ability to predict surfaces over missing data, a valuable feature for challenging applications like medical imaging. Furthermore, our study delves into further details, demonstrating that the proposed method based on highway networks yields more stable weight norms and backpropagation gradients compared to the Plain Network architecture. This research not only advances the field of computer graphics but also holds utility for other purposes such as function interpolation and physics-informed neural networks, which integrate multilayer perceptrons into their algorithms.
[ "['A. Noorizadegan' 'Y. C. Hon' 'D. L. Young' 'C. S. Chen']" ]
null
null
2407.08152
null
null
http://arxiv.org/pdf/2407.08152v1
2024-07-11T03:10:27Z
2024-07-11T03:10:27Z
Privacy-Preserving Data Deduplication for Enhancing Federated Learning of Language Models
Deduplication is a vital preprocessing step that enhances machine learning model performance and saves training time and energy. However, enhancing federated learning through deduplication poses challenges, especially regarding scalability and potential privacy violations if deduplication involves sharing all clients' data. In this paper, we address the problem of deduplication in a federated setup by introducing a pioneering protocol, Efficient Privacy-Preserving Multi-Party Deduplication (EP-MPD). It efficiently removes duplicates from multiple clients' datasets without compromising data privacy. EP-MPD is constructed in a modular fashion, utilizing two novel variants of the Private Set Intersection protocol. Our extensive experiments demonstrate the significant benefits of deduplication in federated learning of large language models. For instance, we observe up to 19.61% improvement in perplexity and up to 27.95% reduction in running time. EP-MPD effectively balances privacy and performance in federated learning, making it a valuable solution for large-scale applications.
[ "['Aydin Abadi' 'Vishnu Asutosh Dasu' 'Sumanta Sarkar']" ]
null
null
2407.08159
null
null
http://arxiv.org/pdf/2407.08159v1
2024-07-11T03:25:40Z
2024-07-11T03:25:40Z
Model-agnostic clean-label backdoor mitigation in cybersecurity environments
The training phase of machine learning models is a delicate step, especially in cybersecurity contexts. Recent research has surfaced a series of insidious training-time attacks that inject backdoors in models designed for security classification tasks without altering the training labels. With this work, we propose new techniques that leverage insights in cybersecurity threat models to effectively mitigate these clean-label poisoning attacks, while preserving the model utility. By performing density-based clustering on a carefully chosen feature subspace, and progressively isolating the suspicious clusters through a novel iterative scoring procedure, our defensive mechanism can mitigate the attacks without requiring many of the common assumptions in the existing backdoor defense literature. To show the generality of our proposed mitigation, we evaluate it on two clean-label model-agnostic attacks on two different classic cybersecurity data modalities: network flows classification and malware classification, using gradient boosting and neural network models.
[ "['Giorgio Severi' 'Simona Boboila' 'John Holodnak' 'Kendra Kratkiewicz'\n 'Rauf Izmailov' 'Alina Oprea']" ]
null
null
2407.08166
null
null
http://arxiv.org/pdf/2407.08166v1
2024-07-11T04:11:52Z
2024-07-11T04:11:52Z
Synthetic Electroretinogram Signal Generation Using Conditional Generative Adversarial Network for Enhancing Classification of Autism Spectrum Disorder
The electroretinogram (ERG) is a clinical test that records the retina's electrical response to light. The ERG is a promising way to study different neurodevelopmental and neurodegenerative disorders, including autism spectrum disorder (ASD) - a neurodevelopmental condition that impacts language, communication, and reciprocal social interactions. However, in heterogeneous populations, such as ASD, where the ability to collect large datasets is limited, the application of artificial intelligence (AI) is complicated. Synthetic ERG signals generated from real ERG recordings carry similar information as natural ERGs and, therefore, could be used as an extension for natural data to increase datasets so that AI applications can be fully utilized. As proof of principle, this study presents a Generative Adversarial Network capable of generating synthetic ERG signals of children with ASD and typically developing control individuals. We applied a Time Series Transformer and Visual Transformer with Continuous Wavelet Transform to enhance classification results on the extended synthetic signals dataset. This approach may support classification models in related psychiatric conditions where the ERG may help classify disorders.
[ "['Mikhail Kulyabin' 'Paul A. Constable' 'Aleksei Zhdanov' 'Irene O. Lee'\n 'David H. Skuse' 'Dorothy A. Thompson' 'Andreas Maier']" ]
null
null
2407.08169
null
null
http://arxiv.org/pdf/2407.08169v1
2024-07-11T04:19:28Z
2024-07-11T04:19:28Z
Faster Machine Unlearning via Natural Gradient Descent
We address the challenge of efficiently and reliably deleting data from machine learning models trained using Empirical Risk Minimization (ERM), a process known as machine unlearning. To avoid retraining models from scratch, we propose a novel algorithm leveraging Natural Gradient Descent (NGD). Our theoretical framework ensures strong privacy guarantees for convex models, while a practical Min/Max optimization algorithm is developed for non-convex models. Comprehensive evaluations show significant improvements in privacy, computational efficiency, and generalization compared to state-of-the-art methods, advancing both the theoretical and practical aspects of machine unlearning.
[ "['Omri Lev' 'Ashia Wilson']" ]
null
null
2407.08176
null
null
http://arxiv.org/pdf/2407.08176v1
2024-07-11T04:40:02Z
2024-07-11T04:40:02Z
Foundation Model Engineering: Engineering Foundation Models Just as Engineering Software
By treating data and models as the source code, Foundation Models (FMs) become a new type of software. Mirroring the concept of the software crisis, the increasing complexity of FMs makes an FM crisis a tangible concern in the coming decade, calling for new theories and methodologies from the field of software engineering. In this paper, we outline our vision of introducing Foundation Model (FM) engineering, a strategic response to the anticipated FM crisis with principled engineering methodologies. FM engineering aims to mitigate potential issues in FM development and application through the introduction of declarative, automated, and unified programming interfaces for both data and model management, reducing the complexities involved in working with FMs by providing a more structured and intuitive process for developers. Through the establishment of FM engineering, we aim to provide a robust, automated, and extensible framework that addresses the imminent challenges and opens new research opportunities for the software engineering field.
[ "['Dezhi Ran' 'Mengzhou Wu' 'Wei Yang' 'Tao Xie']" ]
null
null
2407.08179
null
null
http://arxiv.org/pdf/2407.08179v1
2024-07-11T04:50:51Z
2024-07-11T04:50:51Z
CoGS: Causality Constrained Counterfactual Explanations using goal-directed ASP
Machine learning models are increasingly used in areas such as loan approvals and hiring, yet they often function as black boxes, obscuring their decision-making processes. Transparency is crucial, and individuals need explanations to understand decisions, especially for outcomes not desired by the user. Ethical and legal considerations require informing individuals of changes in input attribute values (features) that could lead to a desired outcome for the user. Our work aims to generate counterfactual explanations by considering causal dependencies between features. We present the CoGS (Counterfactual Generation with s(CASP)) framework that utilizes the goal-directed Answer Set Programming system s(CASP) to generate counterfactuals from rule-based machine learning models, specifically the FOLD-SE algorithm. CoGS computes realistic and causally consistent changes to attribute values, taking causal dependencies between them into account. It finds a path from an undesired outcome to a desired one using counterfactuals. We present details of the CoGS framework along with its evaluation.
[ "['Sopam Dasgupta' 'Joaquín Arias' 'Elmer Salazar' 'Gopal Gupta']" ]
null
null
2407.08188
null
null
http://arxiv.org/pdf/2407.08188v1
2024-07-11T05:13:27Z
2024-07-11T05:13:27Z
Position: Measure Dataset Diversity, Don't Just Claim It
Machine learning (ML) datasets, often perceived as neutral, inherently encapsulate abstract and disputed social constructs. Dataset curators frequently employ value-laden terms such as diversity, bias, and quality to characterize datasets. Despite their prevalence, these terms lack clear definitions and validation. Our research explores the implications of this issue by analyzing "diversity" across 135 image and text datasets. Drawing from social sciences, we apply principles from measurement theory to identify considerations and offer recommendations for conceptualizing, operationalizing, and evaluating diversity in datasets. Our findings have broader implications for ML research, advocating for a more nuanced and precise approach to handling value-laden properties in dataset construction.
[ "['Dora Zhao' 'Jerone T. A. Andrews' 'Orestis Papakyriakopoulos'\n 'Alice Xiang']" ]
null
null
2407.08192
null
null
http://arxiv.org/pdf/2407.08192v1
2024-07-11T05:22:04Z
2024-07-11T05:22:04Z
ARCO:Adaptive Multi-Agent Reinforcement Learning-Based Hardware/Software Co-Optimization Compiler for Improved Performance in DNN Accelerator Design
This paper presents ARCO, an adaptive Multi-Agent Reinforcement Learning (MARL)-based co-optimizing compilation framework designed to enhance the efficiency of mapping machine learning (ML) models - such as Deep Neural Networks (DNNs) - onto diverse hardware platforms. The framework incorporates three specialized actor-critic agents within MARL, each dedicated to a distinct aspect of compilation/optimization at an abstract level: one agent focuses on hardware, while two agents focus on software optimizations. This integration results in a collaborative hardware/software co-optimization strategy that improves the precision and speed of DNN deployments. Concentrating on high-confidence configurations simplifies the search space and delivers superior performance compared to current optimization methods. The ARCO framework surpasses existing leading frameworks, achieving a throughput increase of up to 37.95% while reducing the optimization time by up to 42.2% across various DNNs.
[ "['Arya Fayyazi' 'Mehdi Kamal' 'Massoud Pedram']" ]
null
null
2407.08205
null
null
http://arxiv.org/pdf/2407.08205v1
2024-07-11T06:12:04Z
2024-07-11T06:12:04Z
OPIMA: Optical Processing-In-Memory for Convolutional Neural Network Acceleration
Recent advances in machine learning (ML) have spotlighted the pressing need for computing architectures that bridge the gap between memory bandwidth and processing power. The advent of deep neural networks has pushed traditional Von Neumann architectures to their limits due to the high latency and energy consumption costs associated with data movement between the processor and memory for these workloads. One of the solutions to overcome this bottleneck is to perform computation within the main memory through processing-in-memory (PIM), thereby limiting data movement and the costs associated with it. However, DRAM-based PIM struggles to achieve high throughput and energy efficiency due to internal data movement bottlenecks and the need for frequent refresh operations. In this work, we introduce OPIMA, a PIM-based ML accelerator, architected within an optical main memory. OPIMA has been designed to leverage the inherent massive parallelism within main memory while performing high-speed, low-energy optical computation to accelerate ML models based on convolutional neural networks. We present a comprehensive analysis of OPIMA to guide design choices and operational mechanisms. Additionally, we evaluate the performance and energy consumption of OPIMA, comparing it with conventional electronic computing systems and emerging photonic PIM architectures. The experimental results show that OPIMA can achieve 2.98x higher throughput and 137x better energy efficiency than the best-known prior work.
[ "['Febin Sunny' 'Amin Shafiee' 'Abhishek Balasubramaniam' 'Mahdi Nikdast'\n 'Sudeep Pasricha']" ]
null
null
2407.08214
null
null
http://arxiv.org/pdf/2407.08214v1
2024-07-11T06:31:04Z
2024-07-11T06:31:04Z
Towards stable training of parallel continual learning
Parallel Continual Learning (PCL) tasks investigate the training methods for continual learning with multi-source input, where data from different tasks are learned as they arrive. PCL offers high training efficiency and is well-suited for complex multi-source data systems, such as autonomous vehicles equipped with multiple sensors. However, at any time, multiple tasks need to be trained simultaneously, leading to severe training instability in PCL. This instability manifests during both forward and backward propagation, where features are entangled and gradients conflict. This paper introduces Stable Parallel Continual Learning (SPCL), a novel approach that enhances the training stability of PCL for both forward and backward propagation. For the forward propagation, we apply Doubly-block Toeplitz (DBT) matrix based orthogonality constraints to network parameters to ensure stable and consistent propagation. For the backward propagation, we employ orthogonal decomposition for gradient management, which stabilizes backpropagation and mitigates gradient conflicts across tasks. By optimizing gradients to ensure orthogonality and minimize the condition number, SPCL effectively stabilizes gradient descent in complex optimization tasks. Experimental results demonstrate that SPCL outperforms state-of-the-art methods and achieves better training stability.
[ "['Li Yuepan' 'Fan Lyu' 'Yuyang Li' 'Wei Feng' 'Guangcan Liu'\n 'Fanhua Shang']" ]
null
null
2407.08215
null
null
http://arxiv.org/pdf/2407.08215v1
2024-07-11T06:33:11Z
2024-07-11T06:33:11Z
Enhancing Performance and User Engagement in Everyday Stress Monitoring: A Context-Aware Active Reinforcement Learning Approach
In today's fast-paced world, accurately monitoring stress levels is crucial. Sensor-based stress monitoring systems often need large datasets for training effective models. However, individual-specific models are necessary for personalized and interactive scenarios. Traditional methods like Ecological Momentary Assessments (EMAs) assess stress but struggle with efficient data collection without burdening users. The challenge is to timely send EMAs, especially during stress, balancing monitoring efficiency and user convenience. This paper introduces a novel context-aware active reinforcement learning (RL) algorithm for enhanced stress detection using Photoplethysmography (PPG) data from smartwatches and contextual data from smartphones. Our approach dynamically selects optimal times for deploying EMAs, utilizing the user's immediate context to maximize label accuracy and minimize intrusiveness. Initially, the study was executed in an offline environment to refine the label collection process, aiming to increase accuracy while reducing user burden. Later, we integrated a real-time label collection mechanism, transitioning to an online methodology. This shift resulted in an 11% improvement in stress detection efficiency. Incorporating contextual data improved model accuracy by 4%. Personalization studies indicated a 10% enhancement in AUC-ROC scores, demonstrating better stress level differentiation. This research marks a significant move towards personalized, context-driven real-time stress monitoring methods.
[ "['Seyed Amir Hossein Aqajari' 'Ziyu Wang' 'Ali Tazarv' 'Sina Labbaf'\n 'Salar Jafarlou' 'Brenda Nguyen' 'Nikil Dutt' 'Marco Levorato'\n 'Amir M. Rahmani']" ]
null
null
2407.08227
null
null
http://arxiv.org/pdf/2407.08227v1
2024-07-11T07:01:50Z
2024-07-11T07:01:50Z
DALL-M: Context-Aware Clinical Data Augmentation with LLMs
X-ray images are vital in medical diagnostics, but their effectiveness is limited without clinical context. Radiologists often find chest X-rays insufficient for diagnosing underlying diseases, necessitating comprehensive clinical features and data integration. We present a novel technique to enhance the clinical context through augmentation techniques with clinical tabular data, thereby improving its applicability and reliability in AI medical diagnostics. To address this, we introduce a pioneering approach to clinical data augmentation that employs large language models (LLMs) to generate patient contextual synthetic data. This methodology is crucial for training more robust deep learning models in healthcare. It preserves the integrity of real patient data while enriching the dataset with contextually relevant synthetic features, significantly enhancing model performance. DALL-M uses a three-phase feature generation process: (i) clinical context storage, (ii) expert query generation, and (iii) context-aware feature augmentation. DALL-M generates new, clinically relevant features by synthesizing chest X-ray images and reports. Applied to 799 cases using nine features from the MIMIC-IV dataset, it created an augmented set of 91 features. This is the first work to generate contextual values for existing and new features based on patients' X-ray reports, gender, and age and to produce new contextual knowledge during data augmentation. Empirical validation with machine learning models, including Decision Trees, Random Forests, XGBoost, and TabNET, showed significant performance improvements. Incorporating augmented features increased the F1 score by 16.5% and Precision and Recall by approximately 25%. DALL-M addresses a critical gap in clinical data augmentation, offering a robust framework for generating contextually enriched datasets.
[ "['Chihcheng Hsieh' 'Catarina Moreira' 'Isabel Blanco Nobre'\n 'Sandra Costa Sousa' 'Chun Ouyang' 'Margot Brereton' 'Joaquim Jorge'\n 'Jacinto C. Nascimento']" ]
null
null
2407.08232
null
null
http://arxiv.org/pdf/2407.08232v1
2024-07-11T07:14:34Z
2024-07-11T07:14:34Z
SwishReLU: A Unified Approach to Activation Functions for Enhanced Deep Neural Networks Performance
ReLU, a commonly used activation function in deep neural networks, is prone to the issue of "Dying ReLU". Several enhanced versions, such as ELU, SeLU, and Swish, have been introduced but remain less commonly utilized. Replacing ReLU can be somewhat challenging because the advantages of these alternatives are inconsistent. While Swish offers a smoother transition and behaves similarly to ReLU, its use generally incurs a greater computational burden than ReLU. This paper proposes SwishReLU, a novel activation function combining elements of ReLU and Swish. Our findings reveal that SwishReLU outperforms ReLU in performance with a lower computational cost than Swish. This paper undertakes an examination and comparison of different types of ReLU variants with SwishReLU. Specifically, we compare ELU and SeLU along with Tanh on three datasets: CIFAR-10, CIFAR-100 and MNIST. Notably, applying SwishReLU in the VGG16 model described in Algorithm 2 yields a 6% accuracy improvement on the CIFAR-10 dataset.
[ "['Jamshaid Ul Rahman' 'Rubiqa Zulfiqar' 'Asad Khan' 'Nimra']" ]
null
null
2407.08233
null
null
http://arxiv.org/pdf/2407.08233v1
2024-07-11T07:14:40Z
2024-07-11T07:14:40Z
Differentially Private Neural Network Training under Hidden State Assumption
We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of differential privacy under the hidden state assumption. Our methodology incorporates Lipschitz neural networks and decomposes the training process of the neural network into sub-problems, each corresponding to the training of a specific layer. By doing so, we extend the analysis of differential privacy under the hidden state assumption to encompass non-convex problems and algorithms employing proximal gradient descent. Furthermore, in contrast to existing methods, we adopt a novel approach by utilizing calibrated noise sampled from adaptive distributions, yielding improved empirical trade-offs between utility and privacy.
[ "['Ding Chen' 'Chen Liu']" ]
null
null
2407.08239
null
null
http://arxiv.org/pdf/2407.08239v1
2024-07-11T07:32:16Z
2024-07-11T07:32:16Z
An Unsupervised Domain Adaptation Method for Locating Manipulated Region in partially fake Audio
When the task of locating manipulation regions in partially-fake audio (PFA) involves cross-domain datasets, the performance of deep learning models drops significantly due to the shift between the source and target domains. To address this issue, existing approaches often employ data augmentation before training. However, they overlook characteristics of the target domain that are absent in the source domain. Inspired by the mixture-of-experts model, we propose an unsupervised method named Samples mining with Diversity and Entropy (SDE). Our method first learns from a collection of diverse experts that achieve great performance from different perspectives in the source domain, but with ambiguity on target samples. We leverage these diverse experts to select the most informative samples by calculating their entropy. Furthermore, we introduce a label generation method tailored for these selected samples, which are incorporated into the source-domain training process so that it integrates target-domain information. We applied our method to a cross-domain partially fake audio detection dataset, ADD2023Track2. By introducing 10% of unknown samples from the target domain, we achieved an F1 score of 43.84%, which represents a relative increase of 77.2% compared to the second-best method.
[ "['Siding Zeng' 'Jiangyan Yi' 'Jianhua Tao' 'Yujie Chen' 'Shan Liang'\n 'Yong Ren' 'Xiaohui Zhang']" ]
null
null
2407.08245
null
null
http://arxiv.org/pdf/2407.08245v1
2024-07-11T07:45:10Z
2024-07-11T07:45:10Z
Feature Diversification and Adaptation for Federated Domain Generalization
Federated learning, a distributed learning paradigm, utilizes multiple clients to build a robust global model. In real-world applications, local clients often operate within their limited domains, leading to a `domain shift' across clients. Privacy concerns limit each client's learning to its own domain data, which increases the risk of overfitting. Moreover, aggregating models trained on such limited domains can potentially lead to a significant degradation in global model performance. To deal with these challenges, we introduce the concept of federated feature diversification. Each client diversifies its own limited domain data by leveraging global feature statistics, i.e., the aggregated average statistics over all participating clients, shared through the global model's parameters. This data diversification helps local models learn client-invariant representations while preserving privacy. The resultant global model shows robust performance on unseen test domain data. To enhance performance further, we develop an instance-adaptive inference approach tailored for test domain data. Our proposed instance feature adapter dynamically adjusts feature statistics to align with the test input, thereby reducing the domain gap between the test and training domains. We show that our method achieves state-of-the-art performance on several domain generalization benchmarks within a federated learning setting.
[ "['Seunghan Yang' 'Seokeon Choi' 'Hyunsin Park' 'Sungha Choi'\n 'Simyung Chang' 'Sungrack Yun']" ]
null
null
2407.08250
null
null
http://arxiv.org/pdf/2407.08250v1
2024-07-11T07:52:33Z
2024-07-11T07:52:33Z
Gradient Boosting Reinforcement Learning
Neural networks (NN) achieve remarkable results in various tasks, but lack key characteristics: interpretability, support for categorical features, and lightweight implementations suitable for edge devices. While ongoing efforts aim to address these challenges, Gradient Boosting Trees (GBT) inherently meet these requirements. As a result, GBTs have become the go-to method for supervised learning tasks in many real-world applications and competitions. However, their application in online learning scenarios, notably in reinforcement learning (RL), has been limited. In this work, we bridge this gap by introducing Gradient-Boosting RL (GBRL), a framework that extends the advantages of GBT to the RL domain. Using the GBRL framework, we implement various actor-critic algorithms and compare their performance with their NN counterparts. Inspired by shared backbones in NN we introduce a tree-sharing approach for policy and value functions with distinct learning rates, enhancing learning efficiency over millions of interactions. GBRL achieves competitive performance across a diverse array of tasks, excelling in domains with structured or categorical features. Additionally, we present a high-performance, GPU-accelerated implementation that integrates seamlessly with widely-used RL libraries (available at https://github.com/NVlabs/gbrl). GBRL expands the toolkit for RL practitioners, demonstrating the viability and promise of GBT within the RL paradigm, particularly in domains characterized by structured or categorical features.
[ "['Benjamin Fuhrer' 'Chen Tessler' 'Gal Dalal']" ]
null
null
2407.08255
null
null
http://arxiv.org/pdf/2407.08255v1
2024-07-11T07:56:08Z
2024-07-11T07:56:08Z
GraphMamba: An Efficient Graph Structure Learning Vision Mamba for Hyperspectral Image Classification
Efficient extraction of spectral sequences and geospatial information has always been a hot topic in hyperspectral image classification. In terms of spectral sequence feature capture, RNN and Transformer have become mainstream classification frameworks due to their long-range feature capture capabilities. In terms of spatial information aggregation, CNN enhances the receptive field to retain integrated spatial information as much as possible. However, the spectral feature-capturing architectures exhibit low computational efficiency, and CNNs lack the flexibility to perceive spatial contextual information. To address these issues, this paper proposes GraphMamba--an efficient graph structure learning vision Mamba classification framework that fully considers HSI characteristics to achieve deep spatial-spectral information mining. Specifically, we propose a novel hyperspectral visual GraphMamba processing paradigm (HVGM) that preserves spatial-spectral features by constructing spatial-spectral cubes and utilizes linear spectral encoding to enhance the operability of subsequent tasks. The core components of GraphMamba include the HyperMamba module for improving computational efficiency and the SpatialGCN module for adaptive spatial context awareness. The HyperMamba mitigates clutter interference by employing the global mask (GM) and introduces a parallel training inference architecture to alleviate computational bottlenecks. The SpatialGCN incorporates weighted multi-hop aggregation (WMA) spatial encoding to focus on highly correlated spatial structural features, thus flexibly aggregating contextual information while mitigating spatial noise interference. Extensive experiments were conducted on three different scales of real HSI datasets, and compared with the state-of-the-art classification frameworks, GraphMamba achieved optimal performance.
[ "['Aitao Yang' 'Min Li' 'Yao Ding' 'Leyuan Fang' 'Yaoming Cai' 'Yujie He']" ]
null
null
2407.08256
null
null
http://arxiv.org/pdf/2407.08256v1
2024-07-11T07:56:17Z
2024-07-11T07:56:17Z
Adaptive Compressed Sensing with Diffusion-Based Posterior Sampling
Compressed Sensing (CS) facilitates rapid image acquisition by selecting a small subset of measurements sufficient for high-fidelity reconstruction. Adaptive CS seeks to further enhance this process by dynamically choosing future measurements based on information gleaned from data that is already acquired. However, many existing frameworks are often tailored to specific tasks and require intricate training procedures. We propose AdaSense, a novel Adaptive CS approach that leverages zero-shot posterior sampling with pre-trained diffusion models. By sequentially sampling from the posterior distribution, we can quantify the uncertainty of each possible future linear measurement throughout the acquisition process. AdaSense eliminates the need for additional training and boasts seamless adaptation to diverse domains with minimal tuning requirements. Our experiments demonstrate the effectiveness of AdaSense in reconstructing facial images from a small number of measurements. Furthermore, we apply AdaSense for active acquisition of medical images in the domains of magnetic resonance imaging (MRI) and computed tomography (CT), highlighting its potential for tangible real-world acceleration.
[ "['Noam Elata' 'Tomer Michaeli' 'Michael Elad']" ]
null
null
2407.08257
null
null
http://arxiv.org/pdf/2407.08257v1
2024-07-11T07:57:33Z
2024-07-11T07:57:33Z
Knowledge distillation to effectively attain both region-of-interest and global semantics from an image where multiple objects appear
Models based on convolutional neural networks (CNN) and transformers have steadily been improved. They also have been applied in various computer vision downstream tasks. However, in object detection tasks, accurately localizing and classifying almost infinite categories of foods in images remains challenging. To address these problems, we first segmented the food as the region-of-interest (ROI) by using the segment-anything model (SAM) and masked the rest of the region except ROI as black pixels. This process simplified the problems into a single classification for which annotation and training were much simpler than object detection. The images in which only the ROI was preserved were fed as inputs to fine-tune various off-the-shelf models that encoded their own inductive biases. Among them, Data-efficient image Transformers (DeiTs) had the best classification performance. Nonetheless, when foods' shapes and textures were similar, the contextual features of the ROI-only images were not enough for accurate classification. Therefore, we introduced a novel type of combined architecture, RveRNet, which consisted of ROI, extra-ROI, and integration modules that allowed it to account for both the ROI's and global contexts. The RveRNet's F1 score was 10% better than those of other individual models when classifying ambiguous food images. The RveRNet performed best when its modules were DeiTs with knowledge distillation from the CNN. We investigated how architectures can be made robust against input noise caused by permutation and translocation. The results indicated that there was a trade-off between how much of the CNN teacher's knowledge could be distilled to the DeiT and the DeiT's innate strength. Code is publicly available at: https://github.com/Seonwhee-Genome/RveRNet.
[ "['Seonwhee Jin']" ]
null
null
2407.08270
null
null
http://arxiv.org/pdf/2407.08270v1
2024-07-11T08:12:46Z
2024-07-11T08:12:46Z
SciQu: Accelerating Materials Properties Prediction with Automated Literature Mining for Self-Driving Laboratories
Assessing different material properties to predict specific attributes, such as band gap, resistivity, Young's modulus, work function, and refractive index, is a fundamental requirement for materials science-based applications. However, the process is time-consuming and often requires extensive literature reviews and numerous experiments. Our study addresses these challenges by leveraging machine learning to analyze material properties with greater precision and efficiency. By automating the data extraction process and using the extracted information to train machine learning models, our developed model, SciQu, optimizes material properties. As a proof of concept, we predicted the refractive index of materials using data extracted from numerous research articles with SciQu, considering input descriptors such as space group, volume, and band gap, achieving a Root Mean Square Error (RMSE) of 0.068 and an R2 of 0.94. Thus, SciQu not only predicts the properties of materials but also plays a key role in self-driving laboratories by optimizing the synthesis parameters to achieve the precise shape, size, and phase of the materials subjected to the input parameters.
[ "['Anand Babu']" ]
null
null
2407.08271
null
null
http://arxiv.org/pdf/2407.08271v1
2024-07-11T08:15:57Z
2024-07-11T08:15:57Z
Gaussian process interpolation with conformal prediction: methods and comparative analysis
This article advocates the use of conformal prediction (CP) methods for Gaussian process (GP) interpolation to enhance the calibration of prediction intervals. We begin by illustrating that using a GP model with parameters selected by maximum likelihood often results in predictions that are not optimally calibrated. CP methods can adjust the prediction intervals, leading to better uncertainty quantification while maintaining the accuracy of the underlying GP model. We compare different CP variants and introduce a novel variant based on an asymmetric score. Our numerical experiments demonstrate the effectiveness of CP methods in improving calibration without compromising accuracy. This work aims to facilitate the adoption of CP methods in the GP community.
[ "['Aurélien Pion' 'Emmanuel Vazquez']" ]
null
null
2407.08274
null
null
http://arxiv.org/pdf/2407.08274v1
2024-07-11T08:23:46Z
2024-07-11T08:23:46Z
Explainability of Sub-Field Level Crop Yield Prediction using Remote Sensing
Crop yield forecasting plays a significant role in addressing growing concerns about food security and guiding decision-making for policymakers and farmers. When deep learning is employed, understanding the learning and decision-making processes of the models, as well as their interaction with the input data, is crucial for establishing trust in the models and gaining insight into their reliability. In this study, we focus on the task of crop yield prediction, specifically for soybean, wheat, and rapeseed crops in Argentina, Uruguay, and Germany. Our goal is to develop and explain predictive models for these crops, using a large dataset of satellite images, additional data modalities, and crop yield maps. We employ a long short-term memory network and investigate the impact of using different temporal samplings of the satellite data and the benefit of adding more relevant modalities. For model explainability, we utilize feature attribution methods to quantify input feature contributions, identify critical growth stages, analyze yield variability at the field level, and explain less accurate predictions. The modeling results show an improvement when adding more modalities or using all available instances of satellite data. The explainability results reveal distinct feature importance patterns for each crop and region. We further found that the most influential growth stages on the prediction are dependent on the temporal sampling of the input data. We demonstrated how these critical growth stages, which hold significant agronomic value, closely align with the existing literature in agronomy and crop development biology.
[ "['Hiba Najjar' 'Miro Miranda' 'Marlon Nuske' 'Ribana Roscher'\n 'Andreas Dengel']" ]
null
null
2407.08282
null
null
http://arxiv.org/pdf/2407.08282v1
2024-07-11T08:30:04Z
2024-07-11T08:30:04Z
AoA-Based Physical Layer Authentication in Analog Arrays under Impersonation Attacks
We discuss the use of angle of arrival (AoA) as an authentication measure in analog array multiple-input multiple-output (MIMO) systems. A base station equipped with an analog array authenticates users based on the AoA estimated from certified pilot transmissions, while active attackers manipulate their transmitted signals to mount impersonation attacks. We study several attacks of increasing intensity (captured through the availability of side information at the attackers) and assess the performance of AoA-based authentication using one-class classifiers. Our results show that some attack techniques with knowledge of the combiners at the verifier are effective in falsifying the AoA and compromising the security of the considered type of physical layer authentication.
[ "['Muralikrishnan Srinivasan' 'Linda Senigagliesi' 'Hui Chen'\n 'Arsenia Chorti' 'Marco Baldi' 'Henk Wymeersch']" ]
null
null
2407.08289
null
null
http://arxiv.org/pdf/2407.08289v1
2024-07-11T08:33:42Z
2024-07-11T08:33:42Z
Predicting Heart Failure with Attention Learning Techniques Utilizing Cardiovascular Data
Cardiovascular diseases (CVDs) encompass a group of disorders affecting the heart and blood vessels, including conditions such as coronary artery disease, heart failure, stroke, and hypertension. Among cardiovascular diseases, heart failure is one of the main causes of death and also of long-term suffering in patients worldwide. Prediction of heart failure risk is highly valuable for treatment and intervention to minimize heart failure. In this work, an attention learning-based heart failure prediction approach is proposed on EHR (electronic health record) cardiovascular data such as ejection fraction and serum creatinine. Moreover, different optimizers with various learning rate approaches are applied to fine-tune the proposed approach. Serum creatinine and ejection fraction are the two most important features for predicting a patient's heart failure. The computational results show that the RMSProp optimizer with a 0.001 learning rate yields better predictions based on serum creatinine. On the other hand, the combination of the SGD optimizer with a 0.01 learning rate exhibits optimum performance based on the ejection fraction feature. Overall, the proposed attention learning-based approach predicts heart failure very efficiently compared to existing state-of-the-art approaches such as LSTM.
[ "['Ershadul Haque' 'Manoranjan Paul' 'Faranak Tohidi']" ]
null
null
2407.08296
null
null
http://arxiv.org/pdf/2407.08296v1
2024-07-11T08:42:58Z
2024-07-11T08:42:58Z
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients
Training Large Language Models (LLMs) is memory-intensive due to the large number of parameters and associated optimization states. GaLore, a recent method, reduces memory usage by projecting weight gradients into a low-rank subspace without compromising performance. However, GaLore relies on time-consuming Singular Value Decomposition (SVD) operations to identify the subspace, and the frequent subspace updates lead to significant training time overhead. Moreover, GaLore offers minimal improvements in accuracy and efficiency compared to LoRA in more accessible fine-tuning scenarios. To address these limitations, we introduce Q-GaLore, a novel approach that substantially reduces memory usage by combining quantization and low-rank projection, surpassing the benefits of GaLore. Our method is based on two key observations: (i) the gradient subspace exhibits diverse properties, with some layers converging early in training while others are subject to frequent changes; (ii) the projection matrices are highly resilient to low-bit quantization. Leveraging these insights, Q-GaLore adaptively updates the gradient subspace based on its convergence statistics, achieving comparable performance while significantly reducing the number of SVD operations. We maintain the projection matrices in INT4 format and weights in INT8 format, incorporating stochastic rounding to capture accumulated gradient information. This approach enables a high-precision training trajectory using only low-precision weights. We demonstrate that Q-GaLore achieves highly competitive performance with exceptional memory efficiency. At pre-training, Q-GaLore facilitates training a LLaMA-7B model from scratch on a single NVIDIA RTX 4060 Ti with only 16 GB memory. At fine-tuning, it reduces memory consumption by up to 50% compared to LoRA and GaLore, while consistently outperforming QLoRA at the same memory cost.
[ "['Zhenyu Zhang' 'Ajay Jaiswal' 'Lu Yin' 'Shiwei Liu' 'Jiawei Zhao'\n 'Yuandong Tian' 'Zhangyang Wang']" ]
null
null
2407.08298
null
null
http://arxiv.org/pdf/2407.08298v1
2024-07-11T08:44:43Z
2024-07-11T08:44:43Z
XAI-Guided Enhancement of Vegetation Indices for Crop Mapping
Vegetation indices allow to efficiently monitor vegetation growth and agricultural activities. Previous generations of satellites were capturing a limited number of spectral bands, and a few expert-designed vegetation indices were sufficient to harness their potential. New generations of multi- and hyperspectral satellites can however capture additional bands, but are not yet efficiently exploited. In this work, we propose an explainable-AI-based method to select and design suitable vegetation indices. We first train a deep neural network using multispectral satellite data, then extract feature importance to identify the most influential bands. We subsequently select suitable existing vegetation indices or modify them to incorporate the identified bands and retrain our model. We validate our approach on a crop classification task. Our results indicate that models trained on individual indices achieve comparable results to the baseline model trained on all bands, while the combination of two indices surpasses the baseline in certain cases.
[ "['Hiba Najjar' 'Francisco Mena' 'Marlon Nuske' 'Andreas Dengel']" ]
null
null
2407.08313
null
null
http://arxiv.org/pdf/2407.08313v1
2024-07-11T09:04:12Z
2024-07-11T09:04:12Z
Improving Molecular Modeling with Geometric GNNs: an Empirical Study
Rapid advancements in machine learning (ML) are transforming materials science by significantly speeding up material property calculations. However, the proliferation of ML approaches has made it challenging for scientists to keep up with the most promising techniques. This paper presents an empirical study on Geometric Graph Neural Networks for 3D atomic systems, focusing on the impact of different (1) canonicalization methods, (2) graph creation strategies, and (3) auxiliary tasks, on performance, scalability and symmetry enforcement. Our findings and insights aim to guide researchers in selecting optimal modeling components for molecular modeling tasks.
[ "['Ali Ramlaoui' 'Théo Saulus' 'Basile Terver' 'Victor Schmidt'\n 'David Rolnick' 'Fragkiskos D. Malliaros' 'Alexandre Duval']" ]
null
null
2407.08316
null
null
http://arxiv.org/pdf/2407.08316v1
2024-07-11T09:07:22Z
2024-07-11T09:07:22Z
Enhancing ADHD Diagnosis with EEG: The Critical Role of Preprocessing and Key Features
Background: Attention-Deficit/Hyperactivity Disorder (ADHD) is a prevalent neurodevelopmental disorder that significantly impacts various key aspects of life, requiring accurate diagnostic methods. Electroencephalogram (EEG) signals are used in diagnosing ADHD, but proper preprocessing is crucial to avoid noise and artifacts that could lead to unreliable results. Method: This study utilized a public EEG dataset from children diagnosed with ADHD and typically developing (TD) children. Four preprocessing techniques were applied: no preprocessing (Raw), Finite Impulse Response (FIR) filtering, Artifact Subspace Reconstruction (ASR), and Independent Component Analysis (ICA). EEG recordings were segmented, and features were extracted and selected based on statistical significance. Classification was performed using Machine Learning models, as XGBoost, Support Vector Machine, and K-Nearest Neighbors. Results: The absence of preprocessing leads to artificially high classification accuracy due to noise. In contrast, ASR and ICA preprocessing techniques significantly improved the reliability of results. Segmenting EEG recordings revealed that later segments provided better classification accuracy, likely due to the manifestation of ADHD symptoms over time. The most relevant EEG channels were P3, P4, and C3. The top features for classification included Kurtosis, Katz fractal dimension, and power spectral density of Delta, Theta, and Alpha bands. Conclusions: Effective preprocessing is essential in EEG-based ADHD diagnosis to prevent noise-induced biases. This study identifies crucial EEG channels and features, providing a foundation for further research and improving ADHD diagnostic accuracy. Future work should focus on expanding datasets, refining preprocessing methods, and enhancing feature interpretability to improve diagnostic accuracy and model robustness for clinical use.
[ "['Sandra García-Ponsoda' 'Alejandro Maté' 'Juan Trujillo']" ]
null
null
2407.08324
null
null
http://arxiv.org/pdf/2407.08324v1
2024-07-11T09:13:11Z
2024-07-11T09:13:11Z
A Cantor-Kantorovich Metric Between Markov Decision Processes with Application to Transfer Learning
We extend the notion of Cantor-Kantorovich distance between Markov chains introduced by (Banse et al., 2023) to the context of Markov Decision Processes (MDPs). The proposed metric is well-defined and can be efficiently approximated given a finite horizon. Then, we provide numerical evidence that the latter metric can lead to interesting applications in the field of reinforcement learning. In particular, we show that it could be used for forecasting the performance of transfer learning algorithms.
[ "['Adrien Banse' 'Venkatraman Renganathan' 'Raphaël M. Jungers']" ]
null
null
2407.08328
null
null
http://arxiv.org/pdf/2407.08328v1
2024-07-11T09:26:05Z
2024-07-11T09:26:05Z
Unveiling Disparities in Maternity Care: A Topic Modelling Approach to Analysing Maternity Incident Investigation Reports
This study applies Natural Language Processing techniques, including Latent Dirichlet Allocation, to analyse anonymised maternity incident investigation reports from the Healthcare Safety Investigation Branch. The reports underwent preprocessing, annotation using the Safety Intelligence Research taxonomy, and topic modelling to uncover prevalent topics and detect differences in maternity care across ethnic groups. A combination of offline and online methods was utilised to ensure data protection whilst enabling advanced analysis, with offline processing for sensitive data and online processing for non-sensitive data using the `Claude 3 Opus' language model. Interactive topic analysis and semantic network visualisation were employed to extract and display thematic topics and visualise semantic relationships among keywords. The analysis revealed disparities in care among different ethnic groups, with distinct focus areas for the Black, Asian, and White British ethnic groups. The study demonstrates the effectiveness of topic modelling and NLP techniques in analysing maternity incident investigation reports and highlighting disparities in care. The findings emphasise the crucial role of advanced data analysis in improving maternity care quality and equity.
[ "['Georgina Cosma' 'Mohit Kumar Singh' 'Patrick Waterson'\n 'Gyuchan Thomas Jun' 'Jonathan Back']" ]
null
null
2407.08330
null
null
http://arxiv.org/pdf/2407.08330v1
2024-07-11T09:28:04Z
2024-07-11T09:28:04Z
HDT: Hierarchical Document Transformer
In this paper, we propose the Hierarchical Document Transformer (HDT), a novel sparse Transformer architecture tailored for structured hierarchical documents. Such documents are extremely important in numerous domains, including science, law or medicine. However, most existing solutions are inefficient and fail to make use of the structure inherent to documents. HDT exploits document structure by introducing auxiliary anchor tokens and redesigning the attention mechanism into a sparse multi-level hierarchy. This approach facilitates information exchange between tokens at different levels while maintaining sparsity, thereby enhancing computational and memory efficiency while exploiting the document structure as an inductive bias. We address the technical challenge of implementing HDT's sample-dependent hierarchical attention pattern by developing a novel sparse attention kernel that considers the hierarchical structure of documents. As demonstrated by our experiments, utilizing structural information present in documents leads to faster convergence, higher sample efficiency and better performance on downstream tasks.
[ "['Haoyu He' 'Markus Flicke' 'Jan Buchmann' 'Iryna Gurevych'\n 'Andreas Geiger']" ]
null
null
2407.08337
null
null
http://arxiv.org/pdf/2407.08337v1
2024-07-11T09:40:29Z
2024-07-11T09:40:29Z
FedLog: Personalized Federated Classification with Less Communication and More Flexibility
In federated learning (FL), the common paradigm that FedAvg proposes and most algorithms follow is that clients train local models with their private data, and the model parameters are shared for central aggregation, mostly averaging. In this paradigm, the communication cost is often a challenge, as modern massive neural networks can contain millions to billions of parameters. We suggest that clients share not model parameters but local data summaries, to decrease the cost of sharing. We develop a new algorithm, FedLog, with Bayesian inference, which shares only sufficient statistics of local data. FedLog transmits messages as small as the last layer of the original model. We conducted comprehensive experiments to show that we outperform other FL algorithms that aim at decreasing the communication cost. To provide formal privacy guarantees, we further extend FedLog with differential privacy and show the trade-off between privacy budget and accuracy.
[ "['Haolin Yu' 'Guojun Zhang' 'Pascal Poupart']" ]
null
null
2407.08340
null
null
http://arxiv.org/pdf/2407.08340v1
2024-07-11T09:43:57Z
2024-07-11T09:43:57Z
SLRL: Structured Latent Representation Learning for Multi-view Clustering
In recent years, Multi-View Clustering (MVC) has attracted increasing attention for its potential to reduce the annotation burden associated with large datasets. The aim of MVC is to exploit the inherent consistency and complementarity among different views, thereby integrating information from multiple perspectives to improve clustering outcomes. Despite extensive research in MVC, most existing methods focus predominantly on harnessing complementary information across views to enhance clustering effectiveness, often neglecting the structural information among samples, which is crucial for exploring sample correlations. To address this gap, we introduce a novel framework, termed Structured Latent Representation Learning based Multi-View Clustering method (SLRL). SLRL leverages both the complementary and structural information. Initially, it learns a common latent representation for all views. Subsequently, to exploit the structural information among samples, a k-nearest neighbor graph is constructed from this common latent representation. This graph facilitates enhanced sample interaction through graph learning techniques, leading to a structured latent representation optimized for clustering. Extensive experiments demonstrate that SLRL not only competes well with existing methods but also sets new benchmarks in various multi-view datasets.
[ "['Zhangci Xiong' 'Meng Cao']" ]
null
null
2407.08348
null
null
http://arxiv.org/pdf/2407.08348v1
2024-07-11T09:56:51Z
2024-07-11T09:56:51Z
Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models -- The Story Goes On
In this paper, we investigate the underlying factors that potentially enhance the mathematical reasoning capabilities of large language models (LLMs). We argue that the data scaling law for math reasoning capabilities in modern LLMs is far from being saturated, highlighting how the model's quality improves with increases in data quantity. To support this claim, we introduce the Skywork-Math model series, supervised fine-tuned (SFT) on common 7B LLMs using our proposed 2.5M-instance Skywork-MathQA dataset. Skywork-Math 7B has achieved impressive accuracies of 51.2% on the competition-level MATH benchmark and 83.9% on the GSM8K benchmark using only SFT data, outperforming an early version of GPT-4 on MATH. The superior performance of Skywork-Math models stems from our novel two-stage data synthesis and model SFT pipelines, which include three different augmentation methods and a diverse seed problem set, ensuring both the quantity and quality of the Skywork-MathQA dataset across varying difficulty levels. Most importantly, we provide several practical takeaways to enhance math reasoning abilities in LLMs for both research and industry applications.
[ "['Liang Zeng' 'Liangjun Zhong' 'Liang Zhao' 'Tianwen Wei' 'Liu Yang'\n 'Jujie He' 'Cheng Cheng' 'Rui Hu' 'Yang Liu' 'Shuicheng Yan' 'Han Fang'\n 'Yahui Zhou']" ]
null
null
2407.08351
null
null
http://arxiv.org/pdf/2407.08351v1
2024-07-11T10:03:47Z
2024-07-11T10:03:47Z
AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models
Evaluation is critical for assessing capabilities, tracking scientific progress, and informing model selection. In this paper, we present three desiderata for a good benchmark for language models: (i) salience (e.g., knowledge about World War II is more salient than a random day in history), (ii) novelty (i.e., the benchmark reveals new trends in model rankings not shown by previous benchmarks), and (iii) difficulty (i.e., the benchmark should be difficult for existing models, leaving headroom for future improvement). We operationalize these three desiderata and cast benchmark creation as a search problem, that of finding benchmarks that satisfy all three desiderata. To tackle this search problem, we present AutoBencher, which uses a language model to automatically search for datasets that meet the three desiderata. AutoBencher uses privileged information (e.g. relevant documents) to construct reliable datasets, and adaptivity with reranking to optimize for the search objective. We use AutoBencher to create datasets for math, multilingual, and knowledge-intensive question answering. The scalability of AutoBencher allows it to test fine-grained categories and tail knowledge, creating datasets that are on average 27% more novel and 22% more difficult than existing benchmarks. A closer investigation of our constructed datasets shows that we can identify specific knowledge gaps in language models that are not captured by existing benchmarks, such as Gemini Pro performing much worse on question answering about the Permian Extinction and Fordism, while OpenAGI-7B performs surprisingly well on QA about COVID-19.
[ "['Xiang Lisa Li' 'Evan Zheran Liu' 'Percy Liang' 'Tatsunori Hashimoto']" ]
null
null
2407.08362
null
null
http://arxiv.org/pdf/2407.08362v1
2024-07-11T10:15:52Z
2024-07-11T10:15:52Z
STAL: Spike Threshold Adaptive Learning Encoder for Classification of Pain-Related Biosignal Data
This paper presents the first application of spiking neural networks (SNNs) for the classification of chronic lower back pain (CLBP) using the EmoPain dataset. Our work has two main contributions. We introduce Spike Threshold Adaptive Learning (STAL), a trainable encoder that effectively converts continuous biosignals into spike trains. Additionally, we propose an ensemble of Spiking Recurrent Neural Network (SRNN) classifiers for the multi-stream processing of sEMG and IMU data. To tackle the challenges of small sample size and class imbalance, we implement minority over-sampling with weighted sample replacement during batch creation. Our method achieves outstanding performance with an accuracy of 80.43%, AUC of 67.90%, F1 score of 52.60%, and Matthews Correlation Coefficient (MCC) of 0.437, surpassing traditional rate-based and latency-based encoding methods. The STAL encoder shows superior performance in preserving temporal dynamics and adapting to signal characteristics. Importantly, our approach (STAL-SRNN) outperforms the best deep learning method in terms of MCC, indicating better balanced class prediction. This research contributes to the development of neuromorphic computing for biosignal analysis. It holds promise for energy-efficient, wearable solutions in chronic pain management.
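A hedged sketch of a trainable threshold encoder in the spirit of STAL, with per-channel thresholds learned through a straight-through estimator; the real architecture and training objective differ:

```python
import torch
import torch.nn as nn

class ThresholdEncoder(nn.Module):
    """Converts continuous biosignals into spike trains via learnable per-channel thresholds."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(n_channels))  # adaptive threshold per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) continuous signal, e.g., sEMG/IMU streams
        pre = x - self.theta
        spikes = (pre > 0).float()
        # straight-through estimator: hard spikes forward, sigmoid gradient backward
        return spikes + torch.sigmoid(pre) - torch.sigmoid(pre).detach()

enc = ThresholdEncoder(n_channels=8)
x = torch.randn(4, 100, 8)
spike_train = enc(x)          # binary spike trains, differentiable with respect to theta
```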
[ "['Freek Hens' 'Mohammad Mahdi Dehshibi' 'Leila Bagheriye'\n 'Mahyar Shahsavari' 'Ana Tajadura-Jiménez']" ]
null
null
2407.08364
null
null
http://arxiv.org/pdf/2407.08364v1
2024-07-11T10:18:54Z
2024-07-11T10:18:54Z
Scalar Function Topology Divergence: Comparing Topology of 3D Objects
We propose a new topological tool for computer vision - Scalar Function Topology Divergence (SFTD), which measures the dissimilarity of multi-scale topology between sublevel sets of two functions having a common domain. Functions can be defined on an undirected graph or Euclidean space of any dimensionality. Most of the existing methods for comparing topology are based on Wasserstein distance between persistence barcodes, and they do not take into account the localization of topological features. On the other hand, the minimization of SFTD ensures that the corresponding topological features of scalar functions are located in the same places. The proposed tool provides useful visualizations depicting areas where functions have topological dissimilarities. We provide applications of the proposed method to 3D computer vision. In particular, experiments demonstrate that SFTD improves the reconstruction of cellular 3D shapes from 2D fluorescence microscopy images, and helps to identify topological errors in 3D segmentation.
[ "['Ilya Trofimov' 'Daria Voronkova' 'Eduard Tulchinskii' 'Evgeny Burnaev'\n 'Serguei Barannikov']" ]
null
null
2407.08415
null
null
http://arxiv.org/pdf/2407.08415v1
2024-07-11T11:41:29Z
2024-07-11T11:41:29Z
Parallelizing Autoregressive Generation with Variational State Space Models
Attention-based models such as Transformers and recurrent models like state space models (SSMs) have emerged as successful methods for autoregressive sequence modeling. Although both enable parallel training, neither enables parallel generation due to their autoregressiveness. We propose the variational SSM (VSSM), a variational autoencoder (VAE) where both the encoder and decoder are SSMs. Since sampling the latent variables and decoding them with the SSM can be parallelized, both training and generation can be conducted in parallel. Moreover, the decoder recurrence allows generation to be resumed without reprocessing the whole sequence. Finally, we propose the autoregressive VSSM that can be conditioned on a partial realization of the sequence, as is common in language generation tasks. Interestingly, the autoregressive VSSM still enables parallel generation. On toy problems (MNIST, CIFAR), we highlight the empirical gains in speed-up and show that it competes with traditional models (Transformer, Mamba SSM) in terms of generation quality.
[ "['Gaspard Lambrechts' 'Yann Claes' 'Pierre Geurts' 'Damien Ernst']" ]
null
null
2407.08417
null
null
http://arxiv.org/pdf/2407.08417v1
2024-07-11T11:47:43Z
2024-07-11T11:47:43Z
Unveiling the Potential of BERTopic for Multilingual Fake News Analysis -- Use Case: Covid-19
Topic modeling is frequently used for analysing large text corpora such as news articles or social media data. BERTopic, consisting of sentence embedding, dimension reduction, clustering, and topic extraction, is the newest and currently the SOTA topic modeling method. However, current topic modeling methods have room for improvement because, as unsupervised methods, they require careful tuning and selection of hyperparameters, e.g., for dimension reduction and clustering. This paper aims to analyse the technical application of BERTopic in practice. For this purpose, it compares and selects different methods and hyperparameters for each stage of BERTopic through density-based clustering validation and six different topic coherence measures. Moreover, it also aims to analyse the results of topic modeling on real-world data as a use case. For this purpose, we created the German fake news dataset (GermanFakeNCovid) on Covid-19 and combined it with the FakeCovid dataset in order to experiment with topic modeling in a multilingual (English and German) setting. With the final results, we were able to determine thematic similarities between the United States and Germany, whereas distinguishing the topics of fake news from India proved to be more challenging.
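One plausible configuration of the BERTopic stages discussed above, using the library's documented interface; the specific hyperparameter values are illustrative, since the paper selects them via density-based clustering validation and coherence measures:

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # covers English and German
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric="cosine")
hdbscan_model = HDBSCAN(min_cluster_size=15, metric="euclidean",
                        cluster_selection_method="eom", prediction_data=True)

topic_model = BERTopic(embedding_model=embedder,
                       umap_model=umap_model,
                       hdbscan_model=hdbscan_model,
                       calculate_probabilities=True)

docs = ["..."]                                 # placeholder: the fake-news article corpus
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())     # inspect the extracted topics
```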
[ "['Karla Schäfer' 'Jeong-Eun Choi' 'Inna Vogel' 'Martin Steinebach']" ]
null
null
2407.08418
null
null
http://arxiv.org/pdf/2407.08418v2
2024-07-12T02:55:16Z
2024-07-11T11:51:36Z
PredBench: Benchmarking Spatio-Temporal Prediction across Diverse Disciplines
In this paper, we introduce PredBench, a benchmark tailored for the holistic evaluation of spatio-temporal prediction networks. Despite significant progress in this field, there remains a lack of a standardized framework for a detailed and comparative analysis of various prediction network architectures. PredBench addresses this gap by conducting large-scale experiments, upholding standardized and appropriate experimental settings, and implementing multi-dimensional evaluations. This benchmark integrates 12 widely adopted methods with 15 diverse datasets across multiple application domains, offering extensive evaluation of contemporary spatio-temporal prediction networks. Through meticulous calibration of prediction settings across various applications, PredBench ensures evaluations relevant to their intended use and enables fair comparisons. Moreover, its multi-dimensional evaluation framework broadens the analysis with a comprehensive set of metrics, providing deep insights into the capabilities of models. The findings from our research offer strategic directions for future developments in the field. Our codebase is available at https://github.com/OpenEarthLab/PredBench.
[ "['ZiDong Wang' 'Zeyu Lu' 'Di Huang' 'Tong He' 'Xihui Liu' 'Wanli Ouyang'\n 'Lei Bai']" ]
null
null
2407.08432
null
null
http://arxiv.org/pdf/2407.08432v1
2024-07-11T12:12:55Z
2024-07-11T12:12:55Z
Subgroup-Specific Risk-Controlled Dose Estimation in Radiotherapy
Cancer remains a leading cause of death, highlighting the importance of effective radiotherapy (RT). Magnetic resonance-guided linear accelerators (MR-Linacs) enable imaging during RT, allowing for inter-fraction, and perhaps even intra-fraction, adjustments of treatment plans. However, achieving this requires fast and accurate dose calculations. While Monte Carlo simulations offer accuracy, they are computationally intensive. Deep learning frameworks show promise, yet lack uncertainty quantification crucial for high-risk applications like RT. Risk-controlling prediction sets (RCPS) offer model-agnostic uncertainty quantification with mathematical guarantees. However, we show that naive application of RCPS may lead to only certain subgroups such as the image background being risk-controlled. In this work, we extend RCPS to provide prediction intervals with coverage guarantees for multiple subgroups with unknown subgroup membership at test time. We evaluate our algorithm on real clinical planning volumes from five different anatomical regions and show that our novel subgroup RCPS (SG-RCPS) algorithm leads to prediction intervals that jointly control the risk for multiple subgroups. In particular, our method controls the risk of the crucial voxels along the radiation beam significantly better than conventional RCPS.
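The subgroup failure mode can be illustrated by calibrating one interval-width multiplier per subgroup so that risk is controlled in each group separately rather than only on average; this is a simplification, as SG-RCPS additionally handles unknown subgroup membership at test time:

```python
import numpy as np

def calibrate_lambda(residuals: np.ndarray, sigma: np.ndarray, alpha: float = 0.1) -> float:
    """Smallest lambda such that P(|residual| > lambda * sigma) <= alpha on calibration data."""
    return float(np.quantile(np.abs(residuals) / sigma, 1.0 - alpha))

rng = np.random.default_rng(0)
# the large, easy 'background' subgroup vs. the small, hard 'beam' subgroup
groups = {"background": rng.normal(0, 0.1, 10_000), "beam": rng.normal(0, 1.0, 500)}
sigma_hat = {g: np.full_like(r, 0.5) for g, r in groups.items()}   # model's uncertainty estimate

lambdas = {g: calibrate_lambda(groups[g], sigma_hat[g]) for g in groups}
print(lambdas)   # the 'beam' subgroup needs a much larger multiplier than the background
```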
[ "['Paul Fischer' 'Hannah Willms' 'Moritz Schneider' 'Daniela Thorwarth'\n 'Michael Muehlebach' 'Christian F. Baumgartner']" ]
null
null
2407.08434
null
null
http://arxiv.org/pdf/2407.08434v1
2024-07-11T12:17:31Z
2024-07-11T12:17:31Z
Improve Load Forecasting in Energy Communities through Transfer Learning using Open-Access Synthetic Profiles
According to a conservative estimate, a 1% reduction in forecast error for a 10 GW energy utility can save up to $1.6 million annually. In our context, achieving precise forecasts of future power consumption is crucial for operating flexible energy assets using model predictive control approaches. Specifically, this work focuses on the load profile forecast of a first-year energy community with the common practical challenge of limited historical data availability. We propose to pre-train the load prediction models with open-access synthetic load profiles using transfer learning techniques to tackle this challenge. Results show that this approach improves training stability and reduces the prediction error. In a test case with 74 households, the prediction mean squared error (MSE) decreased from 0.34 to 0.13, showing transfer learning based on synthetic load profiles to be a viable approach to compensate for a lack of historical data.
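A schematic of the two-stage transfer-learning recipe, assuming a toy day-ahead forecaster; the model, horizon, and data below are placeholders rather than the paper's setup:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(96, 128), nn.ReLU(), nn.Linear(128, 96))  # day-ahead at 15-min steps
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def fit(inputs: torch.Tensor, targets: torch.Tensor, epochs: int) -> None:
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()

# Stage 1: pre-train on abundant open-access synthetic load profiles
x_syn, y_syn = torch.randn(5000, 96), torch.randn(5000, 96)   # placeholders
fit(x_syn, y_syn, epochs=50)

# Stage 2: fine-tune on the community's limited first-year history
x_real, y_real = torch.randn(200, 96), torch.randn(200, 96)   # placeholders
fit(x_real, y_real, epochs=20)
```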
[ "['Lukas Moosbrugger' 'Valentin Seiler' 'Gerhard Huber' 'Peter Kepplinger']" ]
null
null
2407.08442
null
null
http://arxiv.org/pdf/2407.08442v1
2024-07-11T12:33:28Z
2024-07-11T12:33:28Z
How Deep is your Guess? A Fresh Perspective on Deep Learning for Medical Time-Series Imputation
We introduce a novel classification framework for time-series imputation using deep learning, with a particular focus on clinical data. By identifying conceptual gaps in the literature and existing reviews, we devise a taxonomy grounded on the inductive bias of neural imputation frameworks, resulting in a classification of existing deep imputation strategies based on their suitability for specific imputation scenarios and data-specific properties. Our review further examines the existing methodologies employed to benchmark deep imputation models, evaluating their effectiveness in capturing the missingness scenarios found in clinical data and emphasising the importance of reconciling mathematical abstraction with clinical insights. Our classification aims to serve as a guide for researchers to facilitate the selection of appropriate deep learning imputation techniques tailored to their specific clinical data. Our novel perspective also highlights the significance of bridging the gap between computational methodologies and medical insights to achieve clinically sound imputation models.
[ "['Linglong Qian' 'Tao Wang' 'Jun Wang' 'Hugh Logan Ellis' 'Robin Mitra'\n 'Richard Dobson' 'Zina Ibrahim']" ]
null
null
2407.08458
null
null
http://arxiv.org/pdf/2407.08458v1
2024-07-11T12:54:38Z
2024-07-11T12:54:38Z
Joint Optimization of Age of Information and Energy Consumption in NR-V2X System based on Deep Reinforcement Learning
As autonomous driving may be the most important application scenario of the next generation, the development of wireless access technologies enabling reliable and low-latency vehicle communication becomes crucial. To address this, 3GPP has developed Vehicle-to-Everything (V2X) specifications based on 5G New Radio (NR) technology, where Mode 2 Side-Link (SL) communication resembles Mode 4 in LTE-V2X, allowing direct communication between vehicles. This supplements SL communication in LTE-V2X and represents the latest advancement in cellular V2X (C-V2X), with the improved performance of NR-V2X. However, in NR-V2X Mode 2, resource collisions still occur and thus degrade the age of information (AoI). Therefore, an interference cancellation method is employed to mitigate this impact by combining NR-V2X with Non-Orthogonal Multiple Access (NOMA) technology. In NR-V2X, when vehicles select a smaller resource reservation interval (RRI), higher-frequency transmissions take more energy to reduce AoI. Hence, it is important to jointly consider AoI and communication energy consumption based on NR-V2X communication. We formulate such an optimization problem and employ a Deep Reinforcement Learning (DRL) algorithm to compute the optimal transmission RRI and transmission power for each transmitting vehicle, reducing the energy consumption of each transmitting vehicle and the AoI of each receiving vehicle. Extensive simulations demonstrate the performance of our proposed algorithm.
[ "['Shulin Song' 'Zheng Zhang' 'Qiong Wu' 'Qiang Fan' 'Pingyi Fan']" ]
null
null
2407.08459
null
null
http://arxiv.org/pdf/2407.08459v1
2024-07-11T12:58:07Z
2024-07-11T12:58:07Z
Graph Expansions of Deep Neural Networks and their Universal Scaling Limits
We present a unified approach to obtain scaling limits of neural networks using the genus expansion technique from random matrix theory. This approach begins with a novel expansion of neural networks which is reminiscent of Butcher series for ODEs, and is obtained through a generalisation of Faà di Bruno's formula to an arbitrary number of compositions. In this expansion, the role of monomials is played by random multilinear maps indexed by directed graphs whose edges correspond to random matrices, which we call operator graphs. This expansion linearises the effect of the activation functions, allowing for the direct application of Wick's principle to compute the expectation of each of its terms. We then determine the leading contribution to each term by embedding the corresponding graphs onto surfaces, and computing their Euler characteristic. Furthermore, by developing a correspondence between analytic and graphical operations, we obtain similar graph expansions for the neural tangent kernel as well as the input-output Jacobian of the original neural network, and derive their infinite-width limits with relative ease. Notably, we find explicit formulae for the moments of the limiting singular value distribution of the Jacobian. We then show that all of these results hold for networks with more general weights, such as general matrices with i.i.d. entries satisfying moment assumptions, complex matrices and sparse matrices.
[ "['Nicola Muca Cirone' 'Jad Hamdan' 'Cristopher Salvi']" ]
null
null
2407.08462
null
null
http://arxiv.org/pdf/2407.08462v1
2024-07-11T12:58:47Z
2024-07-11T12:58:47Z
Distributed Deep Reinforcement Learning Based Gradient Quantization for Federated Learning Enabled Vehicle Edge Computing
Federated Learning (FL) can protect the privacy of the vehicles in vehicle edge computing (VEC) to a certain extent through sharing the gradients of vehicles' local models instead of local data. The gradients of vehicles' local models are usually large for vehicular artificial intelligence (AI) applications, thus transmitting such large gradients would cause large per-round latency. Gradient quantization has been proposed as one effective approach to reduce the per-round latency in FL enabled VEC through compressing gradients and reducing the number of bits, i.e., the quantization level, to transmit gradients. The selection of quantization level and thresholds determines the quantization error, which further affects the model accuracy and training time. As a result, the total training time and quantization error (QE) become two key metrics for FL enabled VEC, and it is critical to jointly optimize them. However, the time-varying channel condition makes this problem more challenging to solve. In this paper, we propose a distributed deep reinforcement learning (DRL)-based quantization level allocation scheme to optimize the long-term reward in terms of the total training time and QE. Extensive simulations identify the optimal weighted factors between the total training time and QE, and demonstrate the feasibility and effectiveness of the proposed scheme.
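An illustrative stochastic gradient quantizer (QSGD-style) showing how the quantization level s trades off bits against quantization error; the paper's scheme additionally allocates the level per round with distributed DRL, which is not reproduced here:

```python
import torch

def quantize(g: torch.Tensor, s: int) -> torch.Tensor:
    """Stochastically round |g|/||g|| onto s uniform levels; rounding noise keeps it unbiased."""
    norm = g.norm()
    if norm == 0:
        return g
    level = (g.abs() / norm) * s               # position in [0, s]
    lower = level.floor()
    q = lower + torch.bernoulli(level - lower)  # stochastic rounding
    return norm * torch.sign(g) * q / s

g = torch.randn(1000)
print(quantize(g, s=4).unique().numel(), "distinct values vs.", g.unique().numel())
```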
[ "['Cui Zhang' 'Wenjun Zhang' 'Qiong Wu' 'Pingyi Fan' 'Qiang Fan'\n 'Jiangzhou Wang' 'Khaled B. Letaief']" ]
null
null
2407.08464
null
null
http://arxiv.org/pdf/2407.08464v1
2024-07-11T13:01:18Z
2024-07-11T13:01:18Z
TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations
Unsupervised goal-conditioned reinforcement learning (GCRL) is a promising paradigm for developing diverse robotic skills without external supervision. However, existing unsupervised GCRL methods often struggle to cover a wide range of states in complex environments due to their limited exploration and sparse or noisy rewards for GCRL. To overcome these challenges, we propose a novel unsupervised GCRL method that leverages TemporaL Distance-aware Representations (TLDR). TLDR selects faraway goals to initiate exploration and computes intrinsic exploration rewards and goal-reaching rewards, based on temporal distance. Specifically, our exploration policy seeks states with large temporal distances (i.e. covering a large state space), while the goal-conditioned policy learns to minimize the temporal distance to the goal (i.e. reaching the goal). Our experimental results in six simulated robotic locomotion environments demonstrate that our method significantly outperforms previous unsupervised GCRL methods in achieving a wide variety of states.
[ "['Junik Bae' 'Kwanyoung Park' 'Youngwoon Lee']" ]
null
null
2407.08479
null
null
http://arxiv.org/pdf/2407.08479v1
2024-07-11T13:13:24Z
2024-07-11T13:13:24Z
Robust Generalization of Graph Neural Networks for Carrier Scheduling
Battery-free sensor tags are devices that leverage backscatter techniques to communicate with standard IoT devices, thereby augmenting a network's sensing capabilities in a scalable way. For communicating, a sensor tag relies on an unmodulated carrier provided by a neighboring IoT device, with a schedule coordinating this provisioning across the network. Carrier scheduling--computing schedules to interrogate all sensor tags while minimizing energy, spectrum utilization, and latency--is an NP-Hard optimization problem. Recent work introduces learning-based schedulers that achieve resource savings over a carefully-crafted heuristic, generalizing to networks of up to 60 nodes. However, we find that their advantage diminishes in networks with hundreds of nodes, and degrades further in larger setups. This paper introduces RobustGANTT, a GNN-based scheduler that improves generalization (without re-training) to networks up to 1000 nodes (100x training topology sizes). RobustGANTT not only achieves better and more consistent generalization, but also computes schedules requiring up to 2x less resources than existing systems. Our scheduler exhibits average runtimes of hundreds of milliseconds, allowing it to react fast to changing network conditions. Our work not only improves resource utilization in large-scale backscatter networks, but also offers valuable insights in learning-based scheduling.
[ "['Daniel F. Perez-Ramirez' 'Carlos Pérez-Penichet' 'Nicolas Tsiftes'\n 'Dejan Kostic' 'Magnus Boman' 'Thiemo Voigt']" ]
null
null
2407.08500
null
null
http://arxiv.org/pdf/2407.08500v1
2024-07-11T13:35:22Z
2024-07-11T13:35:22Z
Latent Conditional Diffusion-based Data Augmentation for Continuous-Time Dynamic Graph Model
Continuous-Time Dynamic Graph (CTDG) precisely models evolving real-world relationships, drawing heightened interest in dynamic graph learning across academia and industry. However, existing CTDG models encounter challenges stemming from noise and limited historical data. Graph Data Augmentation (GDA) emerges as a critical solution, yet current approaches primarily focus on static graphs and struggle to effectively address the dynamics inherent in CTDGs. Moreover, these methods often demand substantial domain expertise for parameter tuning and lack theoretical guarantees for augmentation efficacy. To address these issues, we propose Conda, a novel latent diffusion-based GDA method tailored for CTDGs. Conda features a sandwich-like architecture, incorporating a Variational Auto-Encoder (VAE) and a conditional diffusion model, aimed at generating enhanced historical neighbor embeddings for target nodes. Unlike conventional diffusion models trained on entire graphs via pre-training, Conda requires historical neighbor sequence embeddings of target nodes for training, thus facilitating more targeted augmentation. We integrate Conda into the CTDG model and adopt an alternating training strategy to optimize performance. Extensive experimentation across six widely used real-world datasets showcases the consistent performance improvement of our approach, particularly in scenarios with limited historical data.
[ "['Yuxing Tian' 'Yiyan Qi' 'Aiwen Jiang' 'Qi Huang' 'Jian Guo']" ]
null
null
2407.08546
null
null
http://arxiv.org/pdf/2407.08546v1
2024-07-11T14:30:49Z
2024-07-11T14:30:49Z
Quantitative Evaluation of the Saliency Map for Alzheimer's Disease Classifier with Anatomical Segmentation
Saliency maps have been widely used to interpret deep learning classifiers for Alzheimer's disease (AD). However, since AD is heterogeneous and has multiple subtypes, the pathological mechanism of AD remains not fully understood and may vary from patient to patient. Due to the lack of such understanding, it is difficult to comprehensively and effectively assess the saliency map of an AD classifier. In this paper, we utilize anatomical segmentation to allocate saliency values to different brain regions. By plotting the distributions of saliency maps corresponding to AD and NC (Normal Control), we can gain a comprehensive view of the model's decision process. To leverage the fact that brain volume shrinkage occurs in AD patients during disease progression, we define a new evaluation metric, the brain volume change score (VCS), by computing the average Pearson correlation between the brain volume changes and the saliency values of a model in different brain regions for each patient. Thus, the VCS metric can help us gain some knowledge of how saliency maps resulting from different models relate to the changes of the volumes across different regions in the whole brain. We trained candidate models on the ADNI dataset and tested them on three different datasets. Our results indicate: (i) models with higher VCSs tend to demonstrate saliency maps with more details relevant to the AD pathology, (ii) using gradient-based adversarial training strategies such as FGSM and stochastic masking can improve the VCSs of the models.
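The VCS definition above translates almost directly into code, assuming region segmentation and saliency extraction have been done upstream; the data below is synthetic:

```python
import numpy as np
from scipy.stats import pearsonr

def volume_change_score(volume_changes: np.ndarray, saliency: np.ndarray) -> float:
    """Average over patients of the per-patient Pearson correlation between
    per-region volume change and per-region saliency; arrays are (n_patients, n_regions)."""
    per_patient = [pearsonr(v, s)[0] for v, s in zip(volume_changes, saliency)]
    return float(np.mean(per_patient))

rng = np.random.default_rng(1)
dv = rng.normal(size=(30, 90))                       # e.g., 90 atlas regions, 30 patients
sal = 0.6 * dv + rng.normal(scale=0.8, size=dv.shape)  # saliency loosely tracking atrophy
print(volume_change_score(dv, sal))
```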
[ "['Yihan Zhang' 'Xuanshuo Zhang' 'Wei Wu' 'Haohan Wang']" ]
null
null
2407.08560
null
null
http://arxiv.org/pdf/2407.08560v1
2024-07-11T14:47:44Z
2024-07-11T14:47:44Z
Causal inference through multi-stage learning and doubly robust deep neural networks
Deep neural networks (DNNs) have demonstrated remarkable empirical performance in large-scale supervised learning problems, particularly in scenarios where both the sample size $n$ and the dimension of covariates $p$ are large. This study delves into the application of DNNs across a wide spectrum of intricate causal inference tasks, where direct estimation falls short and necessitates multi-stage learning. Examples include estimating the conditional average treatment effect and dynamic treatment effect. In this framework, DNNs are constructed sequentially, with subsequent stages building upon preceding ones. To mitigate the impact of estimation errors from early stages on subsequent ones, we integrate DNNs in a doubly robust manner. In contrast to previous research, our study offers theoretical assurances regarding the effectiveness of DNNs in settings where the dimensionality $p$ expands with the sample size. These findings are significant independently and extend to degenerate single-stage learning problems.
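The doubly robust combination of stages can be illustrated with the classic AIPW estimator of the average treatment effect, where the outcome and propensity models could each be DNNs; the paper targets more intricate multi-stage estimands, so this is only the basic building block:

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, e):
    """y: outcomes, a: binary treatment, mu1/mu0: predicted potential outcomes,
    e: propensity scores. Consistent if either the outcome or propensity model is right."""
    return np.mean(mu1 - mu0
                   + a * (y - mu1) / e
                   - (1 - a) * (y - mu0) / (1 - e))

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
e_true = 1 / (1 + np.exp(-x))
a = rng.binomial(1, e_true)
y = 2.0 * a + x + rng.normal(size=n)                 # true ATE = 2.0
print(aipw_ate(y, a, mu1=2.0 + x, mu0=x, e=e_true))  # with oracle nuisances, close to 2.0
```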
[ "['Yuqian Zhang' 'Jelena Bradic']" ]
null
null
2407.08567
null
null
http://arxiv.org/pdf/2407.08567v1
2024-07-11T14:57:27Z
2024-07-11T14:57:27Z
Adaptive Parametric Activation
The activation function plays a crucial role in model optimisation, yet the optimal choice remains unclear. For example, the Sigmoid activation is the de-facto activation in balanced classification tasks; however, in imbalanced classification it proves inappropriate due to bias towards frequent classes. In this work, we delve deeper into this phenomenon by performing a comprehensive statistical analysis in the classification and intermediate layers of both balanced and imbalanced networks, and we empirically show that aligning the activation function with the data distribution enhances the performance in both balanced and imbalanced tasks. To this end, we propose the Adaptive Parametric Activation (APA) function, a novel and versatile activation function that unifies most common activation functions under a single formula. APA can be applied in both intermediate layers and attention layers, significantly outperforming the state-of-the-art on several imbalanced benchmarks such as ImageNet-LT, iNaturalist2018, Places-LT, CIFAR100-LT and LVIS and balanced benchmarks such as ImageNet1K, COCO and V3DET. The code is available at https://github.com/kostas1515/AGLU.
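A generalized-logistic family in the spirit of APA, with learnable per-channel parameters that recover the standard Sigmoid at unit values; the exact formula and placement should be checked against the released AGLU code, so treat this parameterization as an assumption:

```python
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    """f(z) = (lambda * exp(-kappa * z) + 1)^(-1/lambda); reduces to Sigmoid at lambda = kappa = 1."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.lam = nn.Parameter(torch.ones(n_channels))
        self.kappa = nn.Parameter(torch.ones(n_channels))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        lam = self.lam.clamp(min=1e-4)          # keep the exponent well defined
        return torch.pow(lam * torch.exp(-self.kappa * z) + 1.0, -1.0 / lam)

act = AdaptiveActivation(16)
z = torch.randn(8, 16)
assert torch.allclose(act(z), torch.sigmoid(z), atol=1e-6)   # at initialization, lambda = kappa = 1
```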
[ "['Konstantinos Panagiotis Alexandridis' 'Jiankang Deng' 'Anh Nguyen'\n 'Shan Luo']" ]
null
null
2407.08571
null
null
http://arxiv.org/pdf/2407.08571v1
2024-07-11T14:59:17Z
2024-07-11T14:59:17Z
Multi-Group Proportional Representation
Image search and retrieval tasks can perpetuate harmful stereotypes, erase cultural identities, and amplify social disparities. Current approaches to mitigate these representational harms balance the number of retrieved items across population groups defined by a small number of (often binary) attributes. However, most existing methods overlook intersectional groups determined by combinations of group attributes, such as gender, race, and ethnicity. We introduce Multi-Group Proportional Representation (MPR), a novel metric that measures representation across intersectional groups. We develop practical methods for estimating MPR, provide theoretical guarantees, and propose optimization algorithms to ensure MPR in retrieval. We demonstrate that existing methods optimizing for equal and proportional representation metrics may fail to promote MPR. Crucially, our work shows that optimizing MPR yields more proportional representation across multiple intersectional groups specified by a rich function class, often with minimal compromise in retrieval accuracy.
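A simplified reading of the metric for the special case where the function class consists of indicators of intersectional groups: the worst absolute gap between a group's share among retrieved items and its share in the population. The paper defines MPR over a richer function class, so this only conveys the intuition:

```python
import numpy as np
from itertools import product

def mpr_indicator(attrs_retrieved: np.ndarray, attrs_population: np.ndarray) -> float:
    """attrs_*: (n, k) integer-coded attribute matrices (e.g., gender, race columns)."""
    k = attrs_population.shape[1]
    levels = [np.unique(attrs_population[:, j]) for j in range(k)]
    worst = 0.0
    for combo in product(*levels):                        # every intersectional group
        in_ret = np.all(attrs_retrieved == combo, axis=1).mean()
        in_pop = np.all(attrs_population == combo, axis=1).mean()
        worst = max(worst, abs(in_ret - in_pop))
    return worst

pop = np.random.default_rng(0).integers(0, 2, size=(10_000, 3))  # three binary attributes
ret = pop[:50]                                                   # a retrieval of 50 items
print(mpr_indicator(ret, pop))                                   # 0 would be perfectly proportional
```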
[ "['Alex Oesterling' 'Claudio Mayrink Verdun' 'Carol Xuan Long' 'Alex Glynn'\n 'Lucas Monteiro Paes' 'Sajani Vithana' 'Martina Cardone'\n 'Flavio P. Calmon']" ]
null
null
2407.08583
null
null
http://arxiv.org/pdf/2407.08583v1
2024-07-11T15:08:11Z
2024-07-11T15:08:11Z
The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective
Recent years have witnessed the rapid development of large language models (LLMs). Based on the powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from text to a broader spectrum of domains, attracting widespread attention due to the broader range of application scenarios. As LLMs and MLLMs rely on vast amounts of model parameters and data to achieve emergent capabilities, the importance of data is receiving increasingly widespread attention and recognition. Tracing and analyzing recent data-oriented works for MLLMs, we find that the development of models and data is not two separate paths but rather interconnected. On the one hand, vaster and higher-quality data contribute to better performance of MLLMs; on the other hand, MLLMs can facilitate the development of data. The co-development of multi-modal data and MLLMs requires a clear view of 1) at which development stage of MLLMs specific data-centric approaches can be employed to enhance which capabilities, and 2) by utilizing which capabilities and acting as which roles models can contribute to multi-modal data. To promote data-model co-development for the MLLM community, we systematically review existing works related to MLLMs from the data-model co-development perspective. A regularly maintained project associated with this survey is accessible at https://github.com/modelscope/data-juicer/blob/main/docs/awesome_llm_data.md.
[ "['Zhen Qin' 'Daoyuan Chen' 'Wenhao Zhang' 'Liuyi Yao' 'Yilun Huang'\n 'Bolin Ding' 'Yaliang Li' 'Shuiguang Deng']" ]
null
null
2407.08585
null
null
http://arxiv.org/pdf/2407.08585v1
2024-07-11T15:10:14Z
2024-07-11T15:10:14Z
HACMan++: Spatially-Grounded Motion Primitives for Manipulation
Although end-to-end robot learning has shown some success for robot manipulation, the learned policies are often not sufficiently robust to variations in object pose or geometry. To improve the policy generalization, we introduce spatially-grounded parameterized motion primitives in our method HACMan++. Specifically, we propose an action representation consisting of three components: what primitive type (such as grasp or push) to execute, where the primitive will be grounded (e.g. where the gripper will make contact with the world), and how the primitive motion is executed, such as parameters specifying the push direction or grasp orientation. These three components define a novel discrete-continuous action space for reinforcement learning. Our framework enables robot agents to learn to chain diverse motion primitives together and select appropriate primitive parameters to complete long-horizon manipulation tasks. By grounding the primitives on a spatial location in the environment, our method is able to effectively generalize across object shape and pose variations. Our approach significantly outperforms existing methods, particularly in complex scenarios demanding both high-level sequential reasoning and object generalization. With zero-shot sim-to-real transfer, our policy succeeds in challenging real-world manipulation tasks, with generalization to unseen objects. Videos can be found on the project website: https://sgmp-rss2024.github.io.
[ "['Bowen Jiang' 'Yilin Wu' 'Wenxuan Zhou' 'Chris Paxton' 'David Held']" ]
null
null
2407.08590
null
null
http://arxiv.org/pdf/2407.08590v1
2024-07-11T15:13:28Z
2024-07-11T15:13:28Z
A Review of Nine Physics Engines for Reinforcement Learning Research
We present a review of popular simulation engines and frameworks used in reinforcement learning (RL) research, aiming to guide researchers in selecting tools for creating simulated physical environments for RL and training setups. It evaluates nine frameworks (Brax, Chrono, Gazebo, MuJoCo, ODE, PhysX, PyBullet, Webots, and Unity) based on their popularity, feature range, quality, usability, and RL capabilities. We highlight the challenges in selecting and utilizing physics engines for RL research, including the need for detailed comparisons and an understanding of each framework's capabilities. Key findings indicate MuJoCo as the leading framework due to its performance and flexibility, despite usability challenges. Unity is noted for its ease of use but lacks scalability and simulation fidelity. The study calls for further development to improve simulation engines' usability and performance and stresses the importance of transparency and reproducibility in RL research. This review contributes to the RL community by offering insights into the selection process for simulation engines, facilitating informed decision-making.
[ "['Michael Kaup' 'Cornelius Wolff' 'Hyerim Hwang' 'Julius Mayer'\n 'Elia Bruni']" ]
null
null
2407.08597
null
null
http://arxiv.org/pdf/2407.08597v1
2024-07-11T15:25:02Z
2024-07-11T15:25:02Z
Learning Program Behavioral Models from Synthesized Input-Output Pairs
We introduce Modelizer - a novel framework that, given a black-box program, learns a _model from its input/output behavior_ using _neural machine translation_. The resulting model _mocks_ the original program: Given an input, the model predicts the output that would have been produced by the program. However, the model is also _reversible_ - that is, the model can predict the input that would have produced a given output. Finally, the model is _differentiable_ and can be efficiently restricted to predict only a certain aspect of the program behavior. Modelizer uses _grammars_ to synthesize inputs and to parse the resulting outputs, allowing it to learn sequence-to-sequence associations between token streams. Other than input and output grammars, Modelizer only requires the ability to execute the program. The resulting models are _small_, requiring fewer than 6.3 million parameters for languages such as Markdown or HTML; and they are _accurate_, achieving up to 95.4% accuracy and a BLEU score of 0.98 with standard error 0.04 in mocking real-world applications. We foresee several _applications_ of these models, especially as the output of the program can be any aspect of program behavior. Besides mocking and predicting program behavior, the model can also synthesize inputs that are likely to produce a particular behavior, such as failures or coverage.
[ "['Tural Mammadov' 'Dietrich Klakow' 'Alexander Koller' 'Andreas Zeller']" ]
null
null
2407.08608
null
null
http://arxiv.org/pdf/2407.08608v2
2024-07-12T22:15:02Z
2024-07-11T15:44:48Z
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
Attention, as a core layer of the ubiquitous Transformer architecture, is the bottleneck for large language models and long-context applications. FlashAttention elaborated an approach to speed up attention on GPUs through minimizing memory reads/writes. However, it has yet to take advantage of new capabilities present in recent hardware, with FlashAttention-2 achieving only 35% utilization on the H100 GPU. We develop three main techniques to speed up attention on Hopper GPUs: exploiting asynchrony of the Tensor Cores and TMA to (1) overlap overall computation and data movement via warp-specialization and (2) interleave block-wise matmul and softmax operations, and (3) block quantization and incoherent processing that leverages hardware support for FP8 low-precision. We demonstrate that our method, FlashAttention-3, achieves speedup on H100 GPUs by 1.5-2.0$\times$ with FP16 reaching up to 740 TFLOPs/s (75% utilization), and with FP8 reaching close to 1.2 PFLOPs/s. We validate that FP8 FlashAttention-3 achieves 2.6$\times$ lower numerical error than a baseline FP8 attention.
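Not FlashAttention-3 itself (which is Hopper-specific CUDA), but the block-wise online-softmax recurrence that all FlashAttention versions build on, shown in NumPy: K/V are processed in tiles while a running max and normalizer are maintained, so the full N x N score matrix never materializes:

```python
import numpy as np

def attention_tiled(q, K, V, block=128):
    """Single-query attention over K/V tiles with an online softmax."""
    m = -np.inf                        # running max of scores (for numerical stability)
    l = 0.0                            # running softmax normalizer
    acc = np.zeros(V.shape[1])
    for s in range(0, K.shape[0], block):
        scores = K[s:s + block] @ q / np.sqrt(q.shape[0])
        m_new = max(m, scores.max())
        correction = np.exp(m - m_new)             # rescale previously accumulated results
        p = np.exp(scores - m_new)
        l = l * correction + p.sum()
        acc = acc * correction + p @ V[s:s + block]
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
K, V, q = rng.normal(size=(1024, 64)), rng.normal(size=(1024, 64)), rng.normal(size=64)
ref = np.exp(K @ q / 8 - (K @ q / 8).max()); ref /= ref.sum()   # reference softmax attention
assert np.allclose(attention_tiled(q, K, V), ref @ V)
```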
[ "['Jay Shah' 'Ganesh Bikshandi' 'Ying Zhang' 'Vijay Thakkar'\n 'Pradeep Ramani' 'Tri Dao']" ]
null
null
2407.08610
null
null
http://arxiv.org/pdf/2407.08610v1
2024-07-11T15:48:36Z
2024-07-11T15:48:36Z
Semantic GUI Scene Learning and Video Alignment for Detecting Duplicate Video-based Bug Reports
Video-based bug reports are increasingly being used to document bugs for programs centered around a graphical user interface (GUI). However, developing automated techniques to manage video-based reports is challenging as it requires identifying and understanding often nuanced visual patterns that capture key information about a reported bug. In this paper, we aim to overcome these challenges by advancing the bug report management task of duplicate detection for video-based reports. To this end, we introduce a new approach, called JANUS, that adapts the scene-learning capabilities of vision transformers to capture subtle visual and textual patterns that manifest on app UI screens - which is key to differentiating between similar screens for accurate duplicate report detection. JANUS also makes use of a video alignment technique capable of adaptive weighting of video frames to account for typical bug manifestation patterns. In a comprehensive evaluation on a benchmark containing 7,290 duplicate detection tasks derived from 270 video-based bug reports from 90 Android app bugs, the best configuration of our approach achieves an overall mRR/mAP of 89.8%/84.7%, and for the large majority of duplicate detection tasks, outperforms prior work by around 9% to a statistically significant degree. Finally, we qualitatively illustrate how the scene-learning capabilities provided by JANUS benefit its performance.
[ "['Yanfu Yan' 'Nathan Cooper' 'Oscar Chaparro' 'Kevin Moran'\n 'Denys Poshyvanyk']" ]
null
null
2407.08623
null
null
http://arxiv.org/pdf/2407.08623v1
2024-07-11T16:00:22Z
2024-07-11T16:00:22Z
Surpassing Cosine Similarity for Multidimensional Comparisons: Dimension Insensitive Euclidean Metric (DIEM)
The advancement in computational power and hardware efficiency has enabled the tackling of increasingly complex and high-dimensional problems. While artificial intelligence (AI) has achieved remarkable results in various scientific and technological fields, the interpretability of these high-dimensional solutions remains challenging. A critical issue in this context is the comparison of multidimensional quantities, which is essential in techniques like Principal Component Analysis (PCA), Singular Value Decomposition (SVD), and k-means clustering. Common metrics such as cosine similarity, Euclidean distance, and Manhattan distance are often used for such comparisons - for example in muscular synergies of the human motor control system. However, their applicability and interpretability diminish as dimensionality increases. This paper provides a comprehensive analysis of the effects of dimensionality on these three widely used metrics. Our results reveal significant limitations of cosine similarity, particularly its dependency on the dimensionality of the vectors, leading to biased and less interpretable outcomes. To address this, we introduce the Dimension Insensitive Euclidean Metric (DIEM), derived from the Euclidean distance, which demonstrates superior robustness and generalizability across varying dimensions. DIEM maintains consistent variability and eliminates the biases observed in traditional metrics, making it a more reliable tool for high-dimensional comparisons. This novel metric has the potential to replace cosine similarity, providing a more accurate and insightful method to analyze multidimensional data in fields ranging from neuromotor control to machine learning and deep learning.
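A hedged sketch of the dimension-insensitivity idea: center and scale the raw Euclidean distance by its empirical mean and spread under random same-dimension vector pairs, so values stay comparable across dimensions; DIEM's exact normalization constants should be taken from the paper:

```python
import numpy as np

def diem_like(u: np.ndarray, v: np.ndarray, n_ref: int = 10_000, seed: int = 0) -> float:
    """Euclidean distance standardized against a null distribution at the same dimension."""
    d = u.shape[0]
    rng = np.random.default_rng(seed)
    a, b = rng.uniform(-1, 1, (n_ref, d)), rng.uniform(-1, 1, (n_ref, d))
    ref = np.linalg.norm(a - b, axis=1)           # null distances at this dimensionality
    return (np.linalg.norm(u - v) - ref.mean()) / ref.std()

for d in (3, 30, 300):
    rng = np.random.default_rng(d)
    u, v = rng.uniform(-1, 1, d), rng.uniform(-1, 1, d)
    print(d, round(diem_like(u, v), 2))           # stays O(1) as the dimension grows
```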
[ "['Federico Tessari' 'Neville Hogan']" ]
null
null
2407.08626
null
null
http://arxiv.org/pdf/2407.08626v1
2024-07-11T16:05:56Z
2024-07-11T16:05:56Z
RoboMorph: Evolving Robot Morphology using Large Language Models
We introduce RoboMorph, an automated approach for generating and optimizing modular robot designs using large language models (LLMs) and evolutionary algorithms. In this framework, we represent each robot design as a grammar and leverage the capabilities of LLMs to navigate the extensive robot design space, which is traditionally time-consuming and computationally demanding. By integrating automatic prompt design and a reinforcement learning based control algorithm, RoboMorph iteratively improves robot designs through feedback loops. Our experimental results demonstrate that RoboMorph can successfully generate nontrivial robots that are optimized for a single terrain while showcasing improvements in morphology over successive evolutions. Our approach demonstrates the potential of using LLMs for data-driven and modular robot design, providing a promising methodology that can be extended to other domains with similar design frameworks.
[ "['Kevin Qiu' 'Krzysztof Ciebiera' 'Paweł Fijałkowski' 'Marek Cygan'\n 'Łukasz Kuciński']" ]
null
null
2407.08632
null
null
http://arxiv.org/pdf/2407.08632v1
2024-07-11T16:12:53Z
2024-07-11T16:12:53Z
Generalization Error Matters in Decentralized Learning Under Byzantine Attacks
Recently, decentralized learning has emerged as a popular peer-to-peer signal and information processing paradigm that enables model training across geographically distributed agents in a scalable manner, without the presence of any central server. When some of the agents are malicious (also termed as Byzantine), resilient decentralized learning algorithms are able to limit the impact of these Byzantine agents without knowing their number and identities, and have guaranteed optimization errors. However, analysis of the generalization errors, which are critical to implementations of the trained models, is still lacking. In this paper, we provide the first analysis of the generalization errors for a class of popular Byzantine-resilient decentralized stochastic gradient descent (DSGD) algorithms. Our theoretical results reveal that the generalization errors cannot be entirely eliminated because of the presence of the Byzantine agents, even if the number of training samples is infinitely large. Numerical experiments are conducted to confirm our theoretical results.
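For concreteness, one standard Byzantine-resilient aggregation rule of the kind analyzed in this line of work, the coordinate-wise trimmed mean; the paper's contribution is the generalization-error analysis of such DSGD variants, not this rule itself:

```python
import numpy as np

def trimmed_mean(grads: np.ndarray, b: int) -> np.ndarray:
    """grads: (n_agents, dim); b: number of extreme values trimmed per side, per coordinate."""
    g = np.sort(grads, axis=0)
    return g[b : grads.shape[0] - b].mean(axis=0)

honest = np.random.default_rng(0).normal(1.0, 0.1, size=(8, 5))
byzantine = np.full((2, 5), 100.0)                          # malicious agents send huge gradients
print(trimmed_mean(np.vstack([honest, byzantine]), b=2))    # stays near the honest mean of ~1.0
```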
[ "['Haoxiang Ye' 'Qing Ling']" ]
null
null
2407.08639
null
null
http://arxiv.org/pdf/2407.08639v1
2024-07-11T16:21:18Z
2024-07-11T16:21:18Z
$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$
Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences. However, the performance of DPO is sensitive to the fine-tuning of its trade-off parameter $\beta$, as well as to the quality of the preference data. We analyze the impact of $\beta$ and data quality on DPO, uncovering that optimal $\beta$ values vary with the informativeness of pairwise data. Addressing the limitations of static $\beta$ values, we introduce a novel framework that dynamically calibrates $\beta$ at the batch level, informed by data quality considerations. Additionally, our method incorporates $\beta$-guided data filtering to safeguard against the influence of outliers. Through empirical evaluation, we demonstrate that our dynamic $\beta$ adjustment technique significantly improves DPO's performance across a range of models and datasets, offering a more robust and adaptable training paradigm for aligning LLMs with human feedback. The code is available at \url{https://github.com/junkangwu/beta-DPO}.
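A sketch of a batch-level dynamic $\beta$ on top of the standard DPO loss; the margin-based heuristic below is an illustrative assumption, and the actual calibration and $\beta$-guided filtering are in the linked repository:

```python
import torch
import torch.nn.functional as F

def dpo_loss_dynamic_beta(logp_chosen, logp_rejected, ref_chosen, ref_rejected,
                          beta0: float = 0.1, alpha: float = 0.5):
    """Standard DPO loss, with beta scaled per batch by a margin-based informativeness proxy."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    # more informative batches (larger average implicit-reward margin) get a larger beta
    beta = beta0 * (1.0 + alpha * torch.tanh(margin.mean().detach()))
    return -F.logsigmoid(beta * margin).mean(), beta

logp_c = torch.randn(32, requires_grad=True)   # policy log-probs of chosen responses
logp_r = torch.randn(32, requires_grad=True)   # policy log-probs of rejected responses
ref_c, ref_r = torch.randn(32), torch.randn(32)  # reference-model log-probs
loss, beta = dpo_loss_dynamic_beta(logp_c, logp_r, ref_c, ref_r)
loss.backward()
```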
[ "['Junkang Wu' 'Yuexiang Xie' 'Zhengyi Yang' 'Jiancan Wu' 'Jinyang Gao'\n 'Bolin Ding' 'Xiang Wang' 'Xiangnan He']" ]