Exploring Diffusion Models for Unsupervised Video Anomaly Detection ; This paper investigates the performance of diffusion models for video anomaly detection (VAD) within the most challenging but also the most operational scenario, in which data annotations are not used. Because abnormal events are sparse, diverse, contextual, and often ambiguous, detecting them precisely is a very ambitious task. To this end, we rely only on the information-rich spatiotemporal data and the reconstruction power of diffusion models, using a high reconstruction error to decide abnormality. Experiments performed on two large-scale video anomaly detection datasets demonstrate consistent improvement of the proposed method over state-of-the-art generative models, while in some cases our method achieves better scores than more complex models. This is the first study to use a diffusion model and examine the influence of its parameters, providing guidance for VAD in surveillance scenarios.
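A minimal sketch of the reconstruction-error criterion described above, assuming a trained noise predictor `eps_model(x_t, t)` and a single fixed noise scale; the paper's exact noising schedule, architecture, and thresholding procedure are not specified here.

```python
import torch

def anomaly_score(eps_model, clip, t, noise_scale=0.5):
    """Score a spatiotemporal clip by one-step diffusion reconstruction error.

    eps_model is assumed to be a trained denoiser; a poorly reconstructed
    (high-error) clip is treated as anomalous.
    """
    noise = torch.randn_like(clip)
    noisy = clip + noise_scale * noise              # simplified forward diffusion
    pred = eps_model(noisy, t)                      # predicted injected noise
    recon = noisy - noise_scale * pred              # crude one-step denoising
    return torch.mean((recon - clip) ** 2).item()   # high error -> abnormal

# usage: flag clips whose score exceeds a threshold set on normal validation data
```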
A robust design of time-varying internal model principle-based control for ultra-precision tracking in a direct-drive servo stage ; This paper proposes a robust design of time-varying internal model principle-based control (TV-IMPC) for tracking sophisticated references generated by linear time-varying (LTV) autonomous systems. The existing TV-IMPC design usually requires complete knowledge of the plant input-output (I/O) model, leading to a lack of structural robustness. To tackle this issue, we design a gray-box extended state observer (ESO) to estimate and compensate for unknown model uncertainties and external disturbances. By means of the ESO feedback, the plant model is kept nominal, and hence structural robustness is achieved for the time-varying internal model. It is shown that the proposed design has bounded ESO estimation errors, which can be further adjusted by modifying the corresponding control gains. To stabilize the ESO-based TV-IMPC, a time-varying stabilizer is developed by employing linear matrix inequalities (LMIs). Extensive simulation and experimental studies are conducted on a direct-drive servo stage to validate the proposed robust TV-IMPC, achieving ultra-precision tracking performance (~60 nm RMSE over a ±80 mm stroke).
SEA: A Scalable Entity Alignment System ; Entity alignment (EA) aims to find equivalent entities in different knowledge graphs (KGs). State-of-the-art EA approaches generally use graph neural networks (GNNs) to encode entities. However, most of them train the models and evaluate the results in a full-batch fashion, which prohibits EA from being scalable to large-scale datasets. To enhance the usability of GNN-based EA models in real-world applications, we present SEA, a scalable entity alignment system that enables users to (i) train large-scale GNNs for EA, (ii) speed up the normalization and evaluation processes, and (iii) report clear results so users can assess different models and parameter settings. SEA can be run on a computer with merely one graphics card. Moreover, SEA encompasses six state-of-the-art EA models and provides access for users to quickly establish and evaluate their own models. Thus, SEA allows users to perform EA without being involved in tedious implementations, such as negative sampling and GPU-accelerated evaluation. With SEA, users can gain a clear view of model performance. In the demonstration, we show that SEA is user-friendly and highly scalable even on computers with limited computational resources.
Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks ; The strength of machine learning models stems from their ability to learn complex function approximations from data; however, this strength also makes training deep neural networks challenging. Notably, complex models tend to memorize the training data, which results in poor regularization performance on test data. Regularization techniques such as L1, L2, and dropout have been proposed to reduce overfitting; however, they bring additional hyperparameter tuning complexity. These methods also fall short when inter-class similarity is high due to the underlying data distribution, leading to a less accurate model. In this paper, we present a novel approach to regularizing models by leveraging information-rich latent embeddings and their high intra-class correlation. We create phantom embeddings from a subset of homogeneous samples and use these phantom embeddings to decrease the inter-class similarity of instances in their latent embedding space. The resulting models generalize better, as the phantom embeddings regularize them without requiring an expensive hyperparameter search. We evaluate our method on two popular and challenging image classification datasets (CIFAR and FashionMNIST) and show that our approach outperforms the standard baselines while displaying better training behavior.
Metrics for Bayesian Optimal Experiment Design under Model Misspecification ; The conventional approach to Bayesian decision-theoretic experiment design involves searching over possible experiments to select a design that maximizes the expected value of a specified utility function. The expectation is over the joint distribution of all unknown variables implied by the statistical model that will be used to analyze the collected data. The utility function defines the objective of the experiment; a common utility function is the information gain. This article introduces an expanded framework for this process, where we go beyond the traditional Expected Information Gain criterion and introduce the Expected General Information Gain, which measures robustness to model discrepancy, and the Expected Discriminatory Information, a criterion that quantifies how well an experiment can detect model discrepancy. The functionality of the framework is showcased through its application to a scenario involving a linearized spring-mass-damper system and an F-16 model, where the model discrepancy is taken into account during Bayesian optimal experiment design.
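For reference, the classical Expected Information Gain that the paper's criteria generalize can be written as follows (this is the standard textbook definition, not a formula taken from the paper itself):

```latex
\mathrm{EIG}(d)
= \mathbb{E}_{p(y \mid d)}\!\left[\, \mathrm{KL}\!\left( p(\theta \mid y, d) \,\|\, p(\theta) \right) \right]
= \mathbb{E}_{p(\theta, y \mid d)}\!\left[ \log p(y \mid \theta, d) - \log p(y \mid d) \right],
```

and the conventional design procedure selects the design d* maximizing EIG(d) over the design space.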
Economic Origins of the Sicilian Mafia: A Simulation Feedback Model ; This chapter develops a feedback economic model that explains the rise of the Sicilian mafia in the 19th century. Grounded in economic theory, the model incorporates causal relationships between mafia activities, predation, law enforcement, and the profitability of local businesses. Using computational experiments with the model, we explore how different factors and feedback effects impact mafia activity levels. The model explains important historical observations, such as the emergence of the mafia in wealthier regions and its absence in poorer districts despite greater levels of banditry.
Holographic inflation in non-static plane symmetric spacetime ; The current analysis uses a non-static plane symmetric spacetime to dynamically examine the holographic dark energy model with two candidate IR cutoffs, specifically the Hubble and Granda-Oliveros cutoffs. Using the Markov chain Monte Carlo (MCMC) method, we estimate the best-fit values of the model parameters from the combined CC + SC + BAO datasets. We find that the spacetime characteristics addressed and formulated in both models correspond to a flat universe, and the models appear to be in good agreement with the observations. In addition, we investigate the behavior of the equation of state parameter along with the energy conditions. Finally, we find that for both cutoffs the models predict that the present and late universe are accelerating, and the equation of state parameter behaves like that of the quintessence model.
Neural networks for geospatial data ; Analysis of geospatial data has traditionally been model-based, with a mean model, customarily specified as a linear regression on the covariates, and a covariance model, encoding the spatial dependence. We relax the strong assumption of linearity and propose embedding neural networks directly within the traditional geostatistical models to accommodate non-linear mean functions while retaining all other advantages, including use of Gaussian processes to explicitly model the spatial covariance, enabling inference on the covariate effect through the mean and on the spatial dependence through the covariance, and offering predictions at new locations via kriging. We propose NN-GLS, a new neural network estimation algorithm for the non-linear mean in GP models that explicitly accounts for the spatial covariance through generalized least squares (GLS), the same loss used in the linear case. We show that NN-GLS admits a representation as a special type of graph neural network (GNN). This connection facilitates the use of standard neural network computational techniques for irregular geospatial data, enabling novel and scalable mini-batching, backpropagation, and kriging schemes. Theoretically, we show that NN-GLS is consistent for irregularly observed, spatially correlated data processes. To our knowledge, this is the first asymptotic consistency result for any neural network algorithm for spatial data. We demonstrate the methodology through simulated and real datasets.
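A minimal sketch of the GLS loss at the heart of this idea, assuming a working inverse of the spatial covariance is available; the actual NN-GLS algorithm relies on nearest-neighbor GP approximations and special mini-batching, which are elided here.

```python
import torch

def gls_loss(y, mean_pred, precision):
    """Generalized least squares loss r^T Q r with r = y - m(X),
    where Q approximates the inverse spatial covariance implied by the GP.
    With Q = I this reduces to ordinary least squares.
    """
    r = y - mean_pred
    return r @ (precision @ r)

# usage sketch (names hypothetical): loss = gls_loss(y, mlp(covariates), Q)
```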
CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pretrained Language Models ; Large pretrained language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology, has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Our proposed few-shot learning approach uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrated that the LLM-based prediction model achieved significant accuracy with very few or zero samples. Our proposed model, CancerGPT with ~124M parameters, was even comparable to the larger fine-tuned GPT-3 model with ~175B parameters. Our research is the first to tackle drug pair synergy prediction in rare tissues with limited data. We are also the first to utilize an LLM-based prediction model for biological reaction prediction tasks.
Static and small-signal modeling of radio-frequency hexagonal boron nitride switches ; A first modeling approximation of the general performance of radio-frequency (RF) switches based on hexagonal boron nitride (hBN), a two-dimensional (2D) dielectric material, is presented. The I-V characteristics, intrinsic and extrinsic impedance parameters, return loss, insertion loss, and isolation of RF 2D switches fabricated with hBN are described here by equivalent circuit models. Straightforward analytical expressions are obtained. In contrast to conventional switches, the unique RF performance of the hBN switch in the ON state, i.e., a direct improvement of the insertion loss with frequency, is accurately described by considering a capacitor in the intrinsic part of the model. The latter is suggested to be related to charge stored during the resistive switching mechanism. The highest mean relative error between modeling and measurements of the return loss is 7.6% with the approach presented here, which improves on the 42.5% difference obtained with a previous model that had an incomplete intrinsic device description.
What does BERT learn about prosody? ; Language models have become nearly ubiquitous in natural language processing applications, achieving state-of-the-art results in many tasks, including prosody. As the model design does not define predetermined linguistic targets during training but rather aims at learning generalized representations of the language, analyzing and interpreting the representations that models implicitly capture is important for bridging the gap between interpretability and model performance. Several studies have explored the linguistic information that models capture, providing some insights into their representational capacity. However, current studies have not explored whether prosody is part of the structural information of the language that models learn. In this work, we perform a series of experiments on BERT, probing the representations captured at different layers. Our results show that information about prosodic prominence spans many layers but is mostly concentrated in the middle layers, suggesting that BERT relies mostly on syntactic and semantic information.
ESimCSE Unsupervised Contrastive Learning Jointly with UDA Semi-Supervised Learning for Large Label System Text Classification Model ; Text classification with large label systems in natural language processing tasks faces challenges that include multiple label systems, uneven data distribution, and high noise. To address these problems, the ESimCSE unsupervised contrastive learning model and the UDA semi-supervised contrastive learning model are combined through joint training. The ESimCSE model efficiently learns text vector representations using unlabeled data to achieve better classification results, while UDA is trained on unlabeled data through semi-supervised learning methods to improve the prediction performance and stability of the model and further improve its generalization ability. In addition, adversarial training techniques (FGM and PGD) are used in the model training process to improve the robustness and reliability of the model. The experimental results show accuracy improvements of 8% and 10% relative to the baseline on the public Reuters dataset and on an operational dataset, respectively, and a 15% improvement in manual validation accuracy on the operational dataset, indicating that the method is effective.
FLEX: an Adaptive Exploration Algorithm for Nonlinear Systems ; Model-based reinforcement learning is a powerful tool, but collecting data to fit an accurate model of the system can be costly. Exploring an unknown environment in a sample-efficient manner is hence of great importance. However, the complexity of dynamics and the computational limitations of real systems make this task challenging. In this work, we introduce FLEX, an exploration algorithm for nonlinear dynamics based on optimal experimental design. Our policy maximizes the information of the next step and results in an adaptive exploration algorithm, compatible with generic parametric learning models and requiring minimal resources. We test our method on a number of nonlinear environments covering different settings, including time-varying dynamics. Keeping in mind that exploration is intended to serve an exploitation objective, we also test our algorithm on downstream model-based classical control tasks and compare it to other state-of-the-art model-based and model-free approaches. The performance achieved by FLEX is competitive and its computational cost is low.
Leveraging Compositional Methods for Modeling and Verification of an Autonomous Taxi System ; We apply a compositional formal modeling and verification method to an autonomous aircraft taxi system. We provide insights into the modeling approach and identify several research areas where further development is needed. Specifically, we identify the following needs: (1) semantics of composition of viewpoints expressed in different specification languages, and tools to reason about heterogeneous declarative models; (2) libraries of formal models for autonomous systems to speed up modeling and enable efficient reasoning; (3) methods to lift verification results generated by automated reasoning tools to the specification level; (4) probabilistic contract frameworks to reason about imperfect implementations; (5) standard high-level functional architectures for autonomous systems; and (6) a theory of higher-order contracts. We believe that addressing these research needs, among others, could improve the adoption of formal methods in the design of autonomous systems, including learning-enabled systems, and increase confidence in their safe operation.
Quantum many-body scars in spin models with multi-body interactions ; We introduce and study several classes of quantum spin models with multi-body interactions that exhibit quantum many-body scars. The models are constructed by two different methods: one exploiting boundary states in integrable spin chains, and the other based on a variant of existing methods such as restricted spectrum generating algebras. The first method allows us to construct deformations of the Majumdar-Ghosh and Affleck-Kennedy-Lieb-Tasaki models, prototypes of frustration-free systems. With the second method, we construct a large class of spin-1 models involving scalar spin chirality in both one and two dimensions. Interestingly, in some cases, the models so constructed have towers of scar states of different character. For each example, we show that the scar states behave differently from thermal states by comparing their spectral and dynamical properties with those of other states. We also show that a superposition of the scar states constructed by the second method exhibits perfectly periodic revivals in the dynamics.
Experimental features of emissions and fuel consumption in a car-following platoon ; The paper investigates the features of emissions and fuel consumption (EFC) in a car-following (CF) platoon based on two experimental datasets. Four classical EFC models are employed, and a universal concave growth pattern of the EFC along a platoon is demonstrated. A general framework coupling EFC and CF models is tested by calibrating and simulating three classical CF models. This work first demonstrates that, at the vehicle-pair level, all models perform well on EFC prediction. The intelligent driver model outperforms the other CF models on calibration accuracy, but not on EFC prediction. Second, at the platoon level, the predicted EFC is nearly constant along the platoon, which qualitatively differs from the experimental observation. The investigation highlights that accurate estimations at the vehicle level may be insufficient for analysis at the platoon level, due to the significant role of oscillation growth and evolution in EFC estimation.
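For concreteness, the intelligent driver model mentioned above computes a vehicle's acceleration from its speed, the gap to the leader, and the speed difference; a textbook implementation (the parameter values are standard defaults, not the calibrated values from the paper):

```python
import math

def idm_accel(v, v_lead, gap, v0=33.3, T=1.6, a_max=0.73, b=1.67, s0=2.0):
    """Intelligent Driver Model acceleration (m/s^2); gap must be > 0."""
    dv = v - v_lead                                            # approach rate
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a_max * b))  # desired gap
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# in the coupled framework, an EFC model then maps each vehicle's simulated
# (speed, acceleration) trajectory to instantaneous fuel consumption and emissions
```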
Machine-Learned Invertible Coarse Graining for Multiscale Molecular Modeling ; Multiscale molecular modeling is widely applied in scientific research of molecular properties over large time and length scales. Two specific challenges are commonly present in multiscale modeling, given that information between the coarse and fine representations of molecules needs to be properly exchanged: one is to construct coarse-grained (CG) models by passing information from the fine to coarse levels; the other is to restore finer molecular details given CG configurations. Although these two problems are commonly addressed independently, in this work, we present a theory connecting them and develop a methodology called Cycle Coarse Graining (CCG) to solve both problems in a unified manner. In CCG, reconstruction can be achieved via a tractable optimization process, leading to a general method to retrieve fine details from CG simulations, which in turn delivers a new solution to the CG problem, yielding an efficient way to calculate free energies in a rare-event-free manner. CCG thus provides a systematic way for multiscale molecular modeling, where the finer details of CG simulations can be efficiently retrieved and the CG models can be improved consistently.
FreeLM: Fine-Tuning-Free Language Model ; Pre-trained language models (PLMs) have achieved remarkable success in NLP tasks. Despite this great success, mainstream solutions largely follow the pre-training-then-fine-tuning paradigm, which brings both high deployment costs and low training efficiency. Nevertheless, fine-tuning on a specific task is essential because PLMs are only pre-trained with the language signal from large raw data. In this paper, we propose a novel fine-tuning-free strategy for language models that considers both the language signal and the teacher signal. The teacher signal is an abstraction of a battery of downstream tasks, provided in a unified proposition format. Trained with both language and strong task-aware teacher signals in an interactive manner, our FreeLM model demonstrates strong generalization and robustness. FreeLM outperforms large models (e.g., GPT-3 and InstructGPT) on a range of language understanding tasks in experiments. FreeLM is much smaller, with 0.3B parameters, compared to 175B in these models.
Explaining Language Models' Predictions with High-Impact Concepts ; The emergence of large-scale pretrained language models has posed unprecedented challenges in deriving explanations of why the model has made some predictions. Stemming from the compositional nature of languages, spurious correlations have further undermined the trustworthiness of NLP systems, leading to unreliable model explanations that are merely correlated with the output predictions. To encourage fairness and transparency, there exists an urgent demand for reliable explanations that allow users to consistently understand the model's behavior. In this work, we propose a complete framework for extending concept-based interpretability methods to NLP. Specifically, we propose a post-hoc interpretability method for extracting predictive high-level features (concepts) from the pretrained model's hidden layer activations. We optimize for features whose existence causes the output predictions to change substantially, i.e., generates a high impact. Moreover, we devise several evaluation metrics that can be universally applied. Extensive experiments on real and synthetic tasks demonstrate that our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines.
Making the Most of What You Have: Adapting Pretrained Visual Language Models in the Low-data Regime ; Large-scale visual language models are widely used as pretrained models and then adapted for various downstream tasks. While humans are known to efficiently learn new tasks from a few examples, deep learning models struggle to adapt from few examples. In this work, we look into task adaptation in the low-data regime and provide a thorough study of the existing adaptation methods for generative visual language models. We show important benefits of self-labelling, i.e., using the model's own predictions to self-improve when having access to a larger number of unlabelled images of the same distribution. Our study demonstrates significant gains using our proposed task adaptation pipeline across a wide range of visual language tasks, such as visual classification (ImageNet), visual captioning (COCO), detailed visual captioning (Localised Narratives), and visual question answering (VQAv2).
How to Choose Pretrained Handwriting Recognition Models for Single Writer Fine-Tuning ; Recent advancements in deep-learning-based handwritten text recognition (HTR) have led to models with remarkable performance on both modern and historical manuscripts in large benchmark datasets. Nonetheless, those models struggle to obtain the same performance when applied to manuscripts with peculiar characteristics, such as language, paper support, ink, and author handwriting. This issue is very relevant for valuable but small collections of documents preserved in historical archives, for which obtaining sufficient annotated training data is costly or, in some cases, unfeasible. To overcome this challenge, a possible solution is to pretrain HTR models on large datasets and then fine-tune them on small single-author collections. In this paper, we take into account large, real benchmark datasets and synthetic ones obtained with a styled handwritten text generation model. Through extensive experimental analysis, also considering the amount of fine-tuning lines, we give a quantitative indication of the most relevant characteristics of such data for obtaining an HTR model able to effectively transcribe manuscripts in small collections with as little as five real fine-tuning lines.
Analyzing Ecological Momentary Assessment Data with State-Space Models: Considerations and Recommendations ; Ecological momentary assessment (EMA) data have a broad base of application in the study of time trends and relations. In EMA studies, there are a number of design considerations that influence the analysis of the data. One general modeling framework is particularly well-suited for these analyses: state-space modeling. Here, we present the state-space modeling framework with recommendations for the considerations that go into modeling EMA data. These recommendations can account for the issues that come up in EMA data analysis, such as idiographic versus nomothetic modeling, missing data, and stationary versus non-stationary data. In addition, we suggest R packages for implementing these recommendations in practice. Overall, well-designed EMA studies offer opportunities for researchers to handle the momentary minutiae in their assessment of psychological phenomena.
Statistical Inference for Fairness Auditing ; Before deploying a black-box model in high-stakes problems, it is important to evaluate the model's performance on sensitive subpopulations. For example, in a recidivism prediction task, we may wish to identify demographic groups for which our prediction model has unacceptably high false positive rates, or certify that no such groups exist. In this paper, we frame this task, often referred to as fairness auditing, in terms of multiple hypothesis testing. We show how the bootstrap can be used to simultaneously bound performance disparities over a collection of groups with statistical guarantees. Our methods can be used to flag subpopulations affected by model underperformance and to certify subpopulations for which the model performs adequately. Crucially, our audit is model-agnostic and applicable to nearly any performance metric or group fairness criterion. Our methods also accommodate extremely rich, even infinite, collections of subpopulations. Further, we generalize beyond subpopulations by showing how to assess performance over certain distribution shifts. We test the proposed methods on benchmark datasets in predictive inference and algorithmic fairness and find that our audits can provide interpretable and trustworthy guarantees.
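A simplified per-group version of the bootstrap idea, bounding each group's false positive rate separately; the paper's actual procedure bounds disparities simultaneously over all groups and supports general metrics, which this sketch does not attempt.

```python
import numpy as np

def bootstrap_group_fpr(y_true, y_pred, groups, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap one-sided upper confidence bounds on per-group FPR.

    y_true, y_pred: 0/1 arrays; groups: array of group labels.
    """
    rng = np.random.default_rng(seed)
    bounds = {}
    for g in np.unique(groups):
        idx = np.where((groups == g) & (y_true == 0))[0]   # negatives in group g
        fprs = [y_pred[rng.choice(idx, size=len(idx))].mean() for _ in range(n_boot)]
        bounds[g] = np.quantile(fprs, 1 - alpha)           # upper bound on FPR
    return bounds

# groups whose bound exceeds a tolerance would be flagged; simultaneous
# (multiple-testing-corrected) bounds are what the paper actually provides
```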
An Adversarial Non-Autoregressive Model for Text Generation with Incomplete Information ; Non-autoregressive models have been widely studied in the complete information scenario (CIS), in which the models have complete input information to obtain the corresponding output. However, their exploration in the incomplete information scenario (IIS) is extremely limited. Our analyses reveal that the IIS's incomplete input information augments the inherent limitations of existing non-autoregressive models trained under maximum likelihood estimation. In this paper, we propose for the IIS an Adversarial Non-autoregressive Transformer (ANT) with two novel features: (1) Position-Aware Self-Modulation, to provide more reasonable hidden representations, and (2) a Dependency Feed-Forward Network, to strengthen its capacity for dependency modeling. We compare ANT with other mainstream models in the IIS and demonstrate that ANT can achieve comparable performance with many fewer decoding iterations. Furthermore, we show its great potential in various applications, such as latent interpolation and semi-supervised learning.
Memory CODA: introducing memory effects in the Continuous Opinions and Discrete Actions model ; The Continuous Opinions and Discrete Actions (CODA) model has been widely used to study the emergence of extremism in social networks. However, this standard model has been shown to generate unrealistically extreme opinions due to the reinforcement among agents. To address this issue, this paper introduces memory effects into the CODA model to explore how the dynamics of opinion formation change. Specifically, each agent is endowed with a memory that stores the previous opinions of its neighbors, which are then utilized to update its own opinion. The paper investigates how incorporating memory affects the strength of choices. We will see that, while memory diminishes opinion strength, the formation of local domains still causes a significant reinforcement effect. However, unlike in the original model, the number of neighbors becomes a relevant variable, suggesting a way to test the results presented in this paper. Keywords: opinion dynamics, CODA, agent-based models, memory, extremism
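For orientation, the standard CODA update works in log-odds: observing a neighbor's discrete choice shifts the agent's internal log-odds by a fixed Bayesian step. In the memory variant sketched here, the single observed action is replaced by a vote over the stored past opinions of the neighborhood; this is an illustrative reading of the mechanism, not the paper's exact equations.

```python
def memory_coda_update(log_odds, neighbor_memory, step=0.5):
    """Update an agent's log-odds given remembered neighbor opinions.

    neighbor_memory: list of +1/-1 opinions previously observed.
    The sign of the memory's majority plays the role of the single
    observed action in the original CODA model (assumed detail).
    """
    vote = 1 if sum(neighbor_memory) >= 0 else -1
    return log_odds + step * vote

# the agent's visible action is sign(log_odds); `step` encodes trust in neighbors
```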
Profile likelihoods for parameters in Gaussian geostatistical models ; Profile likelihoods are rarely used in geostatistical models due to the computational burden imposed by repeated decompositions of large variance matrices. Accounting for uncertainty in covariance parameters can be highly consequential in geostatistical models, as some covariance parameters are poorly identified; the problem is severe enough that the differentiability parameter of the Matérn correlation function is typically treated as fixed. The problem is compounded with anisotropic spatial models, as there are two additional parameters to consider. In this paper, we make the following contributions: (1) a methodology is created for profile likelihoods in Gaussian spatial models with the Matérn family of correlation functions, including anisotropic models; this methodology adopts a novel reparametrization for generation of representative points and uses GPUs for parallel computation of profile likelihoods in the software implementation; (2) we show that the profile likelihood of the Matérn shape parameter is often quite flat but still identifiable, as it can usually rule out very small values; (3) simulation studies and applications to real data examples show that profile-based confidence intervals of covariance parameters and regression parameters have superior coverage to the traditional Wald-type confidence intervals.
FedHB: Hierarchical Bayesian Federated Learning ; We propose a novel hierarchical Bayesian approach to federated learning (FL), where our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling: constituting random variables of local models for clients that are governed by a higher-level global variate. Interestingly, the variational inference in our Bayesian model leads to an optimisation problem whose block-coordinate descent solution becomes a distributed algorithm that is separable over clients and allows them not to reveal their own private data at all, and is thus fully compatible with FL. We also highlight that our block-coordinate algorithm has particular forms that subsume the well-known FL algorithms FedAvg and FedProx as special cases. Beyond introducing novel modeling and derivations, we also offer convergence analysis showing that our block-coordinate FL algorithm converges to a local optimum of the objective at the rate of O(1/√t), the same rate as regular centralised SGD, as well as generalisation error analysis where we prove that the test error of our model on unseen data is guaranteed to vanish as we increase the training data size, and is thus asymptotically optimal.
Twin Sterile Neutrino Dark Matter ; We propose that the dark matter of our universe could be sterile neutrinos residing within the twin sector of a mirror twin Higgs model. In our scenario, these particles are produced through a version of the Dodelson-Widrow mechanism that takes place entirely within the twin sector, yielding a dark matter candidate that is consistent with X-ray and gamma-ray line constraints. Furthermore, this scenario can naturally avoid the cosmological problems that are typically encountered in mirror twin Higgs models. In particular, if the sterile neutrinos in the Standard Model sector decay out of equilibrium, they can heat the Standard Model bath and reduce the contributions of the twin particles to N_eff. Such decays also reduce the effective temperature of the dark matter, thereby relaxing constraints from large-scale structure. The sterile neutrinos included in this model are compatible with the seesaw mechanism for generating Standard Model neutrino masses.
HyHTM: Hyperbolic Geometry-based Hierarchical Topic Models ; Hierarchical topic models (HTMs) are useful for discovering topic hierarchies in a collection of documents. However, traditional HTMs often produce hierarchies where lower-level topics are unrelated and not specific enough to their higher-level topics. Additionally, these methods can be computationally expensive. We present HyHTM, a hyperbolic-geometry-based hierarchical topic model that addresses these limitations by incorporating hierarchical information from hyperbolic geometry to explicitly model hierarchies in topic models. Experimental results with four baselines show that HyHTM can better attend to parent-child relationships among topics. HyHTM produces coherent topic hierarchies that specialise in granularity from generic higher-level topics to specific lower-level topics. Further, our model is significantly faster and leaves a much smaller memory footprint than our best-performing baseline. We have made the source code for our algorithm publicly accessible.
Sequence-to-Sequence Pre-training with Unified Modality Masking for Visual Document Understanding ; This paper presents GenDoc, a general sequence-to-sequence document understanding model pre-trained with unified masking across three modalities: text, image, and layout. The proposed model utilizes an encoder-decoder architecture, which allows for increased adaptability to a wide range of downstream tasks with diverse output formats, in contrast to the encoder-only models commonly employed in document understanding. In addition to the traditional text infilling task used in previous encoder-decoder models, our pre-training extends to include tasks of masked image token prediction and masked layout prediction. We also design modality-specific instructions and adopt both disentangled attention and the mixture-of-modality-experts strategy to effectively capture the information leveraged by each modality. Evaluation of the proposed model through extensive experiments on several downstream tasks in document understanding demonstrates its ability to achieve superior or competitive performance compared to state-of-the-art approaches. Our analysis further suggests that GenDoc is more robust than the encoder-only models in scenarios where the OCR quality is imperfect.
Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data ; Creating synthetic voices with found data is challenging, as real-world recordings often contain various types of audio degradation. One way to address this problem is to pre-enhance the speech with an enhancement model and then use the enhanced data for text-to-speech (TTS) model training. This paper investigates the use of conditional diffusion models for generalized speech enhancement, which aims at addressing multiple types of audio degradation simultaneously. The enhancement is performed on the log Mel-spectrogram domain to align with the TTS training objective. Text information is introduced as an additional condition to improve the model's robustness. Experiments on real-world recordings demonstrate that a synthetic voice built on data enhanced by the proposed model produces higher-quality synthetic speech, compared to those trained on data enhanced by strong baselines. Code and pretrained parameters of the proposed enhancement model are available at https://github.com/dmse4tts/DMSE4TTS
MR-IDM (Merge Reactive Intelligent Driver Model): Towards Enhancing Laterally Aware Car-following Models ; This paper discusses the limitations of existing microscopic traffic models in accounting for the potential impacts of on-ramp vehicles on the car-following behavior of main-lane vehicles on highways. We first surveyed U.S. on-ramps to choose a representative set and then collected real-world observational data from the merging vehicle's perspective in various traffic conditions, ranging from free-flowing to rush-hour traffic jams. Next, as our core contribution, we introduce a novel car-following model, called MR-IDM, for highway driving that reacts to merging vehicles in a realistic way. The proposed driving model can either be used in traffic simulators to generate realistic highway driving behavior or integrated into a prediction module for autonomous vehicles attempting to merge onto the highway. We quantitatively evaluated the effectiveness of our model and compared it against several other methods. We show that MR-IDM has the least error in mimicking the real-world data, while having features such as smoothness, stability, and lateral awareness.
Non-Autoregressive Document-Level Machine Translation (NA-DMT): Exploring Effective Approaches, Challenges, and Opportunities ; Non-autoregressive translation (NAT) models have been extensively investigated within the context of sentence-level machine translation (MT) tasks, demonstrating comparable quality and superior translation speed when contrasted with autoregressive translation (AT) models. However, the challenges associated with multi-modality and alignment issues within NAT models become more prominent as input and output lengths increase, leading to unexpected complications in document-level MT. In this paper, we conduct a comprehensive examination of typical NAT models in the context of document-level MT tasks. Experiments reveal that, although NAT models significantly accelerate text generation on documents, they do not perform as effectively as on sentences. To bridge this performance gap, we introduce a novel design that underscores the importance of sentence-level alignment for non-autoregressive document-level machine translation (NA-DMT). This innovation substantially reduces the performance discrepancy. However, it is worth noting that NA-DMT models are still far from perfect and may necessitate additional research to fully optimize their performance. We delve into the related opportunities and challenges and provide our code at https://github.com/baoguangsheng/nat-on-doc to stimulate further research in this field.
Should We Attend More or Less? Modulating Attention for Fairness ; The abundance of annotated data in natural language processing (NLP) poses both opportunities and challenges. While it enables the development of high-performing models for a variety of tasks, it also poses the risk of models learning harmful biases from the data, such as gender stereotypes. In this work, we investigate the role of attention, a widely used technique in current state-of-the-art NLP models, in the propagation of social biases. Specifically, we study the relationship between the entropy of the attention distribution and the model's performance and fairness. We then propose a novel method for modulating attention weights to improve model fairness after training. Since our method is only applied post-training and pre-inference, it is an intra-processing method and is, therefore, less computationally expensive than existing in-processing and pre-processing approaches. Our results show an increase in fairness and minimal performance loss on different text classification and generation tasks using language models of varying sizes. WARNING: This work uses language that is offensive.
Nonparametric, Nearest-neighbor-assisted Fine-tuning for Neural Machine Translation ; Nonparametric, k-nearest-neighbor algorithms have recently made inroads to assist generative models such as language models and machine translation decoders. We explore whether such nonparametric models can improve machine translation models at the fine-tuning stage by incorporating statistics from the kNN predictions to inform the gradient updates for a baseline translation model. There are multiple methods that could be used to incorporate kNN statistics, and we investigate gradient scaling by a gating mechanism, by the kNN's ground-truth probability, and by reinforcement learning. For four standard in-domain machine translation datasets, compared with classic fine-tuning, we report consistent improvements from all three methods, by as much as 1.45 BLEU and 1.28 BLEU for German-English and English-German translations, respectively. Through qualitative analysis, we found particular improvements when it comes to translating grammatical relations or function words, which results in increased fluency of our model.
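A sketch of one of the three strategies named above, scaling the token-level loss by the kNN module's probability of the ground-truth token; the weighting form is illustrative, not the authors' exact formulation, and the gating and reinforcement-learning variants are not shown.

```python
import torch
import torch.nn.functional as F

def knn_weighted_nll(logits, targets, knn_target_prob, scale=1.0):
    """Token-level NLL with gradients scaled by kNN ground-truth probability.

    logits: (N, vocab); targets: (N,); knn_target_prob: (N,) probability the
    kNN retriever assigns to each ground-truth token.
    """
    nll = F.cross_entropy(logits, targets, reduction="none")
    weights = 1.0 + scale * knn_target_prob          # assumed weighting form
    return (weights.detach() * nll).mean()           # weights do not get gradients
```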
Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models ; Language models have graduated from being research prototypes to commercialized products offered as web APIs, and recent works have highlighted the multilingual capabilities of these products. The API vendors charge their users based on usage, more specifically on the number of "tokens" processed or generated by the underlying language models. What constitutes a token, however, is training-data and model dependent, with large variance in the number of tokens required to convey the same information in different languages. In this work, we analyze the effect of this non-uniformity on the fairness of an API's pricing policy across languages. We conduct a systematic analysis of the cost and utility of OpenAI's language model API on multilingual benchmarks in 22 typologically diverse languages. We show evidence that speakers of a large number of the supported languages are overcharged while obtaining poorer results. These speakers tend to also come from regions where the APIs are less affordable to begin with. Through these analyses, we aim to increase transparency around language model APIs' pricing policies and encourage vendors to make them more equitable.
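The disparity is easy to observe directly with an open-source tokenizer. A small sketch using tiktoken (the sample sentences are illustrative; the paper's analysis covers 22 languages and actual API prices):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by recent OpenAI models

samples = {
    "English": "All human beings are born free and equal in dignity and rights.",
    "Greek": "Όλοι οι άνθρωποι γεννιούνται ελεύθεροι και ίσοι στην αξιοπρέπεια και τα δικαιώματα.",
}
for lang, text in samples.items():
    # users are billed per token, so the same content can cost several times
    # more in languages the tokenizer segments into many short pieces
    print(lang, len(enc.encode(text)))
```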
PaD: Program-aided Distillation Specializes Large Models in Reasoning ; While large language models (LLMs) excel in several natural language processing tasks, their size and inaccessibility present challenges for extensive practical application. Previous studies have acquired specialized skills through distillation from LLMs, at the cost of generic abilities, an effect called model specialization. For reasoning ability, chain-of-thought has been synthesized for subsequent distillation. However, due to hallucination, synthetic chain-of-thought from LLMs contains faulty reasoning. These incorrect reasoning steps damage the reasoning capability. To tackle the above issues, we propose Program-aided Distillation (PaD), which distills LLMs to obtain specialized small models for reasoning tasks. In PaD, we strengthen specialized models with program-aided reasoning and help them overcome faulty reasoning steps with automated error checking. Experimental results demonstrate that, on the GSM8K benchmark, a 0.06B model using PaD can not only outperform certain LLMs (e.g., LLaMA) but also achieve a 10% improvement over baselines with a significantly smaller scale of parameters and data. Data pruning analysis reveals that PaD possesses higher training efficiency.
RLBoost: Boosting Supervised Models using Deep Reinforcement Learning ; Data quality evaluation is sometimes a task as important as collecting a large volume of data when it comes to building accurate artificial intelligence models. In fact, being able to evaluate the data can lead to a larger database better suited to a particular problem, because we gain the ability to filter out automatically obtained data of dubious quality. In this paper we present RLBoost, an algorithm that uses deep reinforcement learning strategies to evaluate a particular dataset and obtain a model capable of estimating the quality of any new data, in order to improve the final predictive quality of a supervised learning model. This solution has the advantage of being agnostic to the supervised model used and, through multi-attention strategies, takes into account the data in its context rather than only individually. The results of the article show that this model obtains better and more stable results than other state-of-the-art algorithms such as LOO, DataShapley, and DVRL.
Co-Learning Empirical Games and World Models ; Game-based decision-making involves reasoning over both world dynamics and strategic interactions among the agents. Typically, empirical models capturing these respective aspects are learned and used separately. We investigate the potential gain from co-learning these elements: a world model for dynamics and an empirical game for strategic interactions. Empirical games drive world models toward a broader consideration of possible game dynamics induced by a diversity of strategy profiles. Conversely, world models guide empirical games to efficiently discover new strategies through planning. We demonstrate these benefits first independently, then in combination as realized by a new algorithm, Dyna-PSRO, that co-learns an empirical game and a world model. When compared to PSRO, a baseline empirical-game-building algorithm, Dyna-PSRO is found to compute lower-regret solutions on partially observable general-sum games. In our experiments, Dyna-PSRO also requires substantially fewer experiences than PSRO, a key algorithmic advantage for settings where collecting player-game interaction data is a cost-limiting factor.
Emergent inabilities? Inverse scaling over the course of pretraining ; Does inverse scaling only occur as a function of model parameter size, or can it also occur over the course of training? We carry out an exploratory study investigating whether, over the course of training on the language modeling task, the performance of language models at specific tasks can decrease while general performance remains high. We find that for two tasks from the Inverse Scaling Challenge, quote-repetition and redefine-math, this is indeed the case. Specifically, we find that for Pythia (Biderman et al., 2023) models with a higher number of parameters, performance decreases over the course of training on these two tasks, despite these models showing standard positive scaling overall. This highlights the importance of testing model performance on all relevant benchmarks any time models are trained on additional data, even if their overall performance improves.
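This kind of analysis is possible because Pythia publishes intermediate training checkpoints as git revisions on the Hugging Face Hub. A sketch of the evaluation loop (the task evaluation itself is elided, and the checkpoint steps shown are just examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-1.4b"
tokenizer = AutoTokenizer.from_pretrained(name)

# intermediate checkpoints are published as revisions "step1000", "step2000", ...
for step in [1000, 36000, 143000]:
    model = AutoModelForCausalLM.from_pretrained(name, revision=f"step{step}")
    # evaluate here on quote-repetition / redefine-math prompts and record task
    # accuracy alongside overall language-modeling loss for each checkpoint
```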
Exploiting Correlations Between Contexts and Definitions with Multiple Definition Modeling ; Definition modeling is an important task in advanced natural language applications such as understanding and conversation. Since its introduction, the task has focused on generating one definition for a target word or phrase in a given context, which we refer to as Single Definition Modeling (SDM). However, this approach does not adequately model the correlations and patterns among different contexts and definitions of words. In addition, the creation of a training dataset for SDM requires significant human expertise and effort. In this paper, we carefully design a new task called Multiple Definition Modeling (MDM) that pools together all contexts and definitions of target words. We demonstrate the ease of automatically creating a model as well as multiple training sets. In the experiments, we demonstrate and analyze the benefits of MDM, including improving SDM's performance by using MDM as a pre-training task, and its comparable performance in the zero-shot setting.
PLCMOS: a data-driven non-intrusive metric for the evaluation of packet loss concealment algorithms ; Speech quality assessment is a problem for every researcher working on models that produce or process speech. Human subjective ratings, the gold standard in speech quality assessment, are expensive and time-consuming to acquire in quantities sufficient for reliable data, while automated objective metrics show low correlation with gold-standard ratings. This paper presents PLCMOS, a non-intrusive, data-driven tool for generating a robust, accurate estimate of the mean opinion score a human rater would assign to an audio file that has been processed by being transmitted over a degraded packet-switched network, with missing packets being healed by a packet loss concealment algorithm. Our new model shows a model-wise Pearson's correlation of 0.97 and rank correlation of 0.95 with human ratings, substantially above all other available intrusive and non-intrusive metrics. The model is released as an ONNX model for other researchers to use when building PLC systems.
LMs with a Voice: Spoken Language Modeling beyond Speech Tokens ; We present SPECTRON, a novel approach to adapting pre-trained language models (LMs) to perform speech continuation. By leveraging pre-trained speech encoders, our model generates both text and speech outputs, with the entire system trained end-to-end operating directly on spectrograms. Training the entire model in the spectrogram domain simplifies our speech continuation system compared to existing cascade methods that use discrete speech representations. We further show our method surpasses existing spoken language models both in semantic content and speaker preservation, while also benefiting from the knowledge transferred from pre-existing models. Audio samples can be found on our website: https://michelleramanovich.github.io/spectron/spectron
Gorilla: Large Language Model Connected with Massive APIs ; Large language models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today's state-of-the-art LLMs such as GPT-4, largely due to their inability to generate accurate input arguments and their tendency to hallucinate the wrong usage of an API call. We release Gorilla, a fine-tuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls. When combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, enabling flexible user updates or version changes. It also substantially mitigates the issue of hallucination, commonly encountered when prompting LLMs directly. To evaluate the model's ability, we introduce APIBench, a comprehensive dataset consisting of HuggingFace, TorchHub, and TensorHub APIs. The successful integration of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently updated documentation, and consequently increase the reliability and applicability of their outputs. Gorilla's code, model, data, and demo are available at https://gorilla.cs.berkeley.edu
Large population limit for a multilayer SIR model including households and workplaces ; We study a multilayer SIR model with two levels of mixing, namely a global level that is uniformly mixing, and a local level with two layers distinguishing household and workplace contacts, respectively. We establish the large population convergence of the corresponding stochastic process. For this purpose, we use an individual-based model whose state space explicitly takes into account the duration of infectious periods. This allows us to deal with the natural correlation of the epidemic states of individuals whose household and workplace share a common infected member. In a general setting where a non-exponential distribution of infectious periods may be considered, convergence to the unique deterministic solution of a measure-valued equation is obtained. In the particular case of exponentially distributed infectious periods, we show that it is possible to further reduce the obtained deterministic limit, leading to a closed, finite-dimensional dynamical system capturing the epidemic dynamics. This model reduction is subsequently studied from a numerical point of view. We illustrate that the dynamical system derived from the large population approximation is a pertinent model reduction when compared to simulations of the stochastic process or to an alternative edge-based compartmental model, both in terms of accuracy and computational cost.
Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model ; In this study, we developed an automated short answer grading (ASAG) model that provides both analytic scores and final holistic scores. Short answer items typically consist of multiple sub-questions, and providing an analytic score and the text span relevant to each sub-question can increase the interpretability of the automated scores. Furthermore, they can be used to generate actionable feedback for students. Despite these advantages, most studies have focused on predicting only holistic scores due to the difficulty of constructing datasets with manual annotations. To address this difficulty, we used large language model (LLM)-based one-shot prompting and a text similarity scoring model with domain adaptation using a small manually annotated dataset. The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a subset of the publicly available ASAG dataset. The model achieved a substantial improvement over the majority baseline.
Plug-in Performative Optimization ; When predictions are performative, the choice of which predictor to deploy influences the distribution of future observations. The overarching goal in learning under performativity is to find a predictor that has low performative risk, that is, good performance on its induced distribution. One family of solutions for optimizing the performative risk, including bandits and other derivative-free methods, is agnostic to any structure in the performative feedback, leading to exceedingly slow convergence rates. A complementary family of solutions makes use of explicit models for the feedback, such as best-response models in strategic classification, enabling significantly faster rates. However, these rates critically rely on the feedback model being well-specified. In this work we initiate a study of the use of possibly misspecified models in performative prediction. We study a general protocol for making use of models, called plug-in performative optimization, and prove bounds on its excess risk. We show that plug-in performative optimization can be far more efficient than model-agnostic strategies, as long as the misspecification is not too extreme. Altogether, our results support the hypothesis that models, even if misspecified, can indeed help with learning in performative settings.
MiniSUPERB: Lightweight Benchmark for Self-supervised Speech Models ; Self-supervised learning (SSL) is a popular research topic in speech processing. Successful SSL speech models must generalize well. SUPERB was proposed to evaluate the ability of SSL speech models across many speech tasks. However, due to the diversity of tasks, the evaluation process requires huge computational costs. We present MiniSUPERB, a lightweight benchmark that efficiently evaluates SSL speech models with results comparable to SUPERB while greatly reducing the computational cost. We select representative tasks, sample datasets, and extract model representations offline, achieving 0.954 and 0.982 Spearman's rank correlation with SUPERB Paper and SUPERB Challenge, respectively. Meanwhile, the computational cost is reduced by 97% in terms of MACs (number of multiply-accumulate operations) on the tasks we choose. To the best of our knowledge, this is the first study to examine not only the computational cost of a model itself but also the cost of evaluating it on a benchmark.
Efficient Training of Energy-Based Models Using Jarzynski Equality ; Energy-based models (EBMs) are generative models inspired by statistical physics with a wide range of applications in unsupervised learning. Their performance is best measured by the cross-entropy (CE) of the model distribution relative to the data distribution. Using the CE as the objective for training is, however, challenging because the computation of its gradient with respect to the model parameters requires sampling the model distribution. Here we show how results from nonequilibrium thermodynamics based on the Jarzynski equality, together with tools from sequential Monte Carlo sampling, can be used to perform this computation efficiently and avoid the uncontrolled approximations made using the standard contrastive divergence algorithm. Specifically, we introduce a modification of the unadjusted Langevin algorithm (ULA) in which each walker acquires a weight that enables the estimation of the gradient of the cross-entropy at any step during gradient descent, thereby bypassing sampling biases induced by slow mixing of ULA. We illustrate these results with numerical experiments on Gaussian mixture distributions as well as the MNIST dataset. We show that the proposed approach outperforms methods based on the contrastive divergence algorithm in all the considered situations.
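A minimal sketch of the weighted-walker idea, assuming an energy network `energy(x)`; the paper's exact weight update and any resampling schedule may differ from this simplified version.

```python
import torch

def ula_step(x, energy, h=1e-2):
    """One unadjusted Langevin step targeting exp(-energy(x))."""
    x = x.detach().requires_grad_(True)
    g = torch.autograd.grad(energy(x).sum(), x)[0]
    return (x - h * g + (2 * h) ** 0.5 * torch.randn_like(x)).detach()

def jarzynski_reweight(log_w, x, energy_new, energy_old):
    """After a parameter update, each walker absorbs the energy change so the
    weighted ensemble keeps tracking the *current* model distribution
    (simplified Jarzynski-style bookkeeping)."""
    with torch.no_grad():
        return log_w + energy_old(x) - energy_new(x)

# the CE gradient is then estimated with normalized weights softmax(log_w)
# over walkers, in place of unweighted contrastive-divergence samples
```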
Understanding and Mitigating Copying in Diffusion Models ; Images generated by diffusion models like Stable Diffusion are increasingly widespread. Recent works and even lawsuits have shown that these models are prone to replicating their training data, unbeknownst to the user. In this paper, we first analyze this memorization problem in text-to-image diffusion models. While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, we observe that the text conditioning of the model plays a similarly important role. In fact, we see in our experiments that data replication often does not happen for unconditional models, while it is common in the text-conditional case. Motivated by our findings, we then propose several techniques for reducing data replication at both training and inference time by randomizing and augmenting image captions in the training set.
Assessing the Generalizability of a Performance Predictive Model ; A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a good-performing predictive model. The predictive model uses the feature representation of a set of problem instances as input data and predicts the algorithm performance achieved on them. Common machine learning models struggle to make predictions for instances with feature representations not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow to estimate the generalizability of a predictive model for algorithm performance, trained on one benchmark suite, to another. The workflow has been tested by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
Discrete q-exponential limit order cancellation time distribution ; Identifying the best possible models based on given empirical data of observed time series is challenging. The financial markets provide us with vast empirical data, but the best model selection is still problematic for researchers. The widely used long-range memory and self-similarity estimators give varying values of the parameters, as these estimators are developed for specific time series models. Previously, we investigated the order disbalance time series from the general fractional Lévy stable motion perspective and discovered the stable anti-correlation in the order flow of financial markets. Nevertheless, a more detailed consideration of empirical data suggests we construct a more specific order flow model based on the power-law distribution of limit order cancellation times. In the event time consideration, the limit order cancellation times follow the discrete probability mass function derived from the Tsallis q-exponential distribution. The power-law distribution of the limit order volumes and power-law cancellation times form the new approach to modeling order disbalance in the financial markets. The proposed modeling can serve as an example of opinion dynamics in social systems.
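For concreteness, here is a small sketch of the ingredient named above: a discrete PMF obtained from the Tsallis q-exponential, P(k) ∝ exp_q(-k/λ), whose tail decays as a power law for q > 1. The parameter values are illustrative, not fitted to market data.

```python
# A small sketch (not the paper's exact estimator) of a discrete PMF derived
# from the Tsallis q-exponential for limit order cancellation times in event time.
import numpy as np

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

def cancellation_pmf(kmax, q=1.5, lam=10.0):
    """Discrete, normalized q-exponential PMF over k = 1..kmax."""
    k = np.arange(1, kmax + 1)
    w = exp_q(-k / lam, q)
    return k, w / w.sum()

k, p = cancellation_pmf(1000)
# For q > 1 the tail decays as a power law with exponent 1/(1-q):
print("tail exponent ~", 1.0 / (1.0 - 1.5))  # -2.0
```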
Exposing Attention Glitches with Flip-Flop Language Modeling ; Why do large language models sometimes output factual inaccuracies and exhibit erroneous reasoning? The brittleness of these models, particularly when executing long chains of reasoning, currently seems to be an inevitable price to pay for their advanced capabilities of coherently synthesizing knowledge, pragmatics, and abstract thought. Towards making sense of this fundamentally unsolved problem, this work identifies and analyzes the phenomenon of attention glitches, in which the Transformer architecture's inductive biases intermittently fail to capture robust reasoning. To isolate the issue, we introduce flip-flop language modeling (FFLM), a parametric family of synthetic benchmarks designed to probe the extrapolative behavior of neural language models. This simple generative task requires a model to copy binary symbols over long-range dependencies, ignoring the tokens in between. We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques. Our preliminary mechanistic analyses show why the remaining errors may be very difficult to diagnose and resolve. We hypothesize that attention glitches account for some of the closed-domain hallucinations in natural LLMs.
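A minimal generator for an FFLM-style task might look as follows; the instruction vocabulary (write/read/ignore) and the sparsity parameter are assumptions in the spirit of the description, not the benchmark's exact specification.

```python
# A minimal sketch of a flip-flop language modeling (FFLM)-style synthetic
# task: the model must report the most recently *written* bit at each read,
# ignoring everything in between.
import random

def make_fflm_string(length=64, p_ignore=0.8, seed=0):
    rng = random.Random(seed)
    tokens, memory = [], "0"
    p_rw = (1 - p_ignore) / 2
    for _ in range(length):
        op = rng.choices(["w", "r", "i"], weights=[p_rw, p_rw, p_ignore])[0]
        bit = rng.choice("01")
        if op == "w":
            memory = bit
            tokens += ["w", bit]
        elif op == "r":
            tokens += ["r", memory]   # correct continuation: last written bit
        else:
            tokens += ["i", bit]      # distractor to be ignored
    return " ".join(tokens)

print(make_fflm_string(length=12))
```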
Comparative Study on the Effects of Noise in ML-Based Anxiety Detection ; Wearable health devices are ushering in a new age of continuous and non-invasive remote monitoring. One application of this technology is anxiety detection. Many advancements in anxiety detection have happened in controlled lab settings, but noise prevents these advancements from generalizing to real-world conditions. We seek to progress the field by studying how noise impacts model performance and by developing models that are robust to noisy, real-world conditions and, hence, attuned to the commotion of everyday life. In this study, we investigate why and how previous methods have failed. Using the wearable stress and affect detection (WESAD) dataset, we compare the effect of various intensities of noise on machine learning models classifying levels of physiological arousal in the three-class classification problem: baseline vs. stress vs. amusement. Before introducing noise, our baseline model performance reaches 98.7%, compared to the 80.3% reported by Schmidt et al. (2018). We discuss potential sources of this discrepancy through a careful evaluation of feature extraction and model architecture choices. Finally, after introducing noise, we provide a thorough analysis of its effect on each model architecture.
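The core experimental loop can be sketched under stated assumptions (synthetic stand-in features rather than the actual WESAD pipeline): train on clean data, then sweep the intensity of additive Gaussian noise on the test set and record accuracy.

```python
# A hedged sketch (not the paper's pipeline) of the core experiment: train a
# classifier on clean features, then measure accuracy as additive Gaussian
# noise of increasing intensity corrupts the test signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 16))                  # stand-in physiological features
y = rng.integers(0, 3, size=3000)                # baseline / stress / amusement
X[:, 0] = X[:, 0] + y                            # inject class signal (learnable)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
for sigma in [0.0, 0.25, 0.5, 1.0, 2.0]:         # noise intensities
    noisy = Xte + rng.normal(0.0, sigma, size=Xte.shape)
    print(f"sigma={sigma:4.2f}  acc={clf.score(noisy, yte):.3f}")
```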
The RefinedWeb Dataset for Falcon LLM Outperforming Curated Corpora with Web Data, and Web Data Only ; Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable curation is and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models, even significantly outperforming state-of-the-art models trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our RefinedWeb dataset, and 1.3B- and 7.5B-parameter language models trained on it.
Optimal neighbourhood selection in structural equation models ; We study the optimal sample complexity of neighbourhood selection in linear structural equation models, and compare this to best subset selection (BSS) for linear models under general design. We show by example that even when the structure is unknown, the existence of underlying structure can reduce the sample complexity of neighbourhood selection. This result is complicated by the possibility of path cancellation, which we study in detail, and we show that improvements are still possible in the presence of path cancellation. Finally, we support these theoretical observations with experiments. The proof introduces a modified BSS estimator, called klBSS, and compares its performance to BSS. The analysis of klBSS may also be of independent interest since it applies to arbitrary structured models, not necessarily those induced by a structural equation model. Our results have implications for structure learning in graphical models, which often relies on neighbourhood selection as a subroutine.
Vid2Act Activate Offline Videos for Visual RL ; Pre-training RL models on offline video datasets is a promising way to improve their training efficiency in online tasks, but is challenging due to the inherent mismatch in tasks, dynamics, and behaviors across domains. A recent model, APV, sidesteps the accompanying action records in offline datasets and instead focuses on pre-training a task-irrelevant, action-free world model within the source domains. We present Vid2Act, a model-based RL method that learns to transfer valuable action-conditioned dynamics and potentially useful action demonstrations from offline to online settings. The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the domain relevance for both dynamics representation transfer and policy transfer. Specifically, we train the world models to generate a set of time-varying task similarities using a domain-selective knowledge distillation loss. These similarities serve two purposes: (i) adaptively transferring the most useful source knowledge to facilitate dynamics learning, and (ii) learning to replay the most relevant source actions to guide the target policy. We demonstrate the advantages of Vid2Act over the action-free visual RL pre-training method in both Meta-World and DeepMind Control Suite.
Unleashing Mask Explore the Intrinsic Out-of-Distribution Detection Capability ; Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip the models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we find that a model trained on in-distribution (ID) data generally passes through an intermediate stage with higher OOD detection performance than its final stage across different settings, and we further identify one critical data-level attribution: learning with atypical samples. Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data. Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at https://github.com/tmlr-group/Unleashing-Mask.
Conditional Diffusion Models for Weakly Supervised Medical Image Segmentation ; Recent advances in denoising diffusion probabilistic models have shown great success in image synthesis tasks. While there are already works exploring the potential of this powerful tool in image semantic segmentation, its application in weakly supervised semantic segmentation (WSSS) remains relatively under-explored. Observing that conditional diffusion models (CDMs) are capable of generating images subject to specific distributions, in this work we utilize the category-aware semantic information underlying CDMs to obtain the prediction mask of the target object with only image-level annotations. More specifically, we locate the desired class by approximating the derivative of the output of the CDM w.r.t. the input condition. Our method differs from previous diffusion model methods guided by an external classifier, which accumulate noise in the background during the reconstruction process. Our method outperforms state-of-the-art CAM and diffusion model methods on two public medical image segmentation datasets, which demonstrates that CDMs are a promising tool in WSSS. Also, experiments show our method is more time-efficient than existing diffusion model methods, making it practical for wider applications.
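A hedged sketch of the localization idea follows: approximate the derivative of the CDM output with respect to the condition by a finite difference between a class-conditioned and a null-conditioned denoising pass, then threshold the resulting map. The toy denoiser stands in for a trained network; this is not the paper's implementation.

```python
# A hedged sketch of the core localization idea (not the paper's code).
import torch

def denoiser(x_t, t, cond):                    # toy stand-in network
    return x_t * cond.view(-1, 1, 1, 1)

@torch.no_grad()
def condition_saliency(x_t, t, cond, null_cond):
    """Where the class condition changes the predicted noise the most,
    the target object is most likely located."""
    diff = (denoiser(x_t, t, cond) - denoiser(x_t, t, null_cond)).abs()
    diff = diff.mean(dim=1)                    # (B, H, W) saliency map
    diff = (diff - diff.amin()) / (diff.amax() - diff.amin() + 1e-8)
    return (diff > 0.5).float()                # crude prediction mask

x_t = torch.randn(2, 3, 32, 32)
mask = condition_saliency(x_t, t=torch.tensor([500]),
                          cond=torch.ones(2), null_cond=torch.zeros(2))
print(mask.shape)  # torch.Size([2, 32, 32])
```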
Guiding The Last Layer in Federated Learning with Pre-Trained Models ; Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data. Recent works have begun to consider the effects of using pre-trained models as an initialization point for existing FL algorithms; however, these approaches ignore the vast body of efficient transfer learning literature from the centralized learning setting. Here we revisit the problem of FL from a pre-trained model considered in prior work and expand it to a set of computer vision transfer learning problems. We first observe that simply fitting a linear classification head can be efficient and effective in many cases. We then show that in the FL setting, fitting a classifier using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals, while obtaining strong performance. Finally, we demonstrate that using a two-phase approach of obtaining the classifier and then fine-tuning the model can yield rapid convergence and improved generalization in the federated setting. We demonstrate the potential our method has to reduce communication and compute costs while achieving better model performance.
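The exactness claim for federated NCM is easy to see in code: class means are linear statistics, so clients can share per-class sums and counts and the server recovers the global means in a single round. The sketch below is a minimal illustration, not the paper's implementation.

```python
# A minimal sketch of fitting a Nearest Class Means (NCM) classifier exactly
# in a federated setting: each client shares only per-class feature sums and
# counts, which the server aggregates into global class means.
import numpy as np

def client_stats(features, labels, num_classes):
    """Per-class sum and count computed locally (no raw data shared)."""
    d = features.shape[1]
    sums, counts = np.zeros((num_classes, d)), np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        sums[c] = features[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts

def server_aggregate(all_stats):
    sums = sum(s for s, _ in all_stats)
    counts = sum(c for _, c in all_stats)
    return sums / np.maximum(counts, 1)[:, None]   # exact global class means

def ncm_predict(means, x):
    return np.argmin(((x[:, None, :] - means[None]) ** 2).sum(-1), axis=1)

# toy usage with two clients
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.integers(0, 3, 50)) for _ in range(2)]
stats = [client_stats(f, l, 3) for f, l in clients]
means = server_aggregate(stats)
print(ncm_predict(means, rng.normal(size=(4, 8))))
```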
Effective Neural Topic Modeling with Embedding Clustering Regularization ; Topic models have been prevalent for decades with various applications. However, existing topic models commonly suffer from the notorious topic collapsing problem: discovered topics semantically collapse towards each other, leading to highly repetitive topics, insufficient topic discovery, and damaged model interpretability. In this paper, we propose a new neural topic model, the Embedding Clustering Regularization Topic Model (ECRTM). Besides the existing reconstruction error, we propose a novel Embedding Clustering Regularization (ECR), which forces each topic embedding to be the center of a separately aggregated word embedding cluster in the semantic space. This enables each produced topic to contain distinct word semantics, which alleviates topic collapsing. Regularized by ECR, our ECRTM generates diverse and coherent topics together with high-quality topic distributions of documents. Extensive experiments on benchmark datasets demonstrate that ECRTM effectively addresses the topic collapsing issue and consistently surpasses state-of-the-art baselines in terms of topic quality, topic distributions of documents, and downstream classification tasks.
A Threat Model for Soft Privacy on Smart Cars ; Modern cars are getting so computerised that ENISA's phrase "smart cars" is a perfect fit. The amount of personal data that they process is very large and, yet, increasing. Hence the need to address citizens' privacy while they drive and, correspondingly, the importance of privacy threat modelling in support of a respective risk assessment, such as through a Data Protection Impact Assessment. This paper addresses privacy threats by advancing a general modelling methodology and by demonstrating it specifically on soft privacy, which ensures citizens' full control over their personal data. By considering all relevant threat agents, the paper applies the methodology to the automotive domain while keeping threats at the same level of detail as ENISA's. The main result, besides the modelling methodology, consists of both domain-independent and automotive domain-dependent soft privacy threats. While cybersecurity has been vastly threat-modelled so far, this paper extends the literature with a threat model for soft privacy on smart cars, producing 17 domain-independent threats that, associated with 41 domain-specific assets, shape a novel set of domain-dependent threats in automotive.
Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization ; End-to-end speech summarization (E2E SSum) directly summarizes input speech into easy-to-read short sentences with a single model. This approach is promising because, in contrast to the conventional cascade approach, it can utilize full acoustic information and mitigate the propagation of transcription errors. However, due to the high cost of collecting speech-summary pairs, an E2E SSum model tends to suffer from training data scarcity and output unnatural sentences. To overcome this drawback, we propose for the first time to integrate a pre-trained language model (LM), which is highly capable of generating natural sentences, into the E2E SSum decoder via transfer learning. In addition, to reduce the gap between the independently pre-trained encoder and decoder, we also propose to transfer the baseline E2E SSum encoder instead of the commonly used automatic speech recognition encoder. Experimental results show that the proposed model outperforms the baseline and data-augmented models.
MobileNMT Enabling Translation in 15MB and 30ms ; Deploying NMT models on mobile devices is essential for privacy, low latency, and offline scenarios. Given their high capacity, NMT models are rather large, and running them on devices is challenging under limited storage, memory, computation, and power consumption. Existing work either focuses only on a single metric such as FLOPs or on a general engine that is not good at auto-regressive decoding. In this paper, we present MobileNMT, a system that can translate in 15MB and 30ms on devices. We propose a series of principles for model compression combined with quantization. Further, we implement an engine that is friendly to INT8 and decoding. With the co-design of model and engine, compared with the existing system, we achieve a 47.0x speedup and save 99.5% of memory with only an 11.6% loss of BLEU. The code is publicly available at https://github.com/zjersey/Lightseq-ARM.
Unlocking Foundation Models for Privacy-Enhancing Speech Understanding An Early Study on Low Resource Speech Training Leveraging Label-guided Synthetic Speech Content ; Automatic Speech Understanding (ASU) leverages the power of deep learning models for accurate interpretation of human speech, leading to a wide range of speech applications that enrich the human experience. However, training a robust ASU model requires the curation of a large number of speech samples, creating risks for privacy breaches. In this work, we investigate using foundation models to assist privacy-enhancing speech computing. Unlike conventional works focusing primarily on data perturbation or distributed algorithms, our work studies the possibilities of using pre-trained generative models to synthesize speech content as training data with just label guidance. We show that zero-shot learning with training label-guided synthetic speech content remains a challenging task. On the other hand, our results demonstrate that the model trained with synthetic speech samples provides an effective initialization point for low-resource ASU training. This result reveals the potential to enhance privacy by reducing user data collection while using label-guided synthetic speech content.
Constraint programming models for depth-optimal qubit assignment and SWAP-based routing ; Due to the limited connectivity of gate-model quantum devices, logical quantum circuits must be compiled to target hardware before they can be executed. Often, this process involves the insertion of SWAP gates into the logical circuit, usually increasing the depth of the circuit, and is achieved by solving a so-called qubit assignment and routing problem. Recently, a number of integer linear programming (ILP) models have been proposed for solving the qubit assignment and routing problem to proven optimality. These models encode the objective function and constraints of the problem, and leverage automated solver technology to find hardware-compliant quantum circuits. In this work, we propose constraint programming (CP) models for this problem and compare their performance against ILP for circuit depth minimization for both linear and two-dimensional grid lattice device topologies on a set of randomly generated instances. Our empirical analysis indicates that the proposed CP approaches outperform the ILP models both in terms of solution quality and runtime.
A Mechanistic Transform Model for Synthesizing Eye Movement Data with Improved Realism ; This manuscript demonstrates an improved model-based approach for synthetic degradation of previously captured eye movement signals. Signals recorded on a high-quality eye tracking sensor are transformed such that their resulting eye tracking signal quality is similar to recordings captured on a low-quality target device. The proposed model improves the realism of the degraded signals versus prior approaches by introducing a mechanism for degrading spatial accuracy and temporal precision. Moreover, a percentile-matching technique is demonstrated for mimicking the relative distributional structure of the signal quality characteristics of the target data set. The model is demonstrated to improve realism on a per-feature and per-recording basis using data from an EyeLink 1000 eye tracker and an SMI eye tracker embedded within a virtual reality platform. The model improves the median classification accuracy performance metric by 35.7% versus the benchmark model towards the ideal metric of 50%. This paper also expands the literature by providing an application-agnostic realism assessment workflow for synthetically generated eye movement signals.
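The percentile-matching step admits a compact sketch: map each value of a high-quality signal-quality feature to the target-device value at the same empirical percentile. The distributions below are synthetic stand-ins for real recordings.

```python
# A small sketch of percentile matching (quantile mapping): transform a
# source feature so its empirical distribution matches that of a target
# device's recordings.
import numpy as np

def percentile_match(source, target, n_quantiles=101):
    """Map each source value to the target value at the same percentile."""
    qs = np.linspace(0, 100, n_quantiles)
    src_q = np.percentile(source, qs)
    tgt_q = np.percentile(target, qs)
    return np.interp(source, src_q, tgt_q)

rng = np.random.default_rng(0)
high_quality = rng.normal(0.2, 0.05, 5000)     # e.g., spatial accuracy (deg)
low_quality = rng.gamma(2.0, 0.4, 5000)        # target device's distribution
degraded = percentile_match(high_quality, low_quality)
print(np.percentile(degraded, [25, 50, 75]).round(2))
print(np.percentile(low_quality, [25, 50, 75]).round(2))  # similar structure
```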
Multi-Object Manipulation via Object-Centric Neural Scattering Functions ; Learned visual dynamics models have proven effective for robotic manipulation tasks. Yet, it remains unclear how best to represent scenes involving multi-object interactions. Current methods decompose a scene into discrete objects, but they struggle with precise modeling and manipulation amid challenging lighting conditions, as they only encode appearance tied to specific illuminations. In this work, we propose using object-centric neural scattering functions (OSFs) as object representations in a model-predictive control framework. OSFs model per-object light transport, enabling compositional scene re-rendering under object rearrangement and varying lighting conditions. By combining this approach with inverse parameter estimation and graph-based neural dynamics models, we demonstrate improved model-predictive control performance and generalization in compositional multi-object environments, even in previously unseen scenarios and under harsh lighting conditions.
Undetectable Watermarks for Language Models ; Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested methods of embedding watermarks in model outputs by noticeably altering the output distribution. We ask: is it possible to introduce a watermark without incurring any detectable change to the output distribution? To this end, we introduce a cryptographically inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with the knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
Block-State Transformer ; State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies, and they efficiently scale to long sequences owing to their sub-quadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks in vision and audio; however, SSMs still lag behind Transformer performance in language modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST) that internally combines an SSM sublayer for long-range contextualization and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates a more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
FALL-E A Foley Sound Synthesis Model and Strategies ; This paper introduces FALL-E, a foley synthesis system, and its training/inference strategies. The FALL-E model employs a cascaded approach comprising low-resolution spectrogram generation, spectrogram super-resolution, and a vocoder. We trained every sound-related model from scratch using our extensive datasets and utilized a pre-trained language model. We conditioned the model with dataset-specific texts, enabling it to learn sound quality and recording environment based on text input. Moreover, we leveraged external language models to improve text descriptions of our datasets and performed prompt engineering for quality, coherence, and diversity. FALL-E was evaluated by an objective measure as well as listening tests in the DCASE 2023 Challenge Task 7. The submission achieved second place on average, with the best score for diversity, second place for audio quality, and third place for class fitness.
Extensions to the Guaranteed Service Model for Industrial Applications of Multi-Echelon Inventory Optimization ; Multi-echelon inventory optimization (MEIO) plays a key role in a supply chain seeking to achieve specified customer service levels with a minimum capital in inventory. In this work, we propose a generalized MEIO model based on the Guaranteed Service approach to allocate safety stock levels across the network at the lowest holding cost. This model integrates into a single model several existing and some novel features that are usually present in pharmaceutical multi-echelon supply chains: review periods, manufacturing facilities, hybrid nodes (nodes with both internal and external demand), minimum order quantities (MOQs), and different service level performance indicators (fill rate and cycle service levels). We include a polynomial regression to approximate fill rates as a possible target measure to set safety stocks. To improve efficiency, we propose a nonlinear programming model to support decision making, which can be reformulated as a Quadratically Constrained Program (QCP), leading to order-of-magnitude reductions in computational time. The performance of the model is evaluated by solving illustrative and real-world cases, and is validated with simulation.
Topological Parallax A Geometric Specification for Deep Perception Models ; For safety and robustness of AI systems, we introduce topological parallax as a theoretical and computational tool that compares a trained model to a reference dataset to determine whether they have similar multiscale geometric structure. Our proofs and examples show that this geometric similarity between dataset and model is essential to trustworthy interpolation and perturbation, and we conjecture that this new concept will add value to the current debate regarding the unclear relationship between overfitting and generalization in applications of deep learning. In typical DNN applications, an explicit geometric description of the model is impossible, but parallax can estimate topological features (components, cycles, voids, etc.) in the model by examining the effect on the Rips complex of geodesic distortions using the reference dataset. Thus, parallax indicates whether the model shares similar multiscale geometric features with the dataset. Parallax presents theoretically via topological data analysis (TDA) as a bi-filtered persistence module, and the key properties of this module are stable under perturbation of the reference dataset.
Morphological Inflection with Phonological Features ; Recent years have brought great advances in solving morphological tasks, mostly due to powerful neural models applied to various tasks such as reinflection and analysis. Yet, such morphological tasks cannot be considered solved, especially when little training data is available or when generalizing to previously unseen lemmas. This work explores the effects on performance obtained through various ways in which morphological models get access to sub-character phonological features that are the targets of morphological processes. We design two methods to achieve this goal: one that leaves models as is but manipulates the data to include features instead of characters, and another that manipulates models to take phonological features into account when building representations for phonemes. We elicit phonemic data from standard graphemic data using language-specific grammars for languages with shallow grapheme-to-phoneme mapping, and we experiment with two reinflection models over eight languages. Our results show that our methods yield comparable results to the grapheme-based baseline overall, with minor improvements in some of the languages. All in all, we conclude that patterns in character distributions are likely to allow models to infer the underlying phonological characteristics, even when phonemes are not explicitly represented.
Resume Information Extraction via Post-OCR Text Processing ; Information extraction (IE), one of the main tasks of natural language processing (NLP), has recently gained importance in resume processing. In studies extracting information from resume text, sentence classification has generally been performed using NLP models. In this study, we aim to extract information by classifying all of the text groups after preprocessing, namely Optical Character Recognition (OCR) and object recognition with the YOLOv8 model, of the resumes. The text dataset consists of 286 resumes collected for five different job description categories (education, experience, talent, personal, and language) in the IT industry. The dataset created for object recognition consists of 1198 resumes, which were collected from open internet sources and labeled as sets of text. BERT, BERTt, DistilBERT, RoBERTa, and XLNet were used as models. F1-score variances were used to compare the model results. In addition, the YOLOv8 model is also reported comparatively on its own. As a result of the comparison, DistilBERT showed better results despite having fewer parameters than the other models.
Post-Selection Inference for the Cox Model with Interval-Censored Data ; We develop a post-selection inference method for the Cox proportional hazards model with interval-censored data, which provides asymptotically valid p-values and confidence intervals conditional on the model selected by the lasso. The method is based on a pivotal quantity that is shown to converge to a uniform distribution under local alternatives. The proof can be adapted to many other regression models, which is illustrated by the extension to generalized linear models and the Cox model with right-censored data. Our method involves estimation of the efficient information matrix, for which several approaches are proposed with proofs of their consistency. Thorough simulation studies show that our method has satisfactory performance in samples of modest sizes. The utility of the method is illustrated via an application to an Alzheimer's disease study.
RobuT A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations ; Despite significant progress having been made in question answering on tabular data (Table QA), it is unclear whether, and to what extent, existing Table QA models are robust to task-specific perturbations, e.g., replacing key question entities or shuffling table columns. To systematically study the robustness of Table QA models, we propose a benchmark called RobuT, which builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and includes human-annotated adversarial perturbations in terms of table header, table content, and question. Our results indicate that both state-of-the-art Table QA models and large language models (e.g., GPT-3) with few-shot learning falter in these adversarial sets. We propose to address this problem by using large language models to generate adversarial examples to enhance training, which significantly improves the robustness of Table QA models. Our data and code are publicly available at https://github.com/yilunzhao/RobuT.
SugarCrepe Fixing Hackable Benchmarks for Vision-Language Compositionality ; In the last year alone, a surge of new benchmarks to measure compositional understanding of vision-language models has permeated the machine learning ecosystem. Given an image, these benchmarks probe a model's ability to identify its associated caption amongst a set of compositional distractors. Surprisingly, we find significant biases in all these benchmarks, rendering them hackable. This hackability is so dire that blind models with no access to the image outperform state-of-the-art vision-language models. To remedy this rampant vulnerability, we introduce SugarCrepe, a new benchmark for vision-language compositionality evaluation. We employ large language models, instead of the rule-based templates used in previous benchmarks, to generate fluent and sensical hard negatives, and we utilize an adversarial refinement mechanism to maximally reduce biases. We re-evaluate state-of-the-art models and recently proposed compositionality-inducing strategies, and find that their improvements were hugely overestimated, suggesting that more innovation is needed in this important direction. We release SugarCrepe and the code for evaluation at https://github.com/RAIVNLab/sugar-crepe.
An information projection approach to robust propensity score estimation under missing at random ; Missing data is frequently encountered in many areas of statistics. Propensity score weighting is a popular method for handling missing data. The propensity score method employs a response propensity model, but correct specification of the statistical model can be challenging in the presence of missing data. Doubly robust estimation is attractive, as the consistency of the estimator is guaranteed when either the outcome regression model or the propensity score model is correctly specified. In this paper, we first employ information projection to develop an efficient and doubly robust estimator under indirect model calibration constraints. The resulting propensity score estimator can be equivalently expressed as a doubly robust regression imputation estimator by imposing the internal bias calibration condition in estimating the regression parameters. In addition, we generalize the information projection to allow for outlier-robust estimation. Some asymptotic properties are presented. The simulation study confirms that the proposed method allows robust inference against not only the violation of various model assumptions, but also outliers. A real-life application is presented using data from the Conservation Effects Assessment Project.
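For readers unfamiliar with double robustness, the following generic AIPW-type sketch (textbook form, not the paper's information-projection estimator) shows the structure being referred to: an outcome regression plus an inverse-propensity-weighted residual correction.

```python
# A generic sketch of a doubly robust (AIPW-type) mean estimator under
# missing-at-random data: consistent if either the outcome regression or
# the propensity model is correctly specified.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(size=n)
p_respond = 1 / (1 + np.exp(-(0.3 + X[:, 0])))       # true propensity
R = rng.uniform(size=n) < p_respond                   # response indicator

# outcome regression fitted on respondents only
m_hat = LinearRegression().fit(X[R], y[R]).predict(X)
# propensity model fitted on the response indicator
pi_hat = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]

aipw = np.mean(m_hat + R * (y - m_hat) / pi_hat)
print(f"AIPW estimate: {aipw:.3f}  (true mean = 1.0)")
```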
Variational latent discrete representation for time series modelling ; Discrete latent space models have recently achieved performance on par with their continuous counterparts in deep variational inference. While they still face various implementation challenges, these models offer the opportunity for a better interpretation of latent spaces, as well as a more direct representation of naturally discrete phenomena. Most recent approaches propose to separately train very high-dimensional prior models on the discrete latent data, which is a challenging task on its own. In this paper, we introduce a latent data model where the discrete state is a Markov chain, which allows fast end-to-end training. The performance of our generative model is assessed on a building management dataset and on the publicly available Electricity Transformer Dataset.
Understanding Social Reasoning in Language Models with Language Models ; As Large Language Models (LLMs) become increasingly integrated into our everyday lives, understanding their ability to comprehend human mental states becomes critical for ensuring effective interactions. However, despite the recent attempts to assess the Theory-of-Mind (ToM) reasoning capabilities of LLMs, the degree to which these models can align with human ToM remains a nuanced topic of exploration. This is primarily due to two distinct challenges: (1) the presence of inconsistent results from previous evaluations, and (2) concerns surrounding the validity of existing evaluation methodologies. To address these challenges, we present a novel framework for procedurally generating evaluations with LLMs by populating causal templates. Using our framework, we create a new social reasoning benchmark (BigToM) for LLMs, which consists of 25 controls and 5,000 model-written evaluations. We find that human participants rate the quality of our benchmark higher than previous crowdsourced evaluations and comparable to expert-written evaluations. Using BigToM, we evaluate the social reasoning capabilities of a variety of LLMs and compare model performance with human performance. Our results suggest that GPT-4 has ToM capabilities that mirror human inference patterns, though less reliable, while other LLMs struggle.
Incremental Learning on Food Instance Segmentation ; Food instance segmentation is essential to estimate the serving size of dishes in a food image. The recent cutting-edge techniques for instance segmentation are deep learning networks with impressive segmentation quality and fast computation. Nonetheless, they are hungry for data and expensive to annotate. This paper proposes an incremental learning framework to optimize model performance given a limited data labelling budget. The power of the framework lies in a novel difficulty assessment model, which forecasts how challenging an unlabelled sample is to the latest trained instance segmentation model. The data collection procedure is divided into several stages, in each of which a new sample package is collected. The framework allocates the labelling budget to the most difficult samples. The unlabelled samples that meet a certain qualification from the assessment model are used to generate pseudo-labels. Eventually, the manual labels and pseudo-labels are sent to the training data to improve the instance segmentation model. On four large-scale food datasets, our proposed framework outperforms current incremental learning benchmarks and achieves competitive performance with the model trained on fully annotated samples.
Deep learning for postprocessing global probabilistic forecasts on subseasonal time scales ; Subseasonal weather forecasts are becoming increasingly important for a range of socioeconomic activities. However, the predictive ability of physical weather models is very limited on these time scales. We propose several postprocessing methods based on convolutional neural networks to improve subseasonal forecasts by correcting systematic errors of numerical weather prediction models. Our postprocessing models operate directly on spatial input fields and are therefore able to retain spatial relationships and to generate spatially homogeneous predictions. They produce global probabilistic tercile forecasts for biweekly aggregates of temperature and precipitation for weeks 3-4 and 5-6. In a case study based on a public forecasting challenge organized by the World Meteorological Organization, our postprocessing models outperform recalibrated forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), and achieve improvements over climatological forecasts for all considered variables and lead times. We compare several model architectures and training modes and demonstrate that all approaches lead to skillful and well-calibrated probabilistic forecasts. The good calibration of the postprocessed forecasts emphasizes that our postprocessing models reliably quantify the forecast uncertainty based on deterministic input information in the form of the ECMWF ensemble mean forecast fields only.
S.T.A.R.-Track Latent Motion Models for End-to-End 3D Object Tracking with Adaptive Spatio-Temporal Appearance Representations ; Following the tracking-by-attention paradigm, this paper introduces an object-centric, transformer-based framework for tracking in 3D. Traditional model-based tracking approaches incorporate the geometric effect of object and ego motion between frames with a geometric motion model. Inspired by this, we propose S.T.A.R.-Track, which uses a novel latent motion model (LMM) to additionally adjust object queries to account for changes in viewing direction and lighting conditions directly in the latent space, while still modeling the geometric motion explicitly. Combined with a novel learnable track embedding that aids in modeling the existence probability of tracks, this results in a generic tracking framework that can be integrated with any query-based detector. Extensive experiments on the nuScenes benchmark demonstrate the benefits of our approach, showing state-of-the-art performance for DETR3D-based trackers while drastically reducing the number of identity switches of tracks at the same time.
Bayesian Hierarchical Modeling and Inference for Mechanistic Systems in Industrial Hygiene ; A series of experiments in stationary and moving passenger rail cars were conducted to measure removal rates of particles in the size ranges of SARS-CoV-2 viral aerosols, and the air changes per hour provided by existing and modified air handling systems. Such methods for exposure assessments are customarily based on mechanistic models derived from physical laws of particle movement that are deterministic and do not account for measurement errors inherent in data collection. The resulting analysis compromises on reliably learning about mechanistic factors such as ventilation rates, aerosol generation rates, and filtration efficiencies from field measurements. This manuscript develops a Bayesian state space modeling framework that synthesizes information from the mechanistic system as well as the field data. We derive a stochastic model from finite difference approximations of differential equations explaining particle concentrations. Our inferential framework trains the mechanistic system using the field measurements from the chamber experiments and delivers reliable estimates of the underlying physical process with fully model-based uncertainty quantification. Our application falls within the realm of "Bayesian melding" of mechanistic and statistical models and is of significant relevance to environmental hygienists and public health researchers working on assessing the performance of aerosol removal rates for rail car fleets.
SSP Self-Supervised Post-training for Conversational Search ; Conversational search has been regarded as the next-generation search paradigm. Constrained by data scarcity, most existing methods distill the well-trained ad-hoc retriever to the conversational retriever. However, these methods, which usually initialize parameters by query reformulation to discover contextualized dependency, have trouble understanding the dialogue structure information and struggle with contextual semantic vanishing. In this paper, we propose SSP (Self-Supervised Post-training), a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model and enhance dialogue structure and contextual semantic understanding. Furthermore, SSP can be plugged into most existing conversational models to boost their performance. To verify the effectiveness of our proposed method, we apply the conversational encoder post-trained by SSP to the conversational search task using two benchmark datasets: CAsT-19 and CAsT-20. Extensive experiments show that SSP can boost the performance of several existing conversational search methods. Our source code is available at https://github.com/morecry/SSP.
Reliever Relieving the Burden of Costly Model Fits for Changepoint Detection ; We propose a general methodology, Reliever, for fast and reliable changepoint detection when the model fitting is costly. Instead of fitting a sequence of models for each potential search interval, Reliever employs a substantially reduced number of proxy (relief) models that are trained on a predetermined set of intervals. This approach can be seamlessly integrated with state-of-the-art changepoint search algorithms. In the context of high-dimensional regression models with changepoints, we establish that Reliever, when combined with an optimal search scheme, achieves estimators for both the changepoints and corresponding regression coefficients that attain optimal rates of convergence, up to a logarithmic factor. Through extensive numerical studies, we showcase the ability of Reliever to rapidly and accurately detect changes across a diverse range of parametric and nonparametric changepoint models.
On Conditional and Compositional Language Model Differentiable Prompting ; Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts can be represented by a human-engineered word sequence or by a learned continuous embedding. In this work, we investigate conditional and compositional differentiable prompting. We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata into continuous prompts that elicit task-specific outputs from the PLM. Our model uses a modular network structure based on our neural formulation of Production Systems, which allows the model to learn discrete rules (neural functions that learn to specialize in transforming particular prompt input patterns), making it suitable for compositional transfer learning and few-shot learning. We present extensive empirical and theoretical analysis and show that PRopS consistently surpasses other PLM adaptation techniques, and often improves upon fully fine-tuned models, on compositional generalization tasks, controllable summarization, and multilingual translation, while needing fewer trainable parameters.
Human Trajectory Forecasting with Explainable Behavioral Uncertainty ; Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars, and therefore has been heavily investigated. Most existing methods can be divided into model-free and model-based methods. Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well. Combining both methodologies, we propose a new Bayesian Neural Stochastic Differential Equation model, BNSP-SFM, where a behavior SDE model is combined with Bayesian neural networks (BNNs). While the BNNs provide superior predictive power, the SDE offers strong explainability with quantifiable uncertainty in behavior and observation. We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods. BNSP-SFM also generalizes better to drastically different scenes with different environments and crowd densities (20 times higher than the testing data). Finally, BNSP-SFM can provide predictions with confidence to better explain potential causes of behaviors. The code will be released upon acceptance.
Crossway Diffusion Improving Diffusion-based Visuomotor Policy via Self-supervised Learning ; Sequence modeling approaches have shown promising results in robot imitation learning. Recently, diffusion models have been adopted for behavioral cloning in a sequence modeling fashion, benefiting from their exceptional capabilities in modeling complex data distributions. The standard diffusion-based policy iteratively generates action sequences from random noise conditioned on the input states. Nonetheless, the model for diffusion policy can be further improved in terms of visual representations. In this work, we propose Crossway Diffusion, a simple yet effective method to enhance diffusion-based visuomotor policy learning via a carefully designed state decoder and an auxiliary self-supervised learning (SSL) objective. The state decoder reconstructs raw image pixels and other state information from the intermediate representations of the reverse diffusion process. The whole model is jointly optimized by the SSL objective and the original diffusion loss. Our experiments demonstrate the effectiveness of Crossway Diffusion in various simulated and real-world robot tasks, confirming its consistent advantages over the standard diffusion-based policy and substantial improvements over the baselines.
KDSTM Neural Semi-supervised Topic Modeling with Knowledge Distillation ; In text classification tasks, fine-tuning pretrained language models like BERT and GPT-3 yields competitive accuracy; however, both methods require pretraining on large text datasets. In contrast, general topic modeling methods possess the advantage of analyzing documents to extract meaningful patterns of words without the need for pretraining. To leverage topic modeling's unsupervised insight extraction on text classification tasks, we develop Knowledge Distillation Semi-supervised Topic Modeling (KDSTM). KDSTM requires no pretrained embeddings and few labeled documents, and it is efficient to train, making it ideal under resource-constrained settings. Across a variety of datasets, our method outperforms existing supervised topic modeling methods in classification accuracy, robustness, and efficiency, and achieves similar performance compared to state-of-the-art weakly supervised text classification methods.
Traversability Analysis for Autonomous Driving in Complex Environment A LiDAR-based Terrain Modeling Approach ; For autonomous driving, traversability analysis is one of the most basic and essential tasks. In this paper, we propose a novel LiDAR-based terrain modeling approach, which outputs stable, complete, and accurate terrain models and traversability analysis results. As terrain is an inherent property of the environment that does not change with different view angles, our approach adopts a multi-frame information fusion strategy for terrain modeling. Specifically, a normal distributions transform mapping approach is adopted to accurately model the terrain by fusing information from consecutive LiDAR frames. Then spatial-temporal Bayesian generalized kernel inference and bilateral filtering are utilized to promote the stability and completeness of the results while simultaneously retaining sharp terrain edges. Based on the terrain modeling results, the traversability of each region is obtained by performing geometric connectivity analysis between neighboring terrain regions. Experimental results show that the proposed method runs in real time and outperforms state-of-the-art approaches.
A model local interpretation routine for deep learning based radio galaxy classification ; Radio galaxy morphological classification is one of the critical steps when producing source catalogues for large-scale radio continuum surveys. While many recent studies attempted to classify source radio morphology from survey image data using deep learning algorithms (i.e., convolutional neural networks), they mostly concentrated on model robustness. It is unclear whether a model makes predictions in a similar way to how radio astronomers do. In this work, we used Local Interpretable Model-agnostic Explanation (LIME), a state-of-the-art eXplainable Artificial Intelligence (XAI) technique, to explain model prediction behaviour and thus examine this hypothesis in a proof-of-concept manner. In what follows, we describe how LIME generally works and present early results about how it helped explain the predictions of a radio galaxy classification model using this technique.
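The following hedged sketch shows what such a LIME analysis looks like with the `lime` package; the two-class classifier and the random cutout are toy stand-ins for a trained radio galaxy CNN and survey imagery.

```python
# A hedged sketch of applying LIME to an image classifier, in the spirit of
# the paper's setup (toy stand-ins, not the authors' model or data).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))            # stand-in for a galaxy cutout

def classifier_fn(batch):
    """LIME expects a batch of images -> class probabilities."""
    score = batch.mean(axis=(1, 2, 3))     # toy score: mean brightness
    return np.stack([1 - score, score], axis=1)

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    image, classifier_fn,
    top_labels=2,        # explain the two highest-scoring classes
    num_samples=500,     # perturbed copies used to fit the local surrogate
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)       # regions driving the prediction
```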
Parametrised polyconvex hyperelasticity with physics-augmented neural networks ; In the present work, neural networks are applied to formulate parametrised hyperelastic constitutive models. The models fulfill all common mechanical conditions of hyperelasticity by construction. In particular, partially input-convex neural network (pICNN) architectures are applied, based on feed-forward neural networks. Receiving two different sets of input arguments, pICNNs are convex in one of them, while for the other they represent arbitrary relationships which are not necessarily convex. In this way, the model can fulfill convexity conditions stemming from mechanical considerations without being too restrictive on the functional relationship in additional parameters, which may not necessarily be convex. Two different models are introduced, where one can represent arbitrary functional relationships in the additional parameters, while the other is monotonic in the additional parameters. As a first proof of concept, the model is calibrated to data generated with two differently parametrised analytical potentials, whereby three different pICNN architectures are investigated. In all cases, the proposed model shows excellent performance.
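The construction behind partial input convexity can be sketched compactly: keep the weights acting on the convex path non-negative (here via a softplus reparametrization) and use convex, non-decreasing activations, while the parameter path passes through an unconstrained subnetwork. Sizes and activations below are illustrative, not the paper's architectures.

```python
# A minimal sketch of a partially input-convex neural network (pICNN):
# convex in x for any value of the extra parameters p.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PICNN(nn.Module):
    def __init__(self, dim_x=3, dim_p=2, width=32):
        super().__init__()
        self.p_net = nn.Sequential(nn.Linear(dim_p, width), nn.Tanh())  # arbitrary in p
        self.Wx0 = nn.Linear(dim_x, width)
        self.Wz = nn.Linear(width, width, bias=False)   # weights kept >= 0
        self.Wx1 = nn.Linear(dim_x, width)
        self.out = nn.Linear(width, 1, bias=False)      # weights kept >= 0

    def forward(self, x, p):
        ctx = self.p_net(p)                              # parameter path
        z = F.softplus(self.Wx0(x) + ctx)                # convex, non-decreasing act.
        z = F.softplus(F.linear(z, F.softplus(self.Wz.weight)) + self.Wx1(x) + ctx)
        return F.linear(z, F.softplus(self.out.weight))  # convex in x for each p

model = PICNN()
W = model(torch.randn(8, 3), torch.randn(8, 2))          # potential W(x; p)
print(W.shape)  # torch.Size([8, 1])
```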
Opacity of Parametric Discrete Event Systems Models, Decidability, and Algorithms ; The finite automata (FA) model is a popular tool to characterize discrete event systems (DESs) due to its succinctness. However, for some complex systems, it is difficult to describe the necessary details by means of the FA model. In this paper, we consider a kind of extended finite automata (EFAs) in which each transition carries a predicate over state and event parameters. We also consider a type of simplified EFAs, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. Based upon these two parametric models, we investigate the problem of opacity analysis for parametric DESs. First of all, it is shown that the EFA model is more expressive than the EP-EFA model. Secondly, it is proved that the opacity properties for EFAs are undecidable in general. Moreover, the decidable opacity properties for EP-EFAs are investigated. We present verification algorithms for current-state opacity, initial-state opacity, and infinite-step opacity, and then discuss their complexity. This paper establishes a preliminary theory for the opacity of parametric DESs, which lays a foundation for the opacity analysis of complex systems.
KDE-Based Coarse-graining of Semicrystalline Systems with Correlated Three-body Intramolecular Interaction ; We present an extension to the iterative Boltzmann inversion method to generate coarse-grained models with three-body intramolecular potentials that can reproduce correlations in structural distribution functions. The coarse-grained structural distribution functions are computed using kernel density estimates (KDEs) to produce analytically differentiable distribution functions with controllable smoothening via the kernel bandwidth parameters. Bicubic interpolation is used to accurately interpolate the three-body potentials trained by the method. To demonstrate this new approach, a coarse-grained model of polyethylene is constructed in which each bead represents an ethylene monomer. The resulting model reproduces the radial distribution function (RDF) as well as the joint probability distribution of bond lengths and bond angles sampled from target atomistic simulations, with only a 10% increase in the computational cost compared to models with independent bond-length and bond-angle potentials. Analysis of the predicted crystallization kinetics of the model developed by the new approach reveals that the bandwidth parameters can be tuned to accelerate the modeling of polymer crystallization. Specifically, computing the target RDF with a larger bandwidth slows down the secondary crystallization, and increasing the bandwidth in the theta-direction of the bond-length and bond-angle distribution reduces the primary crystallization rate.
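The KDE ingredient is straightforward to sketch with scipy; the bandwidth argument plays the role of the controllable smoothening described above. The bond-length/bond-angle samples are synthetic stand-ins for atomistic data.

```python
# A small sketch of the KDE ingredient: estimate a smooth, differentiable
# joint distribution of bond length and bond angle with a tunable bandwidth.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
bond_len = rng.normal(2.5, 0.1, 10_000)                            # toy lengths
bond_ang = rng.normal(2.0, 0.15, 10_000) + 0.5 * (bond_len - 2.5)  # correlated

# bw_method scales the bandwidth: larger values smooth the target
# distribution more (the paper's controllable smoothening).
kde = gaussian_kde(np.vstack([bond_len, bond_ang]), bw_method=0.2)

L, A = np.meshgrid(np.linspace(2.1, 2.9, 80), np.linspace(1.5, 2.5, 80))
density = kde(np.vstack([L.ravel(), A.ravel()])).reshape(L.shape)
print(density.max())
```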
No-resonance conditions, random matrices, and quantum chaotic models ; In this article we investigate no-resonance conditions for quantum chaotic and random matrix models. No-resonance conditions are properties of the spectrum of a model, usually employed as a theoretical tool in the analysis of late-time dynamics. The first-order no-resonance condition holds when a spectrum is non-degenerate, while higher-order no-resonance conditions imply that sums of an equal number of energies are non-degenerate outside of permutations of the indices. The condition is usually assumed to hold for quantum chaotic models. In this work we use several tests from random matrix theory to demonstrate that no-resonance conditions are likely to be violated for all equal sums containing more than one energy. This is due to the presence of level attraction in the spectra after resolving appropriate symmetries. This result is produced for both a quantum chaotic Hamiltonian and two random matrix models. We then generalize important bounds in quantum equilibration theory to a case where the conditions are violated, and to the case of random matrix models.
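A quick numerical illustration of testing a second-order no-resonance condition on a random-matrix spectrum (a simplified check, not the paper's full battery of tests):

```python
# Check whether pairwise sums E_i + E_j of a random-matrix spectrum are
# non-degenerate (up to index permutations) by inspecting their minimal gaps.
import numpy as np

rng = np.random.default_rng(0)
n = 200
H = rng.normal(size=(n, n))
H = (H + H.T) / np.sqrt(2)                 # GOE-like symmetric matrix
E = np.linalg.eigvalsh(H)

# all sums E_i + E_j with i <= j (permutations of indices identified)
i, j = np.triu_indices(n)
sums = np.sort(E[i] + E[j])
gaps = np.diff(sums)
print(f"min gap between distinct pair-sums: {gaps[gaps > 1e-12].min():.3e}")
print(f"near-degenerate pair-sums (< 1e-6): {(gaps < 1e-6).sum()}")
```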
Enhanced Universal Kriging for Transformed Input Parameter Spaces ; As computational models become more expensive and complex, surrogate models have gained increasing attention in many scientific disciplines and are often necessary for conducting sensitivity studies, parameter optimization, and the like. In uncertainty quantification (UQ), model input quantities are often described by probability distributions. To construct a surrogate model, space-filling designs are generated in the input space to define training points, at which the computational model is then evaluated. The physical parameter space is typically transformed into an i.i.d. uniform input space so that space-filling training procedures can be applied in a sensible way. Owing to this transformation, however, surrogate modeling techniques tend to suffer in prediction accuracy. We therefore propose a new method in which the input parameter transformations are applied to the basis functions of universal kriging. To speed up hyperparameter optimization for universal kriging, suitable expressions for efficient gradient-based optimization are developed. Several benchmark functions are investigated, and the proposed method is compared with conventional approaches.
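A toy 1-D sketch of the central idea, under assumptions of our own (a lognormal physical input and a quadratic trend; the paper's basis functions and optimization details differ): the training points live in the transformed uniform space, but the universal-kriging trend basis is evaluated on the back-transformed physical input.

```python
# Universal kriging where the trend basis sees the physical input x = F^{-1}(u)
# even though the design and the kernel operate in the uniform space u.
import numpy as np
from scipy.stats import lognorm

dist = lognorm(s=0.5)                       # assumed physical input distribution
to_physical = dist.ppf                      # u in (0,1) -> physical x

def basis(u):
    x = to_physical(u)
    return np.column_stack([np.ones_like(x), x, x**2])  # trend on physical scale

def kernel(u1, u2, length=0.15):
    return np.exp(-((u1[:, None] - u2[None, :]) / length) ** 2)

# Training data from a toy "expensive model" acting on the physical input.
u_train = np.linspace(0.02, 0.98, 12)
y_train = np.sin(3 * to_physical(u_train))

K = kernel(u_train, u_train) + 1e-8 * np.eye(len(u_train))
F = basis(u_train)
Kinv_y = np.linalg.solve(K, y_train)
Kinv_F = np.linalg.solve(K, F)
beta = np.linalg.solve(F.T @ Kinv_F, F.T @ Kinv_y)      # generalized LS trend
resid = np.linalg.solve(K, y_train - F @ beta)

def predict(u_new):
    return basis(u_new) @ beta + kernel(u_new, u_train) @ resid

print("prediction at u = 0.5:", predict(np.array([0.5])))
```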
Crucible: Graphical Test Cases for Alloy Models ; Alloy is a declarative modeling language that is well suited for verifying system designs. Alloy models are automatically analyzed using the Analyzer, a toolset that helps users understand their system by displaying the consequences of their properties, helping identify missing or incorrect properties, and exploring the impact of modifications to those properties. To achieve this, the Analyzer invokes off-the-shelf SAT solvers to search for scenarios, which are assignments to the sets and relations of the model such that all executed formulas hold. To help users write more accurate software models, Alloy has a unit testing framework, AUnit, which allows users to outline specific scenarios and check whether those scenarios are correctly generated or prevented by their model. Unfortunately, AUnit currently supports only textual specifications of scenarios. This paper introduces Crucible, which allows users to create AUnit test cases graphically. In addition, Crucible provides automated guidance to ensure that users create well-structured, valuable test cases. As a result, Crucible eases the burden of adopting AUnit and brings AUnit test case creation more in line with how Alloy scenarios are commonly interacted with: graphically.
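As a rough analogue (plain Python, not Alloy or AUnit syntax) of what a test scenario pins down, the snippet below writes a scenario as an explicit assignment to sets and relations and checks it against the formulas the model is supposed to enforce; the toy file-system model is our own invention.

```python
# A scenario is an assignment to the model's sets and relations; it is
# "generated" by the model iff all of the model's formulas hold on it.
scenario = {
    "File": {"f1", "f2"},
    "Dir": {"root", "d1"},
    "parent": {("f1", "root"), ("f2", "d1"), ("d1", "root")},
}

def every_file_has_one_parent(s):
    for f in s["File"]:
        parents = {p for (c, p) in s["parent"] if c == f}
        if len(parents) != 1 or not parents <= s["Dir"]:
            return False
    return True

def no_self_parent(s):
    return all(c != p for (c, p) in s["parent"])

checks = [every_file_has_one_parent, no_self_parent]
print("scenario satisfies the model:", all(chk(scenario) for chk in checks))
```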
A Surrogate Data Assimilation Model for the Estimation of Dynamical Systems in a Limited Area ; We propose a novel learning-based surrogate data assimilation (DA) model for efficient state estimation in a limited area. Our model employs a feed-forward neural network for online computation, eliminating the need to integrate high-dimensional limited-area models. This approach offers significant computational advantages over traditional DA algorithms. Furthermore, our method avoids the need for lateral boundary conditions for the limited-area model in both online and offline computations. The design of our surrogate DA model rests on a robust theoretical framework built around two fundamental concepts: observability and effective region. Observability enables us to quantitatively determine the optimal amount of observational data necessary for accurate DA, while the effective region substantially reduces the computational burden of computing observability and of generating training data.
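A minimal sketch of the offline/online split, under toy assumptions of our own (a small stable linear system with partial observations, not the paper's architecture or its observability machinery): offline, training pairs map a short window of observations to the state at the end of that window; online, state estimation is a single network evaluation with no model integration.

```python
# Offline: generate (observation window -> state) pairs from a toy system.
# Online: the trained feed-forward network replaces model integration.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
dim, obs_dim, window = 8, 3, 4
A = rng.normal(size=(dim, dim))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))        # keep the dynamics stable
H = np.eye(obs_dim, dim)                               # observe first 3 components

def rollout(steps):
    """States and noisy partial observations from the toy system."""
    x = rng.normal(size=dim)
    xs, ys = [], []
    for _ in range(steps):
        x = A @ x + 0.1 * rng.normal(size=dim)         # process noise
        xs.append(x)
        ys.append(H @ x + 0.01 * rng.normal(size=obs_dim))
    return np.array(xs), np.array(ys)

xs, ys = rollout(5000)
inputs = np.stack([ys[t - window:t].ravel() for t in range(window, len(ys))])
targets = xs[window - 1 : len(ys) - 1]                 # state at window's end

net = nn.Sequential(nn.Linear(window * obs_dim, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X = torch.tensor(inputs, dtype=torch.float32)
Y = torch.tensor(targets, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```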