Attention and Self-Attention in Random Forests ; New models of random forests jointly using the attention and self-attention mechanisms are proposed for solving the regression problem. The models can be regarded as extensions of the attention-based random forest, whose idea stems from applying a combination of Nadaraya-Watson kernel regression and Huber's contamination model to random forests. The self-attention aims to capture dependencies among the tree predictions and to remove noisy or anomalous predictions in the random forest. The self-attention module is trained jointly with the attention module for computing weights. It is shown that the training of the attention weights reduces to solving a single quadratic or linear optimization problem. Three modifications of the general approach are proposed and compared. A specific multi-head self-attention for the random forest is also considered. Heads of the self-attention are obtained by varying its tuning parameters, including the kernel parameters and the contamination parameter of the models. Numerical experiments with various datasets illustrate the proposed models and show that adding the self-attention improves model performance on many datasets.
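The Nadaraya-Watson kernel regression mentioned above computes a prediction as a kernel-weighted average of targets, which is exactly an attention mechanism over training points. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=1.0):
    """Kernel-weighted average of targets: attention weights are a
    softmax over negative scaled squared distances to the query."""
    d2 = (x_train - x_query) ** 2          # squared distances
    scores = -d2 / (2.0 * bandwidth ** 2)  # Gaussian kernel log-scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # normalise to attention weights
    return float(weights @ y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])
est = nadaraya_watson(1.0, x, y, bandwidth=0.25)
```

With a small bandwidth the estimate at a training point essentially reproduces its target, illustrating how the kernel parameter controls the sharpness of the attention.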
Nonlinear wave interactions in geochemical modeling ; This paper is concerned with the study of the main wave interactions in a system of conservation laws arising in geochemical modeling. We study the modeling of chemical complexes on the rock surface. The presence of stable surface complexes affects the relative permeability. We add terms representing surface complexes to the accumulation function in the model presented in \cite{lambert2019nonlinear1}. This addition allows us to take into account the interaction of ions with the rock surface when modeling oil recovery by the injection of carbonated water. Compatibility hypotheses consistent with the modeling are made on the coefficients of the system to obtain meaningful solutions. We develop a Riemann solver that takes into account the complexity of the interactions and bifurcations of nonlinear waves. Such bifurcations occur at the inflection and resonance surfaces. We present the solution of a generalized eigenvalue problem in an (n+1)-dimensional case, which allows the construction of rarefaction curves. A method to find the discontinuous solutions is also presented. We find the solution path for some examples.
Duality relations in single-file diffusion ; Single-file transport, which corresponds to the diffusion of particles that cannot overtake each other in narrow channels, is an important topic in out-of-equilibrium statistical physics. Various microscopic models of single-file systems have been considered, such as the simple exclusion process, which has reached the status of a paradigmatic model. Several different models of single-file diffusion have been shown to be related by a duality relation, which holds either microscopically or only in the hydrodynamic limit of large times and large distances. Here, we show that, within the framework of fluctuating hydrodynamics, these relations are not specific to these models and that, in the hydrodynamic limit, every single-file system can be mapped onto a dual single-file system, which we characterise. This general duality relation allows us to obtain new results for different models by exploiting the solutions that are available for their dual models.
Twitmo: A Twitter Data Topic Modeling and Visualization Package for R ; We present Twitmo, a package that provides a broad range of methods to collect, preprocess, analyze and visualize geotagged Twitter data. Twitmo enables the user to collect geotagged Tweets from Twitter and provides a comprehensive and user-friendly toolbox to generate topic distributions from latent Dirichlet allocation (LDA), correlated topic models (CTM) and structural topic models (STM). Functions are included for preprocessing of text, model building and prediction. In addition, one of the innovations of the package is the automatic pooling of Tweets into longer pseudo-documents using hashtags and cosine similarities for better topic coherence. The package additionally comes with functionality to visualize collected data sets and fitted models in static as well as interactive ways, and offers built-in support for model visualization via LDAvis, providing great convenience for researchers in this area. The Twitmo package is an innovative toolbox that can be used to analyze public discourse on various topics, political parties or persons of interest in space and time.
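The cosine-similarity pooling idea (merging short tweets into longer pseudo-documents for better topic coherence) can be sketched with term-count vectors. This is a simplified, language-agnostic illustration, not Twitmo's actual R implementation; the threshold and greedy strategy are assumptions:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using simple term-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def pool_tweets(tweets, threshold=0.5):
    """Greedily merge tweets into pseudo-documents when similarity
    to an existing pool exceeds the threshold."""
    pools = []
    for tweet in tweets:
        for pool in pools:
            if cosine_similarity(tweet, " ".join(pool)) >= threshold:
                pool.append(tweet)
                break
        else:
            pools.append([tweet])
    return [" ".join(p) for p in pools]

pools = pool_tweets(["climate change is real",
                     "climate change policy",
                     "cats are cute"])
```

Similar tweets end up in the same pseudo-document, which gives topic models longer, more coherent inputs than individual tweets.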
Cosmic acceleration and ekpyrotic bounce with Chameleon field ; In this article, we explore the homogeneous and isotropic flat Friedmann-Robertson-Walker (FRW) model in Chameleon cosmology. By considering a non-minimal coupling between the scalar field and matter, we present a non-singular bouncing cosmological scenario of the universe. The universe initially exhibits an ekpyrotic phase during the contracting era, undergoes a non-singular bounce, and then, in the expanding era, smoothly transitions to a decelerating era with matter- and radiation-dominated phases. This decelerating era is in turn smoothly connected to the late-time dark-energy-dominated era of the present epoch. We use numerical solution techniques to solve the non-minimally coupled gravity equations and to understand the evolution of the scalar field along with other quantities, such as the effective potential of the model. The model thus unifies an ekpyrotic, non-singular, asymmetric bounce with the dark energy era of the present epoch. We study the evolution of the bouncing model and confront it with observational results on the equation-of-state parameter by constraining the model parameters.
Physics-Informed Learning of Aerosol Microphysics ; Aerosol particles play an important role in the climate system by absorbing and scattering radiation and by influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. In order to represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run at finer resolution or for longer times. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate input-output pairs on which a neural network is trained. We are able to learn the variables' tendencies, achieving an average R2 score of 77.1%. We further explore methods to inform and constrain the neural network with physical knowledge in order to reduce mass violation and enforce mass positivity. On a GPU, we achieve a speed-up of over 64x compared to the original model.
Uncertainty quantification in mechanistic epidemic models via cross-entropy approximate Bayesian computation ; This paper proposes a data-driven approximate Bayesian computation framework for parameter estimation and uncertainty quantification of epidemic models, which incorporates two novelties: (i) the identification of the initial conditions by using plausible dynamic states that are compatible with observational data; (ii) the learning of an informative prior distribution for the model parameters via the cross-entropy method. The new methodology's effectiveness is illustrated with the aid of actual data from the COVID-19 epidemic in the city of Rio de Janeiro, Brazil, employing an ordinary-differential-equation-based model with a generalized SEIR mechanistic structure that includes a time-dependent transmission rate, asymptomatics, and hospitalizations. A minimization problem with two cost terms (numbers of hospitalizations and deaths) is formulated, and twelve parameters are identified. The calibrated model provides a consistent description of the available data and is able to extrapolate forecasts over a few weeks, making the proposed methodology very appealing for real-time epidemic modeling.
Representing Random Utility Choice Models with Neural Networks ; Motivated by the successes of deep learning, we propose a class of neural-network-based discrete choice models, called RUMnets, inspired by the random utility maximization (RUM) framework. This model formulates the agents' random utility function using a sample average approximation. We show that RUMnets sharply approximate the class of RUM discrete choice models: any model derived from random utility maximization has choice probabilities that can be approximated arbitrarily closely by a RUMnet. Reciprocally, any RUMnet is consistent with the RUM principle. We derive an upper bound on the generalization error of RUMnets fitted on choice data, and gain theoretical insights into their ability to predict choices on new, unseen data depending on critical parameters of the dataset and architecture. By leveraging open-source libraries for neural networks, we find that RUMnets are competitive against several choice modeling and machine learning methods in terms of predictive accuracy on two real-world datasets.
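The sample average approximation of RUM choice probabilities can be illustrated by Monte Carlo: draw noise samples, take the argmax utility per sample, and average. This sketch uses i.i.d. Gumbel noise (the standard logit specification), so the probabilities converge to a softmax; it is a toy illustration, not the RUMnet architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def rum_choice_probs(utilities, n_samples=10000):
    """Sample-average approximation of RUM choice probabilities:
    each alternative's probability is the fraction of noise draws
    under which it has the maximal total utility."""
    u = np.asarray(utilities, dtype=float)
    noise = rng.gumbel(size=(n_samples, u.size))  # i.i.d. utility shocks
    choices = np.argmax(u + noise, axis=1)        # utility-maximizing choice
    return np.bincount(choices, minlength=u.size) / n_samples

probs = rum_choice_probs([1.0, 0.0])
```

With Gumbel shocks the exact probability of the first alternative is e/(e+1) ≈ 0.731, so the Monte Carlo estimate should land close to that value.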
Viskositas: Viscosity Prediction of Multicomponent Chemical Systems ; Viscosity plays a fundamental role in the production processes of the metallurgical and glass industries, as well as in geophysics. As its experimental measurement is expensive, both financially and in terms of time, several mathematical models have been built to provide viscosity predictions as a function of several variables, such as chemical composition and temperature, in linear and nonlinear models. A database was built in order to produce a nonlinear model via artificial neural networks, with variation of hyperparameters, to provide reliable predictions of viscosity for different chemical systems and temperatures. The resulting model, named Viskositas, demonstrated better statistical evaluations of mean absolute error, standard deviation and coefficient of determination on the test database when compared to different models from the literature and a commercial model, offering predictions with lower errors, less variability and fewer outliers.
Universal approximation theorems for continuous functions of càdlàg paths and Lévy-type signature models ; We prove a universal approximation theorem that allows us to approximate continuous functionals of càdlàg rough paths uniformly in time and on compact sets of paths via linear functionals of their time-extended signature. Our main motivation for treating this question comes from signature-based models for finance that allow for the inclusion of jumps. Indeed, as an important application, we define a new class of universal signature models based on an augmented Lévy process, which we call Lévy-type signature models. They extend continuous signature models for asset prices, as proposed e.g. by Arribas et al. (2020), in several directions, while still preserving universality and tractability properties. To analyze this, we first show that the signature process of a generic multivariate Lévy process is a polynomial process on the extended tensor algebra and then use this result for pricing and hedging approaches within Lévy-type signature models.
Bayesian regularization of empirical MDPs ; In most applications of model-based Markov decision processes, the parameters of the unknown underlying model are estimated from empirical data. Due to noise, the policy learned from the estimated model is often far from the optimal policy of the underlying model. When applied to the environment of the underlying model, the learned policy results in suboptimal performance, thus calling for solutions with better generalization performance. In this work we take a Bayesian perspective and regularize the objective function of the Markov decision process with prior information in order to obtain more robust policies. Two approaches are proposed, one based on L1 regularization and the other on relative entropic regularization. We evaluate our proposed algorithms on synthetic simulations and on real-world search logs of a large-scale online shopping store. Our results demonstrate the robustness of regularized MDP policies against the noise present in the models.
Subgame perfect Nash equilibrium for dynamic pricing competition with finite planning horizon ; Fixed capacities, homogeneous products and price-sensitive customer purchase decisions are primary distinguishing characteristics of numerous revenue management systems. Even with only two or three rivals, competition is highly fierce. This paper studies the subgame perfect Nash equilibrium of a price competition in an oligopoly market with perishable assets. Each seller has one unit of a good that cannot be replenished, and sellers compete by setting prices to sell their good over a finite sales horizon. In each period, buyers desire one unit of the good, and the number of buyers coming to the market is random. All sellers' prices are accessible to buyers, and search is costless. Using stochastic dynamic programming methods, the best response of each seller can be obtained from a one-shot price competition game based on the remaining periods and the current-time demand structure. Assuming a binary demand model, we demonstrate that the duopoly model has a unique Nash equilibrium and that the oligopoly model does not exhibit price dispersion with respect to a particular metric. We show that, under a generalized demand model, the duopoly model has a unique mixed-strategy Nash equilibrium, while the oligopoly model has a unique symmetric mixed-strategy Nash equilibrium.
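The stochastic dynamic programming machinery behind such pricing problems can be sketched for the simplest single-seller case (not the game-theoretic equilibrium of the paper): one unit, a finite horizon, a buyer arriving each period with some probability and buying if the posted price is below a Uniform(0,1) valuation. All parameter values here are illustrative assumptions:

```python
import numpy as np

def optimal_prices(T, arrival_prob=0.8, grid=np.linspace(0.01, 0.99, 99)):
    """Finite-horizon DP for one seller holding one unit.
    Bellman recursion: each period, either sell at price p
    (probability arrival_prob * (1 - p)) or carry the unit forward."""
    V = 0.0                      # value with zero periods remaining
    values, prices = [], []
    for _ in range(T):
        sell_prob = arrival_prob * (1.0 - grid)
        payoff = sell_prob * grid + (1.0 - sell_prob) * V
        best = int(np.argmax(payoff))
        V = float(payoff[best])  # value-to-go with one more period
        values.append(V)
        prices.append(float(grid[best]))
    return values, prices

values, prices = optimal_prices(5)
```

The recursion reproduces the standard revenue-management intuition: the value-to-go and the optimal posted price both rise as more selling periods remain, since an unsold unit retains option value.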
Topological States in Two-Dimensional Su-Schrieffer-Heeger Models ; We study the topological properties of generalized two-dimensional (2D) Su-Schrieffer-Heeger (SSH) models. We show that a pair of Dirac points appears in the Brillouin zone (BZ), constituting a semimetallic phase. Interestingly, the locations of these Dirac points are not pinned to any high-symmetry points of the BZ but are tunable by model parameters. Moreover, the merging of the two Dirac points undergoes a novel topological phase transition, which leads to either a weak topological insulator or a nodal-line metallic phase. We demonstrate these properties by constructing two specific models, which we refer to as type-I and type-II 2D SSH models. Feasible experimental platforms for realizing our models are also discussed.
An Unconstrained Symmetric Nonnegative Latent Factor Analysis for Large-scale Undirected Weighted Networks ; Large-scale undirected weighted networks are commonly found in big-data-related research fields. Such a network can naturally be quantified as a symmetric high-dimensional and incomplete (SHDI) matrix for implementing big data analysis tasks. A symmetric nonnegative latent-factor-analysis (SNL) model is able to efficiently extract latent factors (LFs) from an SHDI matrix. Yet it relies on a constraint-combination training scheme, which makes it lack flexibility. To address this issue, this paper proposes an unconstrained symmetric nonnegative latent-factor-analysis (USNL) model. Its main idea is twofold: (1) the output LFs are separated from the decision parameters by integrating a nonnegative mapping function into an SNL model; and (2) stochastic gradient descent (SGD) is adopted for implementing unconstrained model training while ensuring the nonnegativity of the output LFs. Empirical studies on four SHDI matrices generated from real big data applications demonstrate that a USNL model achieves higher prediction accuracy for missing data than an SNL model, as well as highly competitive computational efficiency.
Optimal response surface designs in the presence of model contamination ; Complete reliance on the fitted model in response surface experiments is risky, and relaxing this assumption, whether out of necessity or intentionally, requires an experimenter to account for multiple conflicting objectives. This work provides a methodological framework for a compound optimality criterion comprising elementary criteria responsible for: (i) the quality of the confidence-region-based inference to be done using the fitted model (DPLP-optimality); (ii) improving the ability to test for lack-of-fit from specified potential model contamination in the form of extra polynomial terms; and (iii) simultaneous minimisation of the variance and bias of the fitted model parameters arising from this misspecification. The latter two components have been newly developed in accordance with the model-independent 'pure error' approach to error estimation. The compound criteria and design construction were adapted to restricted randomisation frameworks (blocked and multi-stratum experiments), where the stratum-by-stratum approach was adopted. A point-exchange algorithm was employed to search for nearly optimal designs. The theoretical work is accompanied by one real and two illustrative examples that explore the relationship patterns among the individual components and the characteristics of the optimal designs, demonstrating the attainable compromises across the competing objectives and motivating some general practical recommendations.
Dynamics of cold random hyperbolic graphs with link persistence ; We consider and analyze a dynamic model of random hyperbolic graphs with link persistence. In the model, both connections and disconnections can be propagated from the current snapshot to the next with probability omega in [0, 1]. Otherwise, with probability 1 - omega, connections are re-established according to the random hyperbolic graphs model. We show that while the persistence probability omega affects the averages of the contact and inter-contact distributions, it does not affect the tails of these distributions, which decay as power laws with exponents that do not depend on omega. We also consider examples of real temporal networks, and we show that the considered model can adequately reproduce several of their dynamical properties. Our results advance our understanding of the realistic modeling of temporal networks and of the effects of link persistence on temporal network properties.
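The persistence dynamics described above can be sketched per node pair: with probability omega the pair keeps its current state (edge or non-edge), otherwise its state is redrawn from the static model. For simplicity this sketch abstracts the hyperbolic-distance-dependent connection probability into a single fixed Bernoulli parameter, which is an assumption, not the paper's model:

```python
import random

def next_snapshot(current_edges, all_pairs, connect_prob, omega, rng):
    """Advance one snapshot: each pair persists its current state with
    probability omega; otherwise it is resampled with probability
    connect_prob of being connected."""
    nxt = set()
    for pair in all_pairs:
        if rng.random() < omega:
            if pair in current_edges:       # persist edge or non-edge
                nxt.add(pair)
        elif rng.random() < connect_prob:   # re-draw from the static model
            nxt.add(pair)
    return nxt
```

At omega = 1 every snapshot is frozen; at omega = 0 the dynamics reduces to independent redraws of the static graph, matching the two limits of the model.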
Personalizing or Not: Dynamically Personalized Federated Learning with Incentives ; Personalized federated learning (FL) facilitates collaboration between multiple clients to learn personalized models without sharing private data. This mechanism mitigates the statistical heterogeneity commonly encountered in the system, i.e., non-IID data over different clients. Existing personalized algorithms generally assume all clients volunteer for personalization. However, potential participants might still be reluctant to personalize models, since the personalized models might not work well. In this case, clients choose to use the global model instead. To avoid making unrealistic assumptions, we introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL. This dynamically personalized FL technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better. We show that the algorithmic pipeline in DyPFL guarantees good convergence performance, allowing it to outperform alternative personalized methods under a broad range of conditions, including variation in heterogeneity, number of clients, local epochs, and batch sizes.
On the Map-Territory Fallacy Fallacy ; This paper presents a meta-theory of the usage of the free energy principle (FEP) and examines its scope in the modelling of physical systems. We consider the so-called 'map-territory fallacy' and the fallacious reification of model properties. By showing that the FEP is a consistent, physics-inspired theory of inferences of inferences, we disprove the assertion that the map-territory fallacy contradicts the principled usage of the FEP. As such, we argue that deploying the map-territory fallacy to criticise the use of the FEP and Bayesian mechanics itself constitutes a fallacy: what we call the 'map-territory fallacy fallacy'. In so doing, we emphasise a few key points: the uniqueness of the FEP as a model of particles or agents that model their environments; the restoration of convention to the FEP via its relation to the principle of constrained maximum entropy; the 'Jaynes optimality' of the FEP under this relation; and finally, the way that this meta-theoretical approach to the FEP clarifies its utility and scope as a formal modelling tool. Taken together, these features make the FEP, uniquely, the ideal model of generic systems in statistical physics.
Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models ; The imputation of missing values represents a significant obstacle for many real-world data analysis pipelines. Here, we focus on time series data and put forward SSSD, an imputation model that relies on two emerging technologies: conditional diffusion models as state-of-the-art generative models, and structured state space models as the internal model architecture, which are particularly suited to capturing long-term dependencies in time series data. We demonstrate that SSSD matches or even exceeds state-of-the-art probabilistic imputation and forecasting performance on a broad range of data sets and different missingness scenarios, including the challenging blackout-missing scenarios, where prior approaches failed to provide meaningful results.
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models ; Protecting NLP models against misspellings, whether accidental or adversarial, has been an object of research interest for the past few years. Existing remediations have typically either compromised accuracy or required full model retraining with each new class of attacks. We propose a novel method of retroactively adding resilience to misspellings to transformer-based NLP models. This robustness can be achieved without retraining the original NLP model and with only a minimal loss of language understanding performance on inputs without misspellings. Additionally, we propose a new efficient approximate method of generating adversarial misspellings, which significantly reduces the cost of evaluating a model's resilience to adversarial attacks.
A multiplicity-preserving crossover operator on graphs (extended version) ; Evolutionary algorithms usually explore a search space of solutions by means of crossover and mutation. While a mutation consists of a small, local modification of a solution, crossover mixes the genetic information of two solutions to compute a new one. For model-driven optimization (MDO), where models directly serve as possible solutions instead of first being transformed into another representation, a generic crossover operator has only recently been developed. Using graphs as a formal foundation for models, we further refine this operator in such a way that additional well-formedness constraints are preserved: we prove that, given two models that satisfy a given set of multiplicity constraints as input, our refined crossover operator computes two new models as output that also satisfy that set of constraints.
Modeling Paragraph-Level Vision-Language Semantic Alignment for Multi-Modal Summarization ; Most current multi-modal summarization methods follow a cascaded manner, where an off-the-shelf object detector is first used to extract visual features, and then these features are fused with language representations to generate the summary with an encoder-decoder model. The cascaded approach cannot capture the semantic alignments between images and paragraphs, which are crucial to a precise summary. In this paper, we propose ViLSum to jointly model paragraph-level Vision-Language Semantic Alignment and Multi-Modal Summarization. The core of ViLSum is a joint multi-modal encoder with two well-designed tasks, image reordering and image selection. The joint multi-modal encoder captures the interactions between modalities, where the reordering task guides the model to learn paragraph-level semantic alignment and the selection task guides the model to select summary-related images for the final summary. Experimental results show that our proposed ViLSum significantly outperforms current state-of-the-art methods. In further analysis, we find that the two well-designed tasks and the joint multi-modal encoder can effectively guide the model to learn reasonable paragraph-image and summary-image relations.
Maximum Likelihood on the Joint (Data, Condition) Distribution for Solving Ill-Posed Problems with Conditional Flow Models ; I describe a trick for training flow models using a prescribed rule as a surrogate for maximum likelihood. The utility of this trick is limited for non-conditional models, but an extension of the approach, applied to maximum likelihood of the joint probability distribution of data and conditioning information, can be used to train sophisticated conditional flow models. Unlike previous approaches, this method is quite simple: it does not require explicit knowledge of the distribution of conditions, auxiliary networks or other specific architecture, or additional loss terms beyond maximum likelihood, and it preserves the correspondence between latent and data spaces. The resulting models have all the properties of non-conditional flow models, are robust to unexpected inputs, and can predict the distribution of solutions conditioned on a given input. They come with guarantees of prediction representativeness and are a natural and powerful way to solve highly uncertain problems. I demonstrate these properties on easily visualized toy problems, then use the method to successfully generate class-conditional images and to reconstruct highly degraded images via super-resolution.
Multiscale equilibration of highly entangled isotropic model polymer melts ; We present a computationally efficient multiscale method for preparing equilibrated, isotropic long-chain model polymer melts. As an application, we generate Kremer-Grest melts of 1000 chains with 200 entanglements and 25000 to 2000 beads per chain, which cover the experimentally relevant bending rigidities up to and beyond the limit of the isotropic-nematic transition. In the first step, we employ Monte Carlo simulations of a lattice model to equilibrate the large-scale chain structure above the tube scale while ensuring a spatially homogeneous density distribution. We then use theoretical insight from a constrained-mode tube model to introduce the bead degrees of freedom together with random-walk conformational statistics all the way down to the Kuhn scale of the chains. This is followed by a sequence of simulations with carefully parameterized force-capped bead-spring models, which slowly introduce the local bead packing while reproducing the larger-scale chain statistics of the target Kremer-Grest system at all levels of force-capping. Finally, we can switch to the full Kremer-Grest model without perturbing the structure. The resulting chain statistics are in excellent agreement with literature results on all length scales accessible in brute-force simulations of shorter chains.
System Resilience through Health Monitoring and Reconfiguration ; We demonstrate an end-to-end framework for improving the resilience of man-made systems to unforeseen events. The framework is based on a physics-based digital twin model and three modules tasked with real-time fault diagnosis, prognostics and reconfiguration. The fault diagnosis module uses model-based diagnosis algorithms to detect and isolate faults, and generates interventions in the system to disambiguate uncertain diagnosis solutions. We scale the fault diagnosis algorithm up to the required real-time performance through the use of parallelization and surrogate models of the physics-based digital twin. The prognostics module tracks fault progressions and trains online degradation models to compute the remaining useful life of system components. In addition, we use the degradation models to assess the impact of the fault progression on the operational requirements. The reconfiguration module uses PDDL-based planning endowed with semantic attachments to adjust the system controls so that the fault impact on system operation is minimized. We define a resilience metric and use the example of a fuel system model to demonstrate how the metric improves with our framework.
Variable selection in sparse multivariate GLARMA models: Application to germination control by environment ; We propose a novel and efficient iterative two-stage variable selection approach for multivariate sparse GLARMA models, which can be used for modelling multivariate discrete-valued time series. Our approach consists in iteratively combining two steps: the estimation of the autoregressive moving average (ARMA) coefficients of multivariate GLARMA models, and variable selection in the coefficients of the generalized linear model (GLM) part of the model, performed by regularized methods. We explain how to implement our approach efficiently. We then assess the performance of our methodology using synthetic data and compare it with alternative methods. Finally, we illustrate it on RNA-Seq data resulting from polyribosome profiling to determine the translational status of all mRNAs in germinating seeds. Our approach, which is implemented in the MultiGlarmaVarSel R package available on CRAN, is very attractive since it benefits from a low computational load and is able to outperform the other methods in recovering the null and non-null coefficients.
A rapid-prototype MPC tool based on the gPROMS platform ; This paper presents a rapid-prototype Model Predictive Control (MPC) tool based on the gPROMS platform, with support for the whole MPC design workflow. The gPROMS-MPC tool can not only directly interact with a first-principle-based gPROMS model for closed-loop simulations, but also utilizes its mathematical information to derive simplified control-oriented models, primarily via linearization techniques. It can thus inherit the interpretability of the first-principle-based gPROMS model, unlike the PAROC framework, in which the control-oriented models are obtained from black-box system identification based on gPROMS simulation data. The gPROMS-MPC tool allows users to choose when to linearize, such as at each sampling time (successive linearization) or at specific points, to obtain one or multiple good linear models. The gPROMS-MPC tool implements our previous construction-free CDAL algorithm and the online parametric active-set qpOASES algorithm to solve sparse or condensed MPC problem formulations, respectively, for possible successive-linearization or high state-dimension cases. Our CDAL algorithm is also matrix-free and library-free, thus supporting embedded C-code generation. After many example validations of the tool, here we show only one example to investigate the performance of different MPC schemes.
Investigating the Impact of Model Misspecification in Neural Simulation-based Inference ; Aided by advances in neural density estimation, considerable progress has been made in recent years towards a suite of simulation-based inference (SBI) methods capable of performing flexible, black-box, approximate Bayesian inference for stochastic simulation models. While it has been demonstrated that neural SBI methods can provide accurate posterior approximations, the simulation studies establishing these results have considered only well-specified problems, that is, where the model and the data-generating process coincide exactly. However, the behaviour of such algorithms in the case of model misspecification has received little attention. In this work, we provide the first comprehensive study of the behaviour of neural SBI algorithms in the presence of various forms of model misspecification. We find that misspecification can have a profoundly deleterious effect on performance. Some mitigation strategies are explored, but no approach tested prevents failure in all cases. We conclude that new approaches are required to address model misspecification if neural SBI algorithms are to be relied upon to derive accurate scientific conclusions.
Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors ; We present a method for inferring diverse 3D models of human-object interactions from images. Reasoning about how humans interact with objects in complex scenes from a single 2D image is a challenging task, given the ambiguities arising from the loss of information through projection. In addition, modeling 3D interactions requires the ability to generalize towards diverse object categories and interaction types. We propose an action-conditioned modeling of interactions that allows us to infer diverse 3D arrangements of humans and objects without supervision on contact regions or 3D scene geometry. Our method extracts high-level commonsense knowledge from large language models such as GPT-3, and applies it to perform 3D reasoning about human-object interactions. Our key insight is that priors extracted from large language models can help in reasoning about human-object contacts from textual prompts only. We quantitatively evaluate the inferred 3D models on a large human-object interaction dataset and show how our method leads to better 3D reconstructions. We further qualitatively evaluate the effectiveness of our method on real images and demonstrate its generalizability towards interaction types and object categories.
A Computationally Efficient Algorithm to Estimate the Parameters of a Two-Dimensional Chirp Model with the Product Term ; Chirp signal models and their generalizations have been used to model many natural and man-made phenomena in the signal processing and time series literature. In recent times, several methods have been proposed for parameter estimation of these models. These methods, however, are either statistically sub-optimal or computationally burdensome, especially for two-dimensional (2D) chirp models. In this paper, we consider the problem of parameter estimation of 2D chirp models, propose a computationally efficient estimator, and establish the asymptotic theoretical properties of the proposed estimators. The proposed estimators are observed to have the same rates of convergence as the least squares estimators (LSEs). Furthermore, the proposed estimators of the chirp rate parameters are shown to be asymptotically optimal. Extensive and detailed numerical simulations are conducted, which support the theoretical results for the proposed estimators.
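A minimal sketch of the signal model, assuming the common 2D chirp form with linear and quadratic phase terms plus a cross (product) term; the exact parametrization used in the paper may differ:

```python
import numpy as np

def chirp2d(M, N, A, alpha, beta, gamma, delta, theta):
    """Synthesize a 2D chirp with an interaction (product) term.

    Assumed model form:
        y(m, n) = A * cos(alpha*m + beta*n + gamma*m^2 + delta*n^2 + theta*m*n)
    where theta multiplies the m*n product term the title refers to.
    """
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    phase = alpha * m + beta * n + gamma * m**2 + delta * n**2 + theta * m * n
    return A * np.cos(phase)

# A noisy observation: true signal plus i.i.d. Gaussian noise, the setting
# in which parameter estimators are typically evaluated.
rng = np.random.default_rng(0)
y = chirp2d(64, 64, A=2.0, alpha=0.5, beta=0.3,
            gamma=0.01, delta=0.02, theta=0.005)
y_obs = y + 0.1 * rng.standard_normal(y.shape)
```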
Data Feedback Loops Model-driven Amplification of Dataset Biases ; Datasets scraped from the internet have been critical to the successes of large-scale machine learning. Yet, this very success puts the utility of future internet-derived datasets at potential risk, as model outputs begin to replace human annotations as a source of supervision. In this work, we first formalize a system where interactions with one model are recorded as history and scraped as training data in the future. We then analyze its stability over time by tracking changes to a test-time bias statistic (e.g., gender bias of model predictions). We find that the degree of bias amplification is closely linked to whether the model's outputs behave like samples from the training distribution, a behavior which we characterize and define as consistent calibration. Experiments in three conditional prediction scenarios (image classification, visual role-labeling, and language generation) demonstrate that models that exhibit a sampling-like behavior are more calibrated and thus more stable. Based on this insight, we propose an intervention to help calibrate and stabilize unstable feedback systems. Code is available at httpsgithub.comrtaoridatafeedback.
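The feedback loop described above can be mimicked in a toy binary-label setting: each round, a "model" is refit on data that mixes the original human labels with its own past outputs, either sampling from its estimated label rate (calibrated, sampling-like behavior) or emitting the majority label (argmax). This is an illustrative simulation, not the paper's experimental setup, and `feedback_sim` is a hypothetical helper.

```python
import numpy as np

def feedback_sim(p0=0.6, rounds=20, n_per_round=1000, calibrated=True, seed=0):
    """Toy feedback loop: a binary 'model' is refit each round on data
    that mixes human labels with its own previous outputs; returns the
    trajectory of the dataset's positive-label rate (the bias statistic)."""
    rng = np.random.default_rng(seed)
    data = rng.random(n_per_round) < p0            # initial human-labeled data
    history = [data.mean()]
    for _ in range(rounds):
        p_hat = data.mean()                        # "training": estimate label rate
        if calibrated:
            new = rng.random(n_per_round) < p_hat  # sample from the model
        else:
            new = np.full(n_per_round, p_hat > 0.5)  # argmax: hard majority label
        data = np.concatenate([data, new])         # outputs scraped back as data
        history.append(data.mean())
    return history

stable = feedback_sim(calibrated=True)
amplified = feedback_sim(calibrated=False)
```

The calibrated run hovers near the original rate of 0.6, while the argmax run drifts toward 1.0, illustrating how non-sampling-like outputs amplify the initial bias.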
DECK Behavioral Tests to Improve Interpretability and Generalizability of BERT Models Detecting Depression from Text ; Models that accurately detect depression from text are important tools for addressing the post-pandemic mental health crisis. BERT-based classifiers' promising performance and their off-the-shelf availability make them great candidates for this task. However, these models are known to suffer from performance inconsistencies and poor generalization. In this paper, we introduce DECK (DEpression ChecKlist), depression-specific model behavioural tests that allow better interpretability and improve the generalizability of BERT classifiers in the depression domain. We create 23 tests to evaluate BERT, RoBERTa and ALBERT depression classifiers on three datasets, two Twitter-based and one clinical-interview-based. Our evaluation shows that these models (1) are robust to certain gender-sensitive variations in text; (2) rely on the important depressive-language marker of the increased use of first-person pronouns; (3) fail to detect some other depression symptoms, like suicidal ideation. We also demonstrate that DECK tests can be used to incorporate symptom-specific information in the training data and consistently improve the generalizability of all three BERT models, with an out-of-distribution F1-score increase of up to 53.93.
SEEK model extraction attack against hybrid secure inference protocols ; Security concerns about a machine learning model used in a prediction-as-a-service setting include the privacy of the model, the query, and the result. Secure inference solutions based on homomorphic encryption (HE) and/or multi-party computation (MPC) have been developed to protect all of this sensitive information. One of the most efficient types of solution utilizes HE for linear layers and MPC for nonlinear layers. However, for such hybrid protocols with semi-honest security, an adversary can malleate the intermediate features in the inference process and extract model information more effectively than with methods against inference services in plaintext. In this paper, we propose SEEK, a general extraction method for hybrid secure inference services that output only class labels. This method can extract each layer of the target model independently and is not affected by the depth of the model. For ResNet-18, SEEK can extract a parameter with less than 50 queries on average, with an average error of less than 0.03.
Mixing times for two classes of stochastically modeled reaction networks ; The past few decades have seen robust research on questions regarding the existence, form, and properties of stationary distributions of stochastically modeled reaction networks. When a stochastic model admits a stationary distribution, an important practical question is: what is the rate of convergence of the distribution of the process to the stationary distribution? With the exception of the work of Xu, Hansen, and Wiuf (2022), pertaining to models whose state space is restricted to the non-negative integers, there has been a notable lack of results related to this rate of convergence in the reaction network literature. This paper begins the process of filling that hole in our understanding. In this paper, we characterize this rate of convergence, via the mixing times of the processes, for two classes of stochastically modeled reaction networks. Specifically, by applying a Foster-Lyapunov criterion we establish exponential ergodicity for two classes of reaction networks introduced by Anderson et al. (2018). Moreover, we show that for one of the classes the convergence is uniform over the initial state.
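For intuition, exponential ergodicity can be observed numerically on the simplest reaction network, a birth-death chain 0 -> A (rate lam), A -> 0 (rate mu per molecule), whose stationary distribution is Poisson(lam/mu). The Gillespie simulation below is a generic illustration, not the classes of networks treated in the paper:

```python
import random

def gillespie_birth_death(lam, mu, x0, t_end, seed=0):
    """Exact (Gillespie) simulation of the network 0 -> A (rate lam),
    A -> 0 (rate mu * x); returns the state at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        total = lam + mu * x              # total event rate in state x
        t += rng.expovariate(total)       # time to next reaction
        if t > t_end:
            return x
        if rng.random() < lam / total:    # birth with prob lam/total
            x += 1
        else:                             # otherwise a death
            x -= 1

# Start far from equilibrium (x0 = 100 vs. stationary mean lam/mu = 10)
# and run past the relaxation time; the ensemble forgets its initial state.
samples = [gillespie_birth_death(10.0, 1.0, x0=100, t_end=20.0, seed=s)
           for s in range(300)]
mean_x = sum(samples) / len(samples)
```

The relaxation rate of this chain is mu, so by t = 20 the ensemble average is close to the stationary mean 10, regardless of the starting state.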
Serialized Interacting Mixed Membership Stochastic Block Model ; Recent years have seen renewed interest in the use of stochastic block modeling (SBM) in recommender systems. These models are seen as a flexible alternative to tensor decomposition techniques that is able to handle labeled data. Recent works proposed to tackle discrete recommendation problems via SBMs by considering larger contexts as input data and by adding second-order interactions between contexts' related elements. In this work, we show that these models are all special cases of a single global framework: the Serialized Interacting Mixed membership Stochastic Block Model (SIMSBM). It allows modeling an arbitrarily large context as well as an arbitrarily high order of interactions. We demonstrate that SIMSBM generalizes several recent SBM-based baselines. Besides, we demonstrate that our formulation allows for increased predictive power on six real-world datasets.
Long-time behavior of the completely positively correlated Symbiotic Branching Model ; We study the long-time behavior of a continuous-state Symbiotic Branching Model (SBM). The SBM can be seen as a unified model generalizing the Stepping Stone Model, Mutually Catalytic Branching Processes, and the Parabolic Anderson Model. It was introduced by Etheridge and Fleischmann in 2004. The key parameter in these models is the local correlation rho between the driving Brownian motions. The long-time behavior of all SBMs exhibits a dichotomy between coexistence and non-coexistence of the two populations, depending on the recurrence and transience of the migration and, in many cases, also on the branching rate. The most significant gap in the understanding of the long-time behavior of the SBM is for positive correlations in the transient regime. In this article we give a precise description of the long-time behavior of the SBM with rho = 1 and not necessarily identical initial conditions.
Learning from Mixed Datasets A Monotonic Image Quality Assessment Model ; Deep-learning-based image quality assessment (IQA) models usually learn to predict image quality from a single dataset, leading the model to overfit specific scenes. To account for this, mixed-dataset training can be an effective way to enhance the generalization capability of the model. However, it is non-trivial to combine different IQA datasets, as their quality evaluation criteria, score ranges, view conditions, and subjects are usually not shared during image quality annotation. In this paper, instead of aligning the annotations, we propose a monotonic neural network for IQA model learning with different datasets combined. In particular, our model consists of a dataset-shared quality regressor and several dataset-specific quality transformers. The quality regressor aims to obtain the perceptual qualities of each dataset, while each quality transformer maps the perceptual qualities to the corresponding dataset annotations with their monotonicity maintained. The experimental results verify the effectiveness of the proposed learning strategy, and our code is available at httpsgithub.comfzp0424MonotonicIQA.
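A dataset-specific monotonic mapping can be sketched with a tiny network whose weights are squared so that the output is nondecreasing in its input. This is one standard way to enforce monotonicity and only an assumption about the paper's construction; all names and parameter shapes below are illustrative.

```python
import numpy as np

# Hypothetical fixed parameters of a one-input, one-output monotone MLP.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((1, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal(8), rng.standard_normal()

def monotonic_transform(q):
    """Map shared perceptual quality q to a dataset's score scale.
    Squaring the weights makes every connection nonnegative, and tanh is
    nondecreasing, so the composition is nondecreasing in q."""
    h = np.tanh(q[:, None] * (W1**2) + b1)  # nonnegative input weights
    return h @ (W2**2) + b2                 # nonnegative output weights

q = np.linspace(-3.0, 3.0, 50)   # increasing perceptual qualities
scores = monotonic_transform(q)  # mapped dataset scores, also nondecreasing
```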
Text Revealer Private Text Reconstruction via Model Inversion Attacks against Transformers ; Text classification has become widely used in various natural language processing applications, such as sentiment analysis. Current applications often use large transformer-based language models to classify input texts. However, there is a lack of systematic study on how much private information can be inverted when publishing models. In this paper, we formulate Text Revealer, the first model inversion attack for text reconstruction against text classification with transformers. Our attacks faithfully reconstruct private texts included in the training data with access to the target model. We leverage an external dataset and GPT-2 to generate fluent text resembling the target domain, and then perturb its hidden state optimally with the feedback from the target model. Our extensive experiments demonstrate that our attacks are effective for datasets with different text lengths and can reconstruct private texts accurately.
Bound luminosity state in the extended Dicke model ; The extended Dicke model describes the interaction of a single-mode electromagnetic resonator with an ensemble of interacting two-level systems. In this paper we obtain the quasiclassical equations of motion of the extended Dicke model. For certain initial conditions and ranges of parameters, the equations of motion can be solved analytically via Jacobi elliptic functions. The solution is a bound luminosity state, which was described by the authors previously for the ordinary Dicke model and is now generalized to the case of the extended Dicke model. In this state, periodic beatings of the electromagnetic field occur in the microwave cavity filled with the ensemble of two-level systems. At the beginning of the period the energy is stored in the electromagnetic field in the cavity; then it is absorbed by the ensemble of two-level systems, being afterwards released back to the cavity at the end of the period. The chaotic properties of the semiclassical model are also investigated numerically.
Machine Learning and Analytical Power Consumption Models for 5G Base Stations ; The energy consumption of the fifth generation (5G) of mobile networks is one of the major concerns of the telecom industry. However, there is currently no accurate and tractable approach to evaluate 5G base station (BS) power consumption. In this article, we propose a novel model for a realistic characterisation of the power consumption of 5G multi-carrier BSs, which builds on a large data collection campaign. At first, we define a machine learning architecture that allows modelling multiple 5G BS products. Then, we exploit the knowledge gathered by this framework to derive a realistic and analytically tractable power consumption model, which can help drive both theoretical analyses as well as feature standardisation, development and optimisation frameworks. Notably, we demonstrate that such a model has high precision, and that it is able to capture the benefits of energy saving mechanisms. We believe this analytical model represents a fundamental tool for understanding 5G BS power consumption and accurately optimising network energy efficiency.
Online Policy Optimization for Robust MDP ; Reinforcement learning (RL) has exceeded human performance in many synthetic settings, such as video games and Go. However, real-world deployment of end-to-end RL models is less common, as RL models can be very sensitive to slight perturbations of the environment. The robust Markov decision process (MDP) framework, in which the transition probabilities belong to an uncertainty set around a nominal model, provides one way to develop robust models. While previous analysis shows RL algorithms are effective assuming access to a generative model, it remains unclear whether RL can be efficient under a more realistic online setting, which requires a careful balance between exploration and exploitation. In this work, we consider online robust MDPs by interacting with an unknown nominal system. We propose a robust optimistic policy optimization algorithm that is provably efficient. To address the additional uncertainty caused by an adversarial environment, our model features a new optimistic update rule derived via Fenchel conjugates. Our analysis establishes the first regret bound for online robust MDPs.
A case study of spatiotemporal forecasting techniques for weather forecasting ; The majority of real-world processes are spatiotemporal, and the data generated by them exhibits both spatial and temporal evolution. Weather is one of the most important processes that fall under this domain, and forecasting it has become a crucial part of our daily routine. Weather data analysis is considered among the most complex and challenging tasks. Although numerical weather prediction models are currently state-of-the-art, they are resource-intensive and time-consuming. Numerous studies have proposed time-series-based models as a viable alternative to numerical forecasts. Recent research has primarily focused on forecasting weather at a specific location; therefore, models can only capture temporal correlations. This self-contained paper explores various methods for regional data-driven weather forecasting, i.e., forecasting over multiple latitude-longitude points to capture spatiotemporal correlations. The results showed that spatiotemporal prediction models reduced computational cost while improving accuracy; in particular, the proposed tensor-train dynamic mode decomposition-based forecasting model has accuracy comparable to ConvLSTM without the need for training. We use the NASA POWER meteorological dataset to evaluate the models and compare them with the current state of the art.
Dimensions of Higher Order Factor Analysis Models ; The factor analysis model is a statistical model where a certain number of hidden random variables, called factors, linearly affect the behaviour of another set of observed random variables, with additional random noise. The main assumption of the model is that the factors and the noise are Gaussian random variables. This implies that the feasible set lies in the cone of positive semidefinite matrices. In this paper, we do not assume that the factors and the noise are Gaussian; hence the higher-order moment and cumulant tensors of the observed variables are generally nonzero. This motivates the notion of the kth-order factor analysis model, that is, the family of all random vectors in a factor analysis model where the factors and the noise have finite and possibly nonzero moment and cumulant tensors up to order k. This subset may be described as the image of a polynomial map onto a Cartesian product of symmetric tensor spaces. Our goal is to compute its dimension, and we provide conditions under which the image has positive codimension.
Neural Graphical Models ; Probabilistic graphical models are often used to understand the dynamics of a system. They can model relationships between features (nodes) and the underlying distribution. Theoretically these models can represent very complex dependency functions, but in practice simplifying assumptions are often made due to the computational limitations associated with graph operations. In this work we introduce Neural Graphical Models (NGMs), which attempt to represent complex feature dependencies with reasonable computational costs. Given a graph of feature relationships and corresponding samples, we capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework. We provide efficient learning, inference and sampling algorithms. NGMs can fit generic graph structures, including directed, undirected and mixed-edge graphs, as well as support mixed input data types. We present empirical studies that show NGMs' capability to represent Gaussian graphical models, perform inference analysis of lung cancer data, and extract insights from real-world infant mortality data provided by the Centers for Disease Control and Prevention.
Quantum Oppenheimer-Snyder and Swiss Cheese models ; By considering the quantum Oppenheimer-Snyder model in loop quantum cosmology, a new quantum black hole model whose metric tensor is a suitably deformed Schwarzschild one is derived. The quantum effects imply a lower bound on the mass of the black hole produced by the collapsing dust ball. For the case of larger masses, where the event horizon does form, the maximal extension of the spacetime and its properties are investigated. By discussing the scenario opposite to the quantum Oppenheimer-Snyder one, a quantum Swiss Cheese model is obtained, with a bubble surrounded by the quantum universe. This model is analogous to black hole cosmology or fecund universes, where the big bang is related to a white hole. Thus our models open a new window to cosmological phenomenology.
Look Ma, Only 400 Samples Revisiting the Effectiveness of Automatic N-Gram Rule Generation for Spelling Normalization in Filipino ; With 84.75 million Filipinos online, the ability of models to process online text is crucial for developing Filipino NLP applications. To this end, spelling correction is a crucial preprocessing step for downstream processing. However, the lack of data prevents the use of language models for this task. In this paper, we propose an N-gram Damerau-Levenshtein distance model with automatic rule extraction. We train the model on 300 samples, and show that despite the limited training data, it achieves good performance and outperforms other deep learning approaches in terms of accuracy and edit distance. Moreover, the model (1) requires little compute power, (2) trains in little time, thus allowing for retraining, and (3) is easily interpretable, allowing for direct troubleshooting, highlighting the success of traditional approaches over more complex deep learning models in settings where data is unavailable.
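The distance at the core of such a model can be illustrated with the optimal-string-alignment variant of the Damerau-Levenshtein distance, which counts insertions, deletions, substitutions, and adjacent transpositions. This is a simplified sketch; the paper's N-gram rule extraction is not reproduced here.

```python
def damerau_levenshtein(a, b):
    """Optimal string alignment variant of the Damerau-Levenshtein distance:
    edits are insert, delete, substitute, and adjacent transposition."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(len(b) + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

A transposition such as "teh" -> "the" costs a single edit here, whereas plain Levenshtein distance would charge two, which is why this variant suits spelling normalization.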
Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision ; Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm for synthesizing training labels efficiently. The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources, abstracted as labeling functions (LFs). Existing statistical label models typically rely only on the outputs of the LFs, ignoring the instance features when modeling the underlying generative process. In this paper, we attempt to incorporate the instance features into a statistical label model via the proposed FABLE. In particular, it is built on a mixture of Bayesian label models, each corresponding to a global pattern of correlation, and the coefficients of the mixture components are predicted by a Gaussian process classifier based on instance features. We adopt an auxiliary-variable-based variational inference algorithm to tackle the non-conjugacy between the Gaussian process and the Bayesian label models. An extensive empirical comparison on eleven benchmark datasets shows FABLE achieving the highest averaged performance across nine baselines.
The incomplete Analytic Hierarchy Process and Bradley-Terry model inconsistency and information retrieval ; Several methods of preference modeling, ranking, voting and multi-criteria decision making include pairwise comparisons. It is usually simpler to compare two objects at a time; furthermore, some relations (e.g., the outcomes of sports matches) are naturally known for pairs. This paper investigates and compares pairwise comparison models and the stochastic Bradley-Terry model. It is proved that they provide the same priority vectors for consistent (complete or incomplete) comparisons. For incomplete comparisons, all filling-in levels are considered. Recent results identified the optimal subsets and sequences of multiplicative/additive/reciprocal pairwise comparisons for small numbers of items, up to n = 6. Simulations in this paper show that the same subsets and sequences are optimal in the case of the Bradley-Terry and the Thurstone models as well. This somewhat surprising coincidence suggests the existence of a more general result. Further models of information and preference theory are subject to future investigation in order to identify optimal subsets of input data.
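For reference, Bradley-Terry strengths can be fit from (possibly incomplete) pairwise win counts with Hunter's MM algorithm; the win counts below are hypothetical, and pairs with no games simply contribute nothing to the update.

```python
def bradley_terry(wins, n_items, iters=200):
    """MM (minorization-maximization) fit of Bradley-Terry strengths.
    wins[i][j] = number of times item i beat item j; the model says
    P(i beats j) = p_i / (p_i + p_j)."""
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            w_i = sum(wins[i])                         # total wins of i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n_items / s for x in new]             # fix the scale (identifiability)
    return p

# Hypothetical tournament: item 0 dominates item 1, which dominates item 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
strengths = bradley_terry(wins, 3)
```

The recovered strengths respect the dominance ordering of the win matrix, and the normalization keeps their sum fixed across iterations.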
CrossAlign Modeling Deep Cross-lingual Interactions for Word Alignment ; Word alignment, which aims to extract lexical translation equivalents between source and target sentences, serves as a fundamental tool for natural language processing. Recent studies in this area have yielded substantial improvements by generating alignments from the contextualized embeddings of pre-trained multilingual language models. However, we find that the existing approaches capture few interactions between the input sentence pairs, which degrades the word alignment quality severely, especially for ambiguous words in the monolingual context. To remedy this problem, we propose CrossAlign to model deep interactions between the input sentence pairs, in which the source and target sentences are encoded separately with the shared self-attention modules in the shallow layers, while cross-lingual interactions are explicitly constructed by the cross-attention modules in the upper layers. Besides, to train our model effectively, we propose a two-stage training framework, where the model is trained with a simple Translation Language Modeling (TLM) objective in the first stage and then fine-tuned with a self-supervised alignment objective in the second stage. Experiments show that the proposed CrossAlign achieves state-of-the-art (SOTA) performance on four out of five language pairs.
Deep learning method in testing the cosmic distance duality relation ; The cosmic distance duality relation (DDR) is constrained from the combination of type Ia supernovae (SNe Ia) and strong gravitational lensing (SGL) systems using a deep learning method. To make use of the full SGL data, we reconstruct the luminosity distance from SNe Ia up to the highest redshift of SGL using deep learning, and then compare it with the angular diameter distance obtained from SGL. Considering the influence of the lens mass profile, we constrain the possible violation of the DDR in three lens mass models. Results show that in the SIS model and the EPL model, the DDR is violated at high confidence level, with the violation parameter eta_0 = 0.193^{+0.021}_{-0.019} and eta_0 = 0.247^{+0.014}_{-0.013}, respectively. In the PL model, however, the DDR is verified within the 1-sigma confidence level, with the violation parameter eta_0 = 0.014^{+0.053}_{-0.045}. Our results demonstrate that the constraints on the DDR strongly depend on the lens mass models. Given a specific lens mass model, the DDR can be constrained at a precision of O(10^{-2}) using deep learning.
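The DDR itself is the identity d_L(z) = (1 + z)^2 d_A(z), so a violation can be quantified through the ratio eta(z) = d_L / ((1 + z)^2 d_A), which equals 1 when the relation holds. A minimal numerical check with illustrative (not observed) distances:

```python
def ddr_eta(z, d_L, d_A):
    """Distance duality ratio eta(z) = d_L / ((1 + z)^2 * d_A);
    eta = 1 exactly when the DDR d_L = (1 + z)^2 * d_A holds."""
    return d_L / ((1 + z) ** 2 * d_A)

# Mock check: a luminosity distance built to satisfy the DDR gives eta = 1.
z = 0.5
d_A = 1200.0                 # angular diameter distance, illustrative value in Mpc
d_L = (1 + z) ** 2 * d_A     # DDR-consistent luminosity distance
eta = ddr_eta(z, d_L, d_A)
```

In the paper's setup, d_L comes from the SNe Ia reconstruction and d_A from the SGL systems, and the deviation of eta from 1 is parametrized by eta_0.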
Renormalization of Supersymmetric Lifshitz Sigma Models ; We study the renormalization of an N = 1 supersymmetric Lifshitz sigma model in three dimensions. The sigma model exhibits worldvolume anisotropy in space and time around the high-energy z = 2 Lifshitz point, such that the worldvolume is endowed with a foliation structure along a preferred time direction. In curved backgrounds, the target-space geometry is equipped with two distinct metrics, and the interacting sigma model is power-counting renormalizable. At low energies, the theory naturally flows toward the relativistic sigma model, where Lorentz symmetry emerges. In the superspace formalism, we develop a heat kernel method that is covariantized with respect to the bimetric target-space geometry, using which we evaluate the one-loop beta functions of the Lifshitz sigma model. This study forms an essential step toward a thorough understanding of the quantum critical supermembrane as a candidate high-energy completion of the relativistic supermembrane.
From Mimicking to Integrating Knowledge Integration for Pre-Trained Language Models ; Investigating better ways to reuse released pre-trained language models (PLMs) can significantly reduce the computational cost and the potential environmental side effects. This paper explores a novel PLM reuse paradigm, Knowledge Integration (KI). Without human annotations available, KI aims to merge the knowledge from different teacher PLMs, each of which specializes in a different classification problem, into a versatile student model. To achieve this, we first derive the correlation between virtual golden supervision and teacher predictions. We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student. Specifically, MUKI adopts Monte Carlo dropout to estimate model uncertainty for the supervision integration. An instance-wise re-weighting mechanism based on the margin of uncertainty scores is further incorporated to deal with potentially conflicting supervision from teachers. Experimental results demonstrate that MUKI achieves substantial improvements over baselines on benchmark datasets. Further analysis shows that MUKI can generalize well when merging teacher models with heterogeneous architectures, and even teachers specializing in cross-lingual datasets.
FRW model with two-fluid source in fractal cosmology ; The present paper deals with a flat Friedmann-Robertson-Walker (FRW) model with a two-fluid source in fractal cosmology. In this model, one fluid represents the matter content of the universe and the other fluid is a radiation field modeling the cosmic microwave background. To get a deterministic model, we have used the relation between pressure and density for matter through the gamma-law equation of state p_m = (gamma - 1) rho_m, with 1 <= gamma <= 2. The solutions of the fractal field equations are obtained in terms of Kummer's confluent hypergeometric function of the first kind. Some physical parameters of the models are obtained, and their behavior is discussed in detail with the help of graphs.
Estimating Option Pricing Models Using a Characteristic Function-Based Linear State Space Representation ; We develop a novel filtering and estimation procedure for parametric option pricing models driven by general affine jump-diffusions. Our procedure is based on the comparison between an option-implied, model-free representation of the conditional log-characteristic function and the model-implied conditional log-characteristic function, which is functionally affine in the model's state vector. We formally derive an associated linear state space representation and establish the asymptotic properties of the corresponding measurement errors. The state space representation allows us to use a suitably modified Kalman filtering technique to learn about the latent state vector and a quasi-maximum likelihood estimator of the model parameters, which brings important computational advantages. We analyze the finite-sample behavior of our procedure in Monte Carlo simulations. The applicability of our procedure is illustrated in two case studies that analyze S&P 500 option prices and the impact of exogenous state variables capturing Covid-19 reproduction and economic policy uncertainty.
Development of a Simulation Environment for Evaluation of a Forward-Looking Sonar System for Small AUVs ; This paper describes a high-fidelity sonar model and a simulation environment that implements the model. The model and simulation environment have been developed to aid in the design of a forward-looking sonar for autonomous underwater vehicles (AUVs). The simulator achieves real-time visualization through ray tracing and approximation. It facilitates the assessment of sonar design choices, such as beam pattern and beam location, and the assessment of obstacle detection and tracking algorithms. An obstacle detection model is proposed for which the null hypothesis is estimated from the environmental model. Sonar data are generated from the simulator and compared to the expected results from the detection model, demonstrating the benefits and limitations of the proposed approach.
svMorph Interactive geometry-editing tools for virtual patient-specific vascular anatomies ; We propose svMorph, a framework for interactive virtual sculpting of patient-specific vascular anatomic models. Our framework includes three tools for the creation of tortuosity, aneurysms, and stenoses in tubular vascular geometries. These shape edits are performed via geometric operations on the surface mesh and vessel centerline curves of the input model. The tortuosity tool also uses the physics-based Oriented Particles method, coupled with linear blend skinning, to achieve smooth, elastic-like deformations. Our tools can be applied separately or in combination to produce simulation-suitable morphed models. They are also compatible with popular vascular modeling software, such as SimVascular. To illustrate our tools, we morph several image-based, patient-specific models to create a range of shape changes and simulate the resulting hemodynamics via three-dimensional computational fluid dynamics. We also demonstrate the ability to quickly estimate the hemodynamic effects of the shape changes via automated generation of associated zero-dimensional lumped-parameter models.
LeVoice ASR Systems for the ISCSLP 2022 Intelligent Cockpit Speech Recognition Challenge ; This paper describes the LeVoice automatic speech recognition systems for track 2 of the Intelligent Cockpit Speech Recognition Challenge 2022. Track 2 is a speech recognition task without limits on the scope of model size. Our main points include deep-learning-based speech enhancement, text-to-speech-based speech generation, training data augmentation via various techniques, and speech recognition model fusion. We compared and fused a hybrid architecture and two kinds of end-to-end architectures. For end-to-end modeling, we used models based on the connectionist temporal classification/attention-based encoder-decoder architecture and the recurrent neural network transducer/attention-based encoder-decoder architecture. The performance of these models is evaluated with an additional language model to improve word error rates. As a result, our system achieved a 10.2% character error rate on the challenge test set and ranked third among the submitted systems in the challenge.
Model of rough surfaces with Gaussian processes ; Surface roughness plays a critical role in, e.g., fluid dynamics and contact mechanics. For example, to evaluate fluid behavior at different roughness properties, realworld or numerical experiments are performed. Numerical simulations of rough surfaces can speed up these studies because they can help collect more relevant information. However, current methods make it hard to simulate rough surfaces with deterministic or structured components. In this work, we present a novel approach to simulate rough surfaces with a Gaussian process GP and a noise model, because GPs can model structured and periodic elements. GPs generalize traditional methods and are not restricted to stationarity, so they can simulate a wider range of rough surfaces. In this paper, we summarize the theoretical similarities of GPs with autoregressive movingaverage processes and introduce a linear process view of GPs. We also show examples of ground and honed surfaces simulated by a predefined model. The proposed method can also be used to fit a model to measurement data of a rough surface. In particular, we demonstrate this by modeling turned profiles and surfaces that are inherently periodic.
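As a rough illustration of the approach above (not the authors' implementation), the following sketch samples a 1D surface profile from a GP whose kernel adds a squared exponential term (random roughness) to a periodic term (e.g. machining marks). All kernel choices and parameter values are assumptions for illustration.

```python
import numpy as np

def gp_profile(x, ell=0.5, period=2.0, amp_se=1.0, amp_per=0.5, nugget=1e-4, seed=0):
    # Kernel: squared exponential (random roughness) plus a periodic
    # term (structured component); a small nugget keeps Cholesky stable.
    d = x[:, None] - x[None, :]
    k_se = amp_se ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    k_per = amp_per ** 2 * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell ** 2)
    K = k_se + k_per + nugget * np.eye(len(x))
    rng = np.random.default_rng(seed)
    # Draw profile heights z ~ N(0, K) via the Cholesky factor of K.
    return np.linalg.cholesky(K) @ rng.standard_normal(len(x))

x = np.linspace(0.0, 10.0, 200)
z = gp_profile(x)
```

Dropping the periodic term recovers a purely stationary random roughness model, which is one way to see how the GP view generalizes traditional simulation methods.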
Geometric Design of Micro Scale Volumetric Receiver Using SystemLevel Inputs An Application of SurrogateBased Approach ; Concentrating solar thermal power is an emerging renewable technology with accessible storage options to generate electricity when required. Central receiver systems or solar towers have the highest commercial potential in largescale power plants because they reach the highest temperatures. With increasing solar chemistry applications and new solar thermal power plants, various receiver designs are required, differing in scale micro or macro, in materials, and in temperature limits. The purpose of this article is to compute the geometry of the receiver under various conditions and to provide information during conceptual design. This paper proposes a surrogatebased design optimization for a microscale volumetric receiver model from the literature. The study includes creating training data using the Latin Hypercube method, training five different surrogate models, surrogate model validation, a selection procedure, and surrogatebased design optimization. Selected surrogates have an R2 fit over 98 and a root mean square error below 4. In the final step, optimization performance is compared with the base model. Because of the model complexity, surrogate models reached better objective values in a significantly shorter time.
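The workflow described above (Latin Hypercube sampling, surrogate fitting, validation) can be sketched in miniature. The receiver simulation is replaced here by a cheap placeholder function, and a quadratic surrogate stands in for the five surrogate types compared in the paper; all names and values are illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    # One point per equal-probability stratum in each dimension,
    # with strata randomly paired across dimensions.
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]
    return u

def expensive_model(x):
    # Placeholder for the receiver simulation (hypothetical response).
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(1)
X = latin_hypercube(40, 2, rng)      # training data
y = expensive_model(X)

# Quadratic-with-interaction surrogate fitted by least squares.
def features(x):
    return np.column_stack([np.ones(len(x)), x, x ** 2, x[:, :1] * x[:, 1:]])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Validate the surrogate on fresh random points.
Xv = rng.random((20, 2))
rmse = np.sqrt(np.mean((features(Xv) @ coef - expensive_model(Xv)) ** 2))
```

Once validated, the cheap surrogate replaces the expensive model inside the design optimization loop, which is where the reported speedup comes from.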
From ModelBased to ModelFree Learning Building Control for Demand Response ; Gridinteractive building control is a challenging and important problem for reducing carbon emissions, increasing energy efficiency, and supporting the electric power grid. Currently, researchers and practitioners are confronted with a choice of control strategies ranging from modelfree purely datadriven to modelbased directly incorporating physical knowledge to hybrid methods that combine data and models. In this work, we identify stateoftheart methods that span this methodological spectrum and evaluate their performance for multizone building HVAC control in the context of three demand response programs. We demonstrate, in this context, that hybrid methods offer many benefits over both purely modelfree and modelbased methods as long as certain requirements are met. In particular, hybrid controllers are relatively sample efficient, fast online, and highly accurate so long as the test case falls within the distribution of training data. Like all datadriven methods, hybrid controllers are still subject to generalization errors when applied to outofsample scenarios. Key takeaways for control strategies are summarized and the developed software framework is opensourced.
Model Predictive Vehicle Yaw Stability Control via Integrated Active Front Wheel Steering and Individual Braking ; Vehicle stability control systems are important components of active safety systems for road transport. The problem of vehicle lateral stability control is addressed in this paper using active front wheel steering and individual braking. Vehicle lateral stability control means keeping the vehicle yaw rate and the vehicle side slip angle at desired values. For this reason, a modelbased controller is designed. The desired yaw rate is obtained from the single track vehicle model and the desired side slip angle is chosen as zero. The controller design consists of two parts: an upper and a lower controller. The upper controller is designed based on the Model Predictive Control MPC method; it changes the front wheel steering angles utilizing a steerbywire system and generates the required control moment for stabilizing the yaw motion of the vehicle. The lower controller is an individual braking algorithm that determines which wheel to brake. In this way, the control moment can be applied to the vehicle. The designed controller is tested using the nonlinear single track vehicle model and the higher fidelity CarMaker vehicle model.
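The desired yaw rate reference mentioned above follows from the steady-state linear single track (bicycle) model. A hedged sketch with illustrative vehicle parameters (assumptions, not the paper's values):

```python
def desired_yaw_rate(v, delta, m=1500.0, lf=1.2, lr=1.6, c_f=80000.0, c_r=80000.0):
    # Steady-state yaw rate of the linear single track model:
    # r = v * delta / (L + K * v^2), with understeer gradient K.
    L = lf + lr
    K = m * (lr * c_r - lf * c_f) / (c_f * c_r * L)
    return v * delta / (L + K * v ** 2)

# At 20 m/s with a 0.05 rad steering input:
r_des = desired_yaw_rate(v=20.0, delta=0.05)
```

With the slightly understeering parameters above, the reference yaw rate is lower than the kinematic value v*delta/L, which is the behavior a controller would track before saturating it by the friction limit.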
Enhancing Tabular Reasoning with Pattern Exploiting Training ; Recent methods based on pretrained language models have exhibited superior performance over tabular tasks e.g., tabular NLI, despite showing inherent problems such as not using the right evidence and inconsistent predictions across inputs while reasoning over the tabular data. In this work, we utilize PatternExploiting Training PET i.e., strategic MLM on pretrained language models to strengthen these tabular reasoning models' preexisting knowledge and reasoning abilities. Our upgraded model exhibits a superior understanding of knowledge facts and tabular reasoning compared to current baselines. Additionally, we demonstrate that such models are more effective for underlying downstream tasks of tabular inference on InfoTabs. Furthermore, we show our model's robustness against adversarial sets generated through various character and word level perturbations.
LMPriors PreTrained Language Models as TaskSpecific Priors ; Particularly in lowdata regimes, an outstanding challenge in machine learning is developing principled techniques for augmenting our models with suitable priors. This is to encourage them to learn in ways that are compatible with our understanding of the world. But in contrast to generic priors such as shrinkage or sparsity, we draw inspiration from the recent successes of largescale language models LMs to construct taskspecific priors distilled from the rich knowledge of LMs. Our method, Language Model Priors LMPriors, incorporates auxiliary natural language metadata about the task such as variable names and descriptions to encourage downstream model outputs to be consistent with the LM's commonsense reasoning based on the metadata. Empirically, we demonstrate that LMPriors improve model performance in settings where such natural language descriptions are available, and perform well on several tasks that benefit from such prior knowledge, such as feature selection, causal inference, and safe reinforcement learning.
Lexical Generalization Improves with Larger Models and Longer Training ; While finetuned language models perform well on many tasks, they were also shown to rely on superficial surface features such as lexical overlap. Excessive utilization of such heuristics can lead to failure on challenging inputs. We analyze the use of lexical overlap heuristics in natural language inference, paraphrase detection, and reading comprehension using a novel contrastive dataset, and find that larger models are much less susceptible to adopting lexical overlap heuristics. We also find that longer training leads models to abandon lexical overlap heuristics. Finally, we provide evidence that the disparity between model sizes has its source in the pretrained model.
On the failure of variational score matching for VAE models ; Score matching SM is a convenient method for training flexible probabilistic models, which is often preferred over the traditional maximumlikelihood ML approach. However, these models are less interpretable than normalized models; as such, training robustness is in general difficult to assess. We present a critical study of existing variational SM objectives, showing catastrophic failure on a wide range of datasets and network architectures. Our theoretical insights on the objectives emerge directly from their equivalent autoencoding losses when optimizing variational autoencoder VAE models. First, we show that in the Fisher autoencoder, SM produces far worse models than maximumlikelihood, and approximate inference by Fisher divergence can lead to lowdensity local optima. However, with important modifications, this objective reduces to a regularized autoencoding loss that resembles the evidence lower bound ELBO. This analysis predicts that the modified SM algorithm should behave very similarly to ELBO on Gaussian VAEs. We then review two other FDbased objectives from the literature and show that they reduce to uninterpretable autoencoding losses, likely leading to poor performance. The experiments verify our theoretical predictions and suggest that only ELBO and the baseline objective robustly produce expected results, while previously proposed SM methods do not.
Cascading Biases Investigating the Effect of Heuristic Annotation Strategies on Data and Models ; Cognitive psychologists have documented that humans use cognitive heuristics, or mental shortcuts, to make quick decisions while expending less effort. While performing annotation work on crowdsourcing platforms, we hypothesize that such heuristic use among annotators cascades on to data quality and model robustness. In this work, we study cognitive heuristic use in the context of annotating multiplechoice reading comprehension datasets. We propose tracking annotator heuristic traces, where we tangibly measure loweffort annotation strategies that could indicate usage of various cognitive heuristics. We find evidence that annotators might be using multiple such heuristics, based on correlations with a battery of psychological tests. Importantly, heuristic use among annotators determines data quality along several dimensions 1 known biased models, such as partial input models, more easily solve examples authored by annotators that rate highly on heuristic use, 2 models trained on annotators scoring highly on heuristic use don't generalize as well, and 3 heuristicseeking annotators tend to create qualitatively less challenging examples. Our findings suggest that tracking heuristic usage among annotators can potentially help with collecting challenging datasets and diagnosing model biases.
New Instantons for Matrix Models ; The complete, nonperturbative content of random matrix models is described by resurgenttransseries general solutions to their corresponding stringequations. These transseries include exponentiallysuppressed multiinstanton amplitudes obtained by eigenvalue tunneling, but they also contain exponentiallyenhanced and mixed instantonlike sectors with no known matrix model interpretation. This work shows how these sectors can be also described by eigenvalue tunneling in matrix models but on the nonphysical sheet of the spectral curve describing their largeN limit. This picture further explains the full resurgence of random matrices via analysis of all possible eigenvalue integrationcontours. How to calculate these anti eigenvaluetunneling amplitudes is explained in detail and in various examples, such as the cubic and quartic matrix models, and their doublescaling limit to Painleve I. This further provides direct matrixmodel derivations of their resurgent Stokes data, which were recently obtained by different techniques.
Model reduction for molecular diffusion in nanoporous media ; Porous materials are widely used for applications in gas storage and separation. The diffusive properties of a variety of gases in porous media can be modeled using molecular dynamics simulations that can be computationally demanding depending on the pore geometry, complexity and amount of gas adsorbed. We explore a dimensionality reduction approach for estimating the selfdiffusion coefficient of gases in simple pores using Langevin dynamics, such that the threedimensional 3D atomistic interactions that determine the diffusion properties of realistic systems can be reduced to an effective onedimensional 1D diffusion problem along the pore axis. We demonstrate the approach by modeling the transport of nitrogen molecules in singlewalled carbon nanotubes of different radii, showing that 1D Langevin models can be parametrized with a few singleparticle 3D atomistic simulations. The reduced 1D model predicts accurate diffusion coefficients over a broad range of temperatures and gas densities. Our work paves the way for studying the diffusion process of more general porous materials such as zeolites or metalorganic frameworks with effective models of reduced complexity.
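The reduced 1D picture can be illustrated with a minimal overdamped Langevin simulation along the pore axis, recovering the self-diffusion coefficient from the mean-squared displacement. All parameters are arbitrary stand-ins, not fitted to any nanotube system.

```python
import numpy as np

def langevin_msd(n_particles=500, n_steps=2000, dt=1e-3, gamma=1.0, kT=1.0, seed=0):
    # Free overdamped Langevin dynamics along the pore axis:
    # each step dx = sqrt(2 kT dt / gamma) * N(0, 1).
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    sigma = np.sqrt(2.0 * kT * dt / gamma)
    for _ in range(n_steps):
        x += sigma * rng.standard_normal(n_particles)
    return np.mean(x ** 2), n_steps * dt

msd, t = langevin_msd()
D_est = msd / (2.0 * t)   # 1D Einstein relation: MSD = 2 D t
# The estimate should approach the true value D = kT / gamma = 1.
```

In the actual reduction, the friction gamma (and any axial potential) would be parametrized from a few 3D atomistic single-particle simulations rather than chosen by hand.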
SemiSupervised Learning Based on Reference Model for Lowresource TTS ; Most previous neural texttospeech TTS methods are mainly based on supervised learning, which means they depend on a large training dataset and can hardly achieve comparable performance under lowresource conditions. To address this issue, we propose a semisupervised learning method for neural TTS in which labeled target data is limited, which can also resolve the problem of exposure bias in previous autoregressive models. Specifically, we pretrain the reference model based on Fastspeech2 with abundant source data and finetune it on a limited target dataset. Meanwhile, pseudo labels generated by the original reference model are used to further guide the finetuned model's training, achieving a regularization effect and reducing overfitting of the finetuned model during training on the limited target data. Experimental results show that our proposed semisupervised learning scheme with limited target data significantly improves the voice quality on test data, achieving naturalness and robustness in speech synthesis.
Modelling Correlation Matrices in Multivariate Dyadic Data Latent Variable Models for Intergenerational Exchanges of Family Support ; We define a model for the joint distribution of multiple continuous latent variables which includes a model for how their correlations depend on explanatory variables. This is motivated by and applied to social scientific research questions in the analysis of intergenerational help and support within families, where the correlations describe reciprocity of help between generations and complementarity of different kinds of help. We propose an MCMC procedure for estimating the model which maintains the positive definiteness of the implied correlation matrices, and describe theoretical results which justify this approach and facilitate efficient implementation of it. The model is applied to data from the UK Household Longitudinal Study to analyse exchanges of practical and financial support between adult individuals and their noncoresident parents.
Hypergraph Artificial Benchmark for Community Detection hABCD ; The Artificial Benchmark for Community Detection ABCD graph is a recently introduced random graph model with community structure and powerlaw distribution for both degrees and community sizes. The model generates graphs with similar properties as the wellknown LFR one, and its main parameter can be tuned to mimic its counterpart in the LFR model, the mixing parameter. In this paper, we introduce a hypergraph counterpart of the ABCD model, hABCD, which produces random hypergraphs whose groundtruth community sizes and degrees follow powerlaw distributions. As in the original ABCD, the new hABCD model can produce hypergraphs with various levels of noise. More importantly, the model is flexible and can mimic any desired level of homogeneity of hyperedges that fall into one community. As a result, it can be used as a suitable, synthetic playground for analyzing and tuning hypergraph community detection algorithms.
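One ingredient of such benchmark generators, sampling degrees and community sizes from truncated powerlaws, can be sketched as follows. Exponents and cutoffs here are illustrative assumptions, not the ABCD defaults.

```python
import numpy as np

def truncated_powerlaw(n, gamma, lo, hi, rng):
    # Inverse-CDF sampling from p(k) ~ k^(-gamma) on [lo, hi],
    # floored to integers.
    u = rng.random(n)
    a = lo ** (1.0 - gamma)
    b = hi ** (1.0 - gamma)
    return np.floor((a + u * (b - a)) ** (1.0 / (1.0 - gamma))).astype(int)

rng = np.random.default_rng(42)
degrees = truncated_powerlaw(1000, gamma=2.5, lo=5, hi=50, rng=rng)
community_sizes = truncated_powerlaw(30, gamma=1.5, lo=20, hi=200, rng=rng)
```

A full generator would then assign nodes to communities and place (hyper)edges so that a tunable fraction of each node's degree falls inside its own community, which is the role of the mixing parameter.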
An intelligent security centered resourceefficient resource management model for cloud computing environments ; This paper proposes a conceptual model for secure and performanceefficient workload management in cloud environments. In this model, a resource management unit is employed for energy and performance proficient allocation of virtual machines while ensuring the secure processing of users' applications by defending against data breaches due to unauthorized access to virtual machines in realtime. The resource management unit is guided by a secure virtual machine management unit which is designed to generate information regarding unauthorized access or intercommunication links among active virtual machines. Also, a workload analyzer unit operates concurrently to estimate resource utilization information to assist the resource management unit in the performanceefficient allocation of virtual machines. Contrary to prior works which engage access control mechanisms, encryption and decryption of data before transfer, and tunneling for prevention of unauthorized access to virtual machines, which raises excess computational cost overhead, the proposed model operates differently to serve the same purpose efficiently.
Accelerating Cosmological Models in fT, B Gravitational Theory ; In this paper, we have explored the field equations of fT, B gravity as an extension of teleparallel gravity in an isotropic and homogeneous spacetime. In the basic formalism developed, the dynamical parameters are derived by incorporating power law and exponential scale factor functions. The models show accelerating behaviour and approach LambdaCDM at late times. The present value of the equation of state parameter for both cases is obtained in accordance with the range provided by cosmological observations. The geometrical parameters and the scalar field reconstruction are performed to assess the viability of a late time accelerating Universe. Further, the stability of both models is presented. It has been observed that both models are parameter dependent. Since most geometrically modified theories of gravity favour the violation of the strong energy condition, we have derived the energy conditions for both the power law and exponential models. In both models, the violation of the strong energy condition is established.
Modelbased Reinforcement Learning with a Hamiltonian Canonical ODE Network ; Modelbased reinforcement learning usually suffers from high sample complexity in training the world model, especially for environments with complex dynamics. To make training for general physical environments more efficient, we introduce Hamiltonian canonical ordinary differential equations into the learning process, which inspires a novel neural ordinary differential autoencoder NODA model. NODA can model the physical world by nature and is flexible enough to impose Hamiltonian mechanics e.g., the dimension of the physical equations, which can further accelerate training of the environment models. It can consequently empower an RL agent with robust extrapolation using a small amount of samples as well as a guarantee on physical plausibility. Theoretically, we prove that NODA has uniform bounds for multistep transition errors and value errors under certain conditions. Extensive experiments show that NODA can learn the environment dynamics effectively with high sample efficiency, making it possible to facilitate reinforcement learning agents at the early stage.
Scaling up the selfoptimization model by means of onthefly computation of weights ; The SelfOptimization SO model is a useful computational model for investigating selforganization in soft Artificial life ALife as it has been shown to be general enough to model various complex adaptive systems. So far, existing work has been done on relatively small network sizes, precluding the investigation of novel phenomena that might emerge from the complexity arising from large numbers of nodes interacting in interconnected networks. This work introduces a novel implementation of the SO model that scales as O(N^2) with respect to the number of nodes N, and demonstrates the applicability of the SO model to networks with system sizes several orders of magnitude higher than previously was investigated. Removing the prohibitive computational cost of the naive O(N^3) algorithm, our onthefly computation paves the way for investigating substantially larger system sizes, allowing for more variety and complexity in future studies.
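The generic algebraic idea behind such an on-the-fly scheme can be illustrated in miniature: rather than materializing a Hebbian weight update as an N x N matrix, keep the reinforced states and compute the local field directly from them. The simplified Hopfield-style setup below is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def field_naive(W0, stored, alpha, s):
    # Materialize W = W0 + alpha * sum_k outer(s_k, s_k), then W @ s.
    W = W0 + alpha * sum(np.outer(sk, sk) for sk in stored)
    return W @ s

def field_onthefly(W0, stored, alpha, s):
    # Same field without building the N x N update:
    # W @ s = W0 @ s + alpha * sum_k s_k * (s_k . s).
    return W0 @ s + alpha * sum(sk * (sk @ s) for sk in stored)

rng = np.random.default_rng(0)
N = 50
W0 = rng.standard_normal((N, N))
stored = [np.sign(rng.standard_normal(N)) for _ in range(5)]
s = np.sign(rng.standard_normal(N))
h_naive = field_naive(W0, stored, 0.01, s)
h_fast = field_onthefly(W0, stored, 0.01, s)
```

The two computations agree exactly, but the second never allocates or updates the dense outer-product matrix, which is the kind of saving that enables much larger N.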
Overcoming Barriers to Skill Injection in Language Modeling Case Study in Arithmetic ; Through their transfer learning abilities, highlyparameterized large pretrained language models have dominated the NLP landscape for a multitude of downstream language tasks. Though linguistically proficient, the inability of these models to incorporate the learning of nonlinguistic entities numerals and arithmetic reasoning limits their usage for tasks that require numeric comprehension or strict mathematical reasoning. However, as we illustrate in this paper, building a general purpose language model that also happens to be proficient in mathematical reasoning is not as straightforward as training it on a numeric dataset. In this work, we develop a novel framework that enables language models to be mathematically proficient while retaining their linguistic prowess. Specifically, we offer informationtheoretic interventions to overcome the catastrophic forgetting of linguistic skills that occurs while injecting nonlinguistic skills into language models.
Cold Diffusion for Speech Enhancement ; Diffusion models have recently shown promising results for difficult enhancement tasks such as the conditional and unconditional restoration of natural images and audio signals. In this work, we explore the possibility of leveraging a recently proposed advanced iterative diffusion model, namely cold diffusion, to recover clean speech signals from noisy signals. The unique mathematical properties of the sampling process from cold diffusion could be utilized to restore highquality samples from arbitrary degradations. Based on these properties, we propose an improved training algorithm and objective to help the model generalize better during the sampling process. We verify our proposed framework by investigating two model architectures. Experimental results on benchmark speech enhancement dataset VoiceBankDEMAND demonstrate the strong performance of the proposed approach compared to representative discriminative models and diffusionbased enhancement models.
Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion ; Temporal data such as time series can be viewed as discretized measurements of the underlying function. To build a generative model for such data we have to model the stochastic process that governs it. We propose a solution by defining the denoising diffusion model in the function space which also allows us to naturally handle irregularlysampled observations. The forward process gradually adds noise to functions, preserving their continuity, while the learned reverse process removes the noise and returns functions as new samples. To this end, we define suitable noise sources and introduce novel denoising and scorematching models. We show how our method can be used for multivariate probabilistic forecasting and imputation, and how our model can be interpreted as a neural process.
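A minimal sketch of the forward-noising idea for functions: drawing the noise from a GP over the (possibly irregular) observation times keeps the noised function continuous, unlike i.i.d. per-point noise. This is an illustration of the concept under assumed kernel and schedule choices, not the paper's parameterization.

```python
import numpy as np

def forward_noise_step(ts, f, beta, ell=0.3, seed=0):
    # Noise drawn from a GP over the sample times, so nearby points
    # receive correlated noise and continuity is preserved.
    d = ts[:, None] - ts[None, :]
    K = np.exp(-0.5 * (d / ell) ** 2) + 1e-5 * np.eye(len(ts))
    eps = np.linalg.cholesky(K) @ np.random.default_rng(seed).standard_normal(len(ts))
    # Variance-preserving mixing of signal and structured noise.
    return np.sqrt(1.0 - beta) * f + np.sqrt(beta) * eps

ts = np.sort(np.random.default_rng(1).random(100))  # irregular times
f = np.sin(2.0 * np.pi * ts)
noised = forward_noise_step(ts, f, beta=0.1)
```

Iterating such steps drives the function toward a GP sample; the learned reverse process would then denoise back toward the data distribution.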
An efficient electrostatic embedding QMMM method using periodic boundary conditions based on particlemesh Ewald sums and electrostatic potential fitted charge operators ; Hybrid quantum mechanics molecular mechanics QMMM models successfully describe the properties of biological macromolecules. However, most QMMM methodologies are constrained to unrealistic gas phase models, thus limiting their applicability. In the literature, several works have attempted to define a QMMM model in periodic boundary conditions PBC but frequently the models are too timeconsuming for general applicability to biological systems in solution. Here, we define a simple and efficient electrostatic embedding QMMM model in PBC combining the benefits of electrostatic potential fitted ESPF atomic charges and particlemesh Ewald sums, that can efficiently treat systems of arbitrary size at a reasonable computational cost. To illustrate this, we apply our scheme to extract the lowest singlet excitation energies from a model for Arabidopsis thaliana cryptochrome 1 containing circa 93000 atoms, reproducing accurately the experimental absorption maximum.
StrongLensing Source Reconstruction with Denoising Diffusion Restoration Models ; Analysis of galaxygalaxy strong lensing systems is strongly dependent on any prior assumptions made about the appearance of the source. Here we present a method of imposing a datadriven prior regularisation for source galaxies based on denoising diffusion probabilistic models DDPMs. We use a pretrained model for galaxy images, AstroDDPM, and a chain of conditional reconstruction steps called denoising diffusion reconstruction model DDRM to obtain samples consistent both with the noisy observation and with the distribution of training data for AstroDDPM. We show that these samples have the qualitative properties associated with the posterior for the source model in a lowtomedium noise scenario they closely resemble the observation, while reconstructions from uncertain data show greater variability, consistent with the distribution encoded in the generative model used as prior.
Bouncing Universe in Loop Quantum Gravity full theory calculation ; In Loop Quantum Gravity mathematically rigorous models of full quantum gravity were proposed. In this paper we study a cosmological sector of one of the models describing quantum gravity with positive cosmological constant coupled to massless scalar field. In our previous research we introduced a method to reduce the model to homogeneousisotropic sector at the quantum level. In this paper we propose a method to restrict to the spatially flat sector. After this restriction the number of degrees of freedom gets substantially reduced. This allows us to make numerical calculations. Remarkably, the resulting model shares some structural similarities with the Loop Quantum Cosmological models and therefore sheds some new light on the relation between Loop Quantum Gravity and Loop Quantum Cosmology. According to our model the evolution of the Universe is periodic. The quantum gravity effects resolve the Big Bang singularity leading to a Big Bounce and cause the Universe to contract after a classical expansion phase Big Crunch.
Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data ; Predicting the presence of major depressive disorder MDD using behavioural and cognitive signals is a highly nontrivial task. The heterogeneous clinical profile of MDD means that any given speech, facial expression andor observed cognitive pattern may be associated with a unique combination of depressive symptoms. Conventional discriminative machine learning models potentially lack the complexity to robustly model this heterogeneity. Bayesian networks, however, may instead be wellsuited to such a scenario. These networks are probabilistic graphical models that efficiently describe the joint probability distribution over a set of random variables by explicitly capturing their conditional dependencies. This framework provides further advantages over standard discriminative modelling by offering the possibility to incorporate expert opinion in the graphical structure of the models, generating explainable model predictions, informing about the uncertainty of predictions, and naturally handling missing data. In this study, we apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
Clustering of countries based on the associated social contact patterns in epidemiological modelling ; Mathematical models have been used to understand the spread patterns of infectious diseases such as Coronavirus Disease 2019 COVID19. The transmission component of the models can be modelled in an agedependent manner via introducing contact matrix for the population, which describes the contact rates between the age groups. Since social contact patterns vary from country to country, we can compare and group the countries using the corresponding contact matrices. In this paper, we present a framework for clustering countries based on their contact matrices with respect to an underlying epidemic model. Since the pipeline is generic and modular, we demonstrate its application in a COVID19 model from Rost et al. which gives a hint about which countries can be compared in a pandemic situation, when only nonpharmaceutical interventions are available.
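The clustering step can be sketched generically: represent each country by its age contact matrix, define a pairwise distance, and group countries agglomeratively. The data below are synthetic and the single-linkage scheme is one simple choice, not necessarily the paper's.

```python
import numpy as np

def cluster(countries, mats, n_clusters):
    # Single-linkage agglomerative clustering on flattened matrices:
    # start from singletons and repeatedly merge the closest pair.
    clusters = [[c] for c in countries]
    vecs = {c: m.flatten() for c, m in zip(countries, mats)}

    def dist(a, b):
        return min(np.linalg.norm(vecs[x] - vecs[y]) for x in a for y in b)

    while len(clusters) > n_clusters:
        pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Synthetic contact matrices: three countries near pattern A, two near B.
rng = np.random.default_rng(0)
base_a, base_b = rng.random((8, 8)), rng.random((8, 8)) + 2.0
countries = ["A1", "A2", "A3", "B1", "B2"]
mats = [base_a + 0.05 * rng.random((8, 8)) for _ in range(3)] + \
       [base_b + 0.05 * rng.random((8, 8)) for _ in range(2)]
out = cluster(countries, mats, 2)
```

In the paper's framework the distance would instead be defined with respect to the underlying epidemic model, e.g. by comparing model outputs driven by each contact matrix rather than the raw matrix entries.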
Energy Storage Price Arbitrage via Opportunity Value Function Prediction ; This paper proposes a novel energy storage price arbitrage algorithm combining supervised learning with dynamic programming. The proposed approach uses a neural network to directly predict the opportunity cost at different energy storage stateofcharge levels, and then inputs the predicted opportunity cost into a modelbased arbitrage control algorithm for optimal decisions. We generate the historical optimal opportunity value function using price data and a dynamic programming algorithm, then use it as the ground truth and historical prices as predictors to train the opportunity value function prediction model. Our method achieves 65 to 90 of the profit of perfect foresight in case studies using different energy storage models and price data from New York State, which significantly outperforms existing modelbased and learningbased methods. While guaranteeing high profitability, the algorithm is also lightweight and can be trained and implemented with minimal computational cost. Our results also show that the learned prediction model has excellent transferability. The prediction model trained using price data from one region also provides good arbitrage results when tested over other regions.
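The opportunity value function that serves as the training target can be computed by backward induction on historical prices. A minimal dynamic-programming sketch with an illustrative price series and storage model (grid sizes, efficiency, and prices are assumptions):

```python
import numpy as np

def value_function(prices, soc_levels, power, eta=0.9):
    # Backward induction: V[t, i] is the maximal profit-to-go when the
    # storage is at soc_levels[i] just before the price at step t.
    n_t, n_s = len(prices), len(soc_levels)
    V = np.zeros((n_t + 1, n_s))
    for t in range(n_t - 1, -1, -1):
        for i, soc in enumerate(soc_levels):
            best = -np.inf
            for j, nxt in enumerate(soc_levels):
                d = nxt - soc
                if abs(d) > power:
                    continue  # power limit on charge/discharge
                # Charging (d > 0) buys d / eta; discharging sells eta * |d|.
                cash = -prices[t] * d / eta if d > 0 else -prices[t] * d * eta
                best = max(best, cash + V[t + 1, j])
            V[t, i] = best
    return V

prices = [10.0, 50.0, 20.0, 60.0]   # illustrative price series
soc = np.linspace(0.0, 1.0, 5)      # discretized state of charge
V = value_function(prices, soc, power=0.5)
```

In the proposed method a neural network predicts this value-to-go from recent prices, and the controller then picks the action maximizing immediate cash flow plus the predicted opportunity value.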
A Zoo of Deformed JackiwTeitelboim Models near Large Dimensional Black Holes ; We consider a charged Lifshitz black hole in the large transverse dimension limit. In this setup, the dynamics near the black hole horizon are shown to be effectively governed by a family of twodimensional models of dilaton gravity depending on the ratio of the dynamical parameter characterizing the black hole and the dimension of spacetime. This family includes the CallanGiddingsHarveyStrominger CGHS and JackiwTeitelboim JT models and their charged equivalents. This family also contains classes of asymptotically antide Sitter models beyond JT, characterized by a running Ricci scalar, with the option of adding charge. Finally, we argue that specific nonminimally coupled probe scalars in the parent Lifshitz model become minimally coupled scalars in the twodimensional theory, which is relevant for understanding semiclassical corrections in such models.
Multilingual Speech Emotion Recognition With MultiGating Mechanism and Neural Architecture Search ; Speech emotion recognition SER classifies audio into emotion categories such as Happy, Angry, Fear, Disgust and Neutral. While Speech Emotion Recognition SER is a common application for popular languages, it continues to be a problem for lowresourced languages, i.e., languages with no pretrained speechtotext recognition models. This paper first proposes a languagespecific model that extracts emotional information from multiple pretrained speech models, and then designs a multidomain model that simultaneously performs SER for various languages. Our multidomain model employs a multigating mechanism to generate unique weighted feature combination for each language, and also searches for specific neural network structure for each language through a neural architecture search module. In addition, we introduce a contrastive auxiliary loss to build more separable representations for audio data. Our experiments show that our model raises the stateoftheart accuracy by 3 for German and 14.3 for French.
Identifying Spurious Correlations and Correcting them with an Explanationbased Learning ; Identifying spurious correlations learned by a trained model is at the core of refining a trained model and building a trustworthy model. We present a simple method to identify spurious correlations that have been learned by a model trained for image classification problems. We apply imagelevel perturbations and monitor changes in the certainty of predictions made using the trained model. We demonstrate this approach using an image classification dataset that contains images with synthetically generated spurious regions and show that the trained model was overdependent on spurious regions. Moreover, we remove the learned spurious correlations with an explanationbased learning approach.
Bayesian Nonparametric Erlang Mixture Modeling for Survival Analysis ; We develop a flexible Erlang mixture model for survival analysis. The model for the survival density is built from a structured mixture of Erlang densities, mixing on the integer shape parameter with a common scale parameter. The mixture weights are constructed through increments of a distribution function on the positive real line, which is assigned a Dirichlet process prior. The model has a relatively simple structure, balancing flexibility with efficient posterior computation. Moreover, it implies a mixture representation for the hazard function that involves timedependent mixture weights, thus offering a general approach to hazard estimation. We extend the model to handle survival responses corresponding to multiple experimental groups, using a dependent Dirichlet process prior for the groupspecific distributions that define the mixture weights. Model properties, prior specification, and posterior simulation are discussed, and the methodology is illustrated with synthetic and real data examples.
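The construction described above, Erlang densities indexed by integer shape with a common scale, and mixture weights taken as increments of a distribution function G on the positive real line, can be sketched numerically. In this sketch G is a fixed exponential CDF and all parameter values are illustrative assumptions; under the paper's Dirichlet process prior, G would be random rather than fixed:

```python
import numpy as np
from scipy import stats

def erlang_mixture_density(t, weights, scale):
    """Survival density built as a mixture of Erlang(j, scale) densities,
    mixing on the integer shape j with a common scale parameter."""
    t = np.asarray(t, dtype=float)
    dens = np.zeros_like(t)
    for j, w in enumerate(weights, start=1):
        # An Erlang density is a gamma density with integer shape j.
        dens += w * stats.gamma.pdf(t, a=j, scale=scale)
    return dens

def weights_from_cdf(G_cdf, scale, J):
    """Mixture weights as increments of a distribution function G evaluated
    at the knots j*scale; leftover upper-tail mass is folded into the last
    weight so the weights sum to one."""
    knots = np.arange(1, J + 1) * scale
    cdf_vals = G_cdf(knots)
    w = np.diff(np.concatenate([[0.0], cdf_vals]))
    w[-1] += 1.0 - cdf_vals[-1]
    return w

# Illustrative fixed G (exponential CDF) and a truncation at J components.
G_cdf = stats.expon(scale=2.0).cdf
scale, J = 0.5, 30
w = weights_from_cdf(G_cdf, scale, J)
t_grid = np.linspace(1e-6, 40.0, 4000)
f = erlang_mixture_density(t_grid, w, scale)
total_mass = f.sum() * (t_grid[1] - t_grid[0])  # Riemann check: should be ~1
```

Because the weights sum to one and each component is a proper density, the mixture is a proper survival density; changing G (or its prior) reshapes the weights and hence the implied hazard.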
The Naval Seafloor Evolution Architecture: A Platform for Predicting Dynamic Seafloor Roughness ; Predicting the temporal and spatial dynamics of seafloor roughness is important for understanding bottom boundary layer hydrodynamics. The Navy Seafloor Evolution Architecture NSEA is a platform for modeling the dynamic nature of the seafloor by combining hydrodynamic forcing information and observations from diverse sources. NSEA's three modules comprise a specification of hydrodynamic forcing, a seafloor evolution model, and a model to generate roughness realizations. It can be run in forward mode to predict seafloor roughness, including the uncertainty from forcing information, or in inverse mode to estimate parameters from observed seafloor roughness. The model is demonstrated and shown to have good agreement with a field dataset of observed seafloor roughness. Similarly, running in inverse mode, NSEA was shown to predict the observed mean sediment grain size with good agreement. NSEA's modularity allows for a wide range of applications in hydrodynamic and acoustic modeling, and it is built within an expandable framework that lends itself to coupling with such models with minimal effort.
Knowledge distillation for fast and accurate DNA sequence correction ; Accurate genome sequencing can improve our understanding of biology and the genetic basis of disease. The standard approach for generating DNA sequences from PacBio instruments relies on HMMbased models. Here, we introduce Distilled DeepConsensus, a distilled transformerencoder model for sequence correction, which improves upon the HMMbased methods with runtime constraints in mind. Distilled DeepConsensus is 1.3x faster and 1.5x smaller than its larger counterpart while improving the yield of highquality reads (Q30) over the HMMbased method by 1.69x (vs. 1.73x for the larger model). With improved accuracy of genomic sequences, Distilled DeepConsensus improves downstream applications of genomic sequence analysis, such as reducing variant calling errors by 39% (34% for the larger model) and improving genome assembly quality by 3.8% (4.2% for the larger model). We show that the representations learned by Distilled DeepConsensus are similar between faster and slower models.
Quantifying the Individual Differences of Drivers' Risk Perception with Just Four Interpretable Parameters ; Automated vehicles will be mixed with humandriven vehicles for a long time to come. Understanding how drivers assess driving risks and modelling their individual differences are significant for automated vehicles to develop humanlike and customized behaviors, so as to gain people's trust and acceptance. However, existing driving risk models are developed at a statistical level, and no single scenariouniversal driving risk measure can correctly describe risk perception differences among drivers. We propose a concise yet effective model, called the Potential Damage Risk PODAR model, which provides a universal and physically meaningful structure for driving risk estimation and is suitable for general noncollision and collision scenes. In this paper, based on an openaccess dataset collected from an obstacle avoidance experiment, four physically interpretable parameters in PODAR, namely prediction horizon, damage scale, temporal attenuation, and spatial attention, are calibrated, and consequently individual risk perception models are established for each driver. The results prove the capacity and potential of PODAR to model individual differences in perceived driving risk, laying the foundation for autonomous driving to develop humanlike behaviors.
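The abstract names the four parameters but not their functional forms, so the following is only a hypothetical sketch of a PODAR-style risk estimate. The roles of prediction horizon, damage scale, temporal attenuation, and spatial attention are mimicked here with simple constant-velocity prediction and exponential decay terms; none of these choices are the published model:

```python
import numpy as np

def podar_style_risk(ego_pos, ego_vel, obs_pos, obs_vel,
                     horizon=3.0, damage_scale=1.0,
                     temporal_decay=0.5, spatial_sigma=2.0, dt=0.1):
    """Hypothetical sketch: risk accumulated over a prediction horizon.
    Positions/velocities are 2D numpy arrays; the four keyword parameters
    mirror the four named in the abstract, but the formulas are
    illustrative assumptions only."""
    risk = 0.0
    for t in np.arange(0.0, horizon, dt):
        # Constant-velocity prediction of both agents over the horizon.
        gap = np.linalg.norm((ego_pos + ego_vel * t) - (obs_pos + obs_vel * t))
        # Potential damage grows with relative speed (scaled).
        damage = damage_scale * np.linalg.norm(ego_vel - obs_vel) ** 2
        # Attenuate with time into the future (temporal attenuation) and
        # with predicted distance (spatial attention).
        risk += damage * np.exp(-temporal_decay * t) \
                       * np.exp(-gap ** 2 / (2 * spatial_sigma ** 2)) * dt
    return risk

# Sanity check: a head-on approach should read as riskier than a
# same-speed parallel pass.
head_on = podar_style_risk(np.array([0., 0.]), np.array([10., 0.]),
                           np.array([30., 0.]), np.array([-10., 0.]))
parallel = podar_style_risk(np.array([0., 0.]), np.array([10., 0.]),
                            np.array([30., 5.]), np.array([10., 0.]))
```

Calibrating the four keyword parameters per driver, as the abstract describes, would then yield one such risk function per individual.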
New oneparametric extension of the Starobinsky inflationary model ; We propose a oneparametric extension of the Starobinsky R + R^2 model by adding the (R/(m^2 beta^2))^{3/2} term. The parameter m is the inflaton mass, which is determined in the same way as in the Starobinsky model, and beta is a dimensionless constant. Using the Einstein frame and the scalar field potential, we obtain the inflationary parameters of the proposed model. The value of the tensortoscalar ratio r can be significantly larger than in the Starobinsky model. The considered inflationary model is in good agreement with the current observational data. The corresponding scalar field potential is a polynomial of the exponential function.
Instancespecific and Modeladaptive Supervision for Semisupervised Semantic Segmentation ; Recently, semisupervised semantic segmentation has achieved promising performance with a small fraction of labeled data. However, most existing studies treat all unlabeled data equally and barely consider the differences and training difficulties among unlabeled instances. Differentiating unlabeled instances can promote instancespecific supervision to adapt to the model's evolution dynamically. In this paper, we emphasize the crucial role of instance differences and propose an instancespecific and modeladaptive supervision for semisupervised semantic segmentation, named iMAS. Relying on the model's performance, iMAS employs a classweighted symmetric intersectionoverunion to evaluate the quantitative hardness of each unlabeled instance and supervises the training on unlabeled data in a modeladaptive manner. Specifically, iMAS learns from unlabeled instances progressively by weighing their corresponding consistency losses based on the evaluated hardness. Besides, iMAS dynamically adjusts the augmentation for each instance such that the distortion degree of augmented instances is adapted to the model's generalization capability across the training course. Without integrating additional losses or training procedures, iMAS obtains remarkable performance gains over current stateoftheart approaches on segmentation benchmarks under different semisupervised partition protocols.
Distilling Knowledge from SelfSupervised Teacher by Embedding Graph Alignment ; Recent advances have indicated the strengths of selfsupervised pretraining for improving representation learning on downstream tasks. Existing works often utilize selfsupervised pretrained models by finetuning on downstream tasks. However, finetuning does not generalize to the case when one needs to build a customized model architecture different from the selfsupervised model. In this work, we formulate a new knowledge distillation framework to transfer the knowledge from selfsupervised pretrained models to any other student network by a novel approach named Embedding Graph Alignment. Specifically, inspired by the spirit of instance discrimination in selfsupervised learning, we model the instanceinstance relations by a graph formulation in the feature embedding space and distill the selfsupervised teacher knowledge to a student network by aligning the teacher graph and the student graph. Our distillation scheme can be flexibly applied to transfer the selfsupervised knowledge to enhance representation learning on various student networks. We demonstrate that our model outperforms multiple representative knowledge distillation methods on three benchmark datasets, including CIFAR100, STL10, and TinyImageNet. Code is available at httpsgithub.comyccmEGA.
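The core idea, building an instance-instance graph in each embedding space and aligning the two, can be sketched as follows. The plain Frobenius distance between cosine-similarity matrices is a simplifying assumption standing in for the paper's alignment objective, and the random embeddings are toy stand-ins for teacher and student networks:

```python
import numpy as np

def similarity_graph(emb):
    """Instance-instance cosine-similarity graph over a batch of embeddings."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return emb @ emb.T

def graph_alignment_loss(teacher_emb, student_emb):
    """Distill by pulling the student's instance graph toward the teacher's;
    mean squared difference of edge weights is an illustrative choice."""
    gt = similarity_graph(teacher_emb)
    gs = similarity_graph(student_emb)
    return np.mean((gt - gs) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 64))            # teacher embeddings for a batch
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
rotated_student = teacher @ Q                 # rotation preserves the graph
random_student = rng.normal(size=(8, 16))     # unrelated embedding space
loss_rotated = graph_alignment_loss(teacher, rotated_student)
loss_random = graph_alignment_loss(teacher, random_student)
```

Note that the graph loss is invariant to rotations of the student space (loss_rotated is numerically zero), which is what lets a student with a different architecture and dimensionality match the teacher's relational structure rather than its raw features.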
Bivariate logsymmetric models: distributional properties, parameter estimation and an application to fatigue data analysis ; The bivariate Gaussian distribution has been a key model for many developments in statistics. However, many realworld phenomena generate data that follow asymmetric distributions, and consequently the bivariate normal model is inappropriate in such situations. Bidimensional logsymmetric models have attractive properties and can be considered good alternatives in these cases. In this paper, we discuss bivariate logsymmetric distributions and their characterizations. We establish several distributional properties and obtain the maximum likelihood estimators of the model parameters. A Monte Carlo simulation study is performed to examine the performance of the developed parameter estimation method. Finally, a real data set is analyzed to illustrate the proposed model and the associated inferential method.
Distinguishing representational geometries with controversial stimuli: Bayesian experimental design and its application to face dissimilarity judgments ; Comparing representations of complex stimuli in neural network layers to human brain representations or behavioral judgments can guide model development. However, even qualitatively distinct neural network models often predict similar representational geometries of typical stimulus sets. We propose a Bayesian experimental design approach to synthesizing stimulus sets for efficiently adjudicating among representational models. We apply our method to discriminate among candidate neural network models of behavioral face dissimilarity judgments. Our results indicate that a neural network trained to invert a 3Dfacemodel graphics renderer is more humanaligned than the same architecture trained on identification, classification, or autoencoding. Our proposed stimulus synthesis objective is generally applicable to designing experiments to be analyzed by representational similarity analysis for model comparison.
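The logic of controversial stimuli can be sketched crudely: pick stimulus pairs on which candidate models disagree most about predicted dissimilarity, since agreement carries no adjudicating information. Here two random linear maps stand in for neural network models, and raw disagreement stands in for the paper's Bayesian expected-information objective; both are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two candidate "models" of the same stimuli: random linear maps standing
# in for neural network layers; each induces a representational geometry.
W1 = rng.normal(size=(4, 10))
W2 = rng.normal(size=(4, 10))

def dissimilarity(W, a, b):
    """A model's predicted dissimilarity: distance between embeddings."""
    return np.linalg.norm(W @ a - W @ b)

def controversiality(a, b):
    """Score a stimulus pair by how strongly the two models disagree about
    its dissimilarity (a crude surrogate for an expected-information gain)."""
    return abs(dissimilarity(W1, a, b) - dissimilarity(W2, a, b))

candidates = rng.normal(size=(200, 2, 10))        # candidate stimulus pairs
scores = np.array([controversiality(a, b) for a, b in candidates])
best_pair = candidates[scores.argmax()]           # most adjudicating pair
```

Presenting the highest-scoring pairs to observers would then discriminate between the models far more efficiently than typical stimuli, which is the point of the synthesis objective.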
Continuous diffusion for categorical data ; Diffusion models have quickly become the goto paradigm for generative modelling of perceptual signals such as images and sound through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusioninspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
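The key move, making discrete tokens amenable to a diffusion process that is continuous in both time and input space, can be illustrated in miniature: embed tokens as learnable continuous vectors, add Gaussian noise whose scale grows with time, and recover tokens by classification. The nearest-embedding "denoiser" and the linear noise schedule below are toy assumptions; CDCD trains a network with a score-interpolation objective rather than this argmin:

```python
import numpy as np

rng = np.random.default_rng(1)
V, D = 5, 8                        # vocabulary size, embedding dimension
emb = rng.normal(size=(V, D))      # learnable token embeddings

def diffuse(tokens, t):
    """Forward process sketch: embed tokens into continuous space and
    perturb with Gaussian noise whose scale grows with time t, so the
    process is continuous in time and in input space."""
    x0 = emb[tokens]
    return x0 + t * rng.normal(size=x0.shape)

def nearest_token(x):
    """Toy denoiser: classify each noisy vector to its closest embedding."""
    d = ((x[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

tokens = np.array([0, 3, 1, 4, 2])
recovered = nearest_token(diffuse(tokens, t=0.01))   # low noise: recoverable
```

At small t the categorical data survive the continuous corruption essentially intact, while at large t the embeddings blur together; iterative refinement in the continuous space interpolates between these regimes.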
Triadic Temporal Exponential Random Graph Models TTERGM ; Temporal exponential random graph models TERGM are powerful statistical models that can be used to infer the temporal pattern of edge formation and elimination in complex networks (e.g., social networks). TERGMs can also be used in a generative capacity to predict longitudinal time series data in these evolving graphs. However, parameter estimation within this framework fails to capture many realworld properties of social networks, including triadic relationships, small world characteristics, and social learning theories, which could be used to constrain the probabilistic estimation of dyadic covariates. Here, we propose triadic temporal exponential random graph models TTERGM to fill this void, incorporating these hierarchical network relationships within the graph model. We represent social network learning theory as an additional probability distribution that optimizes Markov chains in the graph vector space. The new parameters are then approximated via Monte Carlo maximum likelihood estimation. We show that our TTERGM model achieves improved fidelity and more accurate predictions compared to several benchmark methods on GitHub network data.
Linear Causal Disentanglement via Interventions ; Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with the partial order determined by a latent causal model. We corroborate our theoretical results with a method for causal disentanglement that accurately recovers a latent causal model.
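The setting, observed variables as a linear transformation of a linear latent causal model, and the role of interventions can be illustrated by simulation. The hard-intervention mechanics below (cutting a latent node's incoming edges and rescaling its noise) are one standard formalization chosen for illustration, not necessarily the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent linear SEM: z = B z + eps, with B strictly lower triangular.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
A = rng.normal(size=(6, 3))       # linear mixing to observed x = A z

def sample(n, intervene_on=None):
    """Draw x = A z with z from the SEM; a hard intervention on latent i
    cuts its incoming edges and rescales its noise (illustrative choice)."""
    Bi = B.copy()
    if intervene_on is not None:
        Bi[intervene_on, :] = 0.0            # remove parents of node i
    M = np.linalg.inv(np.eye(3) - Bi)        # z = (I - B)^(-1) eps
    eps = rng.normal(size=(n, 3))
    if intervene_on is not None:
        eps[:, intervene_on] *= 3.0          # shifted noise marks the intervention
    return eps @ M.T @ A.T

obs = sample(20000)
intv = sample(20000, intervene_on=1)
# Intervening on latent 1 visibly changes the observed covariance, which is
# the kind of signal that single-node interventions contribute toward
# identifying both the latent model and the mixing A.
gap = np.abs(np.cov(obs.T) - np.cov(intv.T)).max()
```

With an intervention on every latent node, the paper's generalized RQ argument shows these covariance contrasts pin the model down uniquely; drop one node's intervention and distinct models can reproduce all remaining contrasts.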
Modeling animal contests based on spatiotemporal dynamics ; We present a general theoretical model for the spatiotemporal dynamics of animal contests. Inspired by interactions between physical particles, the model is formulated in terms of effective interaction potentials, which map typical elements of contest behaviour into empirically verifiable rules of contestant motion. This allows us to simulate the observable dynamics of contests in various realistic scenarios, notably in dyadic contests over a localized resource. Assessment strategies previously formulated in gametheoretic models, as well as the effects of fighting costs, can be described as variations in our model's parameters. Furthermore, the trends of contest duration associated with these assessment strategies can be derived and understood within the model. Detailed description of the contestants' motion enables the exploration of spatiotemporal properties of asymmetric contests, such as the emergence of chase dynamics. Overall, our framework aims to bridge the growing gap between empirical capabilities and theory in this widespread aspect of animal behaviour.
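The particle-inspired formulation, contestants moving under effective interaction potentials, can be sketched with overdamped Langevin dynamics for a dyadic contest over a localized resource. The harmonic attraction to the resource and the soft-core pairwise repulsion below are generic illustrative choices; the paper's potentials encode specific contest behaviours and assessment strategies:

```python
import numpy as np

def simulate_contest(steps=4000, dt=0.01, seed=3):
    """Two contestants in the plane move down the gradient of effective
    potentials: attraction to a resource at the origin plus a short-range
    pairwise repulsion, with small stochastic forcing (overdamped
    Langevin dynamics). All potential forms are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=5.0, size=(2, 2))     # two agents, 2D positions
    k_res, k_rep, r0, noise = 1.0, 4.0, 1.0, 0.05
    for _ in range(steps):
        sep = pos[0] - pos[1]
        dist = np.linalg.norm(sep) + 1e-9
        # Repulsive force from a soft-core pair potential ~ exp(-dist/r0).
        f_pair = (k_rep / r0) * np.exp(-dist / r0) * sep / dist
        force = -k_res * pos                      # harmonic pull to resource
        force[0] += f_pair
        force[1] -= f_pair
        pos += force * dt + noise * np.sqrt(dt) * rng.normal(size=pos.shape)
    return pos

final = simulate_contest()
# Both contestants settle near the contested resource, while the pair
# repulsion keeps them from collapsing onto the same point.
```

Within such a framework, assessment strategies and fighting costs enter as changes to the potential parameters, and quantities like contest duration or chase dynamics can be read off the simulated trajectories.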