Revisiting Pretrained Language Models and their Evaluation for Arabic Natural Language Understanding ; There is a growing body of work in recent years to develop pretrained language models (PLMs) for the Arabic language. This work addresses two major problems in existing Arabic PLMs which constrain progress of the Arabic NLU and NLG fields. First, existing Arabic PLMs are not well-explored and their pretraining can be improved significantly using a more methodical approach. Second, there is a lack of systematic and reproducible evaluation of these models in the literature. In this work, we revisit both the pretraining and evaluation of Arabic PLMs. In terms of pretraining, we explore improving Arabic LMs from three perspectives: quality of the pretraining data, size of the model, and incorporating character-level information. As a result, we release three new Arabic BERT-style models (JABER, Char-JABER, and SABER) and two T5-style models (AT5S and AT5B). In terms of evaluation, we conduct a comprehensive empirical study to systematically evaluate the performance of existing state-of-the-art models on ALUE, a leaderboard-powered benchmark for Arabic NLU tasks, and on a subset of the ARGEN benchmark for Arabic NLG tasks. We show that our models significantly outperform existing Arabic PLMs and achieve new state-of-the-art performance on discriminative and generative Arabic NLU and NLG tasks. Our models and the source code to reproduce our results will be made available shortly.
Towards automatic detection of wildlife trade using machine vision models ; Unsustainable trade in wildlife is one of the major threats affecting the global biodiversity crisis. An important part of the trade now occurs on the internet, especially on digital marketplaces and social media. Automated methods to identify trade posts are needed as resources for conservation are limited. Here, we developed machine vision models based on Deep Neural Networks with the aim to automatically identify images of exotic pet animals for sale. A new training dataset representing exotic pet animals advertised for sale on the web was generated for this purpose. We trained 24 neural-net models spanning a combination of five different architectures, three methods of training and two types of datasets. Specifically, model generalisation improved after setting a portion of the training images to represent negative features. Models were evaluated on both within- and out-of-distribution data to test wider model applicability. The top performing models achieved an F-score of over 0.95 on within-distribution evaluation and between 0.75 to 0.87 on the two out-of-distribution datasets. Notably, feature visualisation indicated that models performed well in detecting the surrounding context (e.g., a cage in which an animal was located), therefore helping to automatically detect images of animals in non-natural environments. The proposed methods can help investigate the online wildlife trade, but can also be adapted to study other types of people-nature interactions from digital platforms. Future studies can use these findings to build robust machine learning models and new data collection pipelines for more taxonomic groups.
Factorized Structured Regression for Large-Scale Varying Coefficient Models ; Recommender Systems (RS) pervade many aspects of our everyday digital life. Proposed to work at scale, state-of-the-art RS allow the modeling of thousands of interactions and facilitate highly individualized recommendations. Conceptually, many RS can be viewed as instances of statistical regression models that incorporate complex feature effects and potentially non-Gaussian outcomes. Such structured regression models, including time-aware varying coefficient models, are, however, limited in their applicability to categorical effects and inclusion of a large number of interactions. Here, we propose Factorized Structured Regression (FaStR) for scalable varying coefficient models. FaStR overcomes limitations of general regression models for large-scale data by combining structured additive regression and factorization approaches in a neural network-based model implementation. This fusion provides a scalable framework for the estimation of statistical models in previously infeasible data settings. Empirical results confirm that the estimation of varying coefficients of our approach is on par with state-of-the-art regression techniques, while scaling notably better and also being competitive with other time-aware RS in terms of prediction performance. We illustrate FaStR's performance and interpretability on a large-scale behavioral study with smartphone user data.
End-to-End Learning of Hybrid Inverse Dynamics Models for Precise and Compliant Impedance Control ; It is well-known that inverse dynamics models can improve tracking performance in robot control. These models need to precisely capture the robot dynamics, which consist of well-understood components, e.g., rigid body dynamics, and effects that remain challenging to capture, e.g., stick-slip friction and mechanical flexibilities. Such effects exhibit hysteresis and partial observability, rendering them particularly challenging to model. Hence, hybrid models, which combine a physical prior with data-driven approaches, are especially well-suited in this setting. We present a novel hybrid model formulation that enables us to identify fully physically consistent inertial parameters of a rigid body dynamics model, which is paired with a recurrent neural network architecture, allowing us to capture unmodeled partially observable effects using the network memory. We compare our approach against state-of-the-art inverse dynamics models on a 7-degree-of-freedom manipulator. Using data sets obtained through an optimal experiment design approach, we study the accuracy of offline torque prediction and generalization capabilities of joint learning methods. In control experiments on the real system, we evaluate the model as a feedforward term for impedance control and show that the feedback gains can be drastically reduced to achieve a given tracking accuracy.
A hybrid-model approach for reducing the performance gap in building energy forecasting ; The performance gap between predicted and actual energy consumption in the building domain remains an unsolved problem in practice. The gap exists differently in both current mainstream methods: the first-principles model and the machine learning (ML) model. Inspired by the concept of time-series decomposition to identify different uncertainties, we proposed a hybrid-model approach that combines both methods to minimize this gap: 1) use the first-principles method as an encoding tool to convert the building static features and predictable patterns into time-series simulation results; 2) the ML method combines the simulation results as extra inputs with historical records simultaneously, trains the model to capture the implicit performance difference, and aligns to calibrate the output. To extend this approach in practice, a new concept in the modeling process, Level-of-Information (LOI), is introduced to leverage the balance between the investment in simulation modeling detail and the accuracy boost. The approach is tested over a three-year period, with hourly measured energy load from an operating commercial building in Shanghai. The results present a dominant accuracy enhancement: the hybrid model shows higher accuracy in prediction with better interpretability; more importantly, it releases practitioners from the modeling workload and computational resources needed to refine the simulation. In summary, the approach provides a nexus for integrating domain knowledge via building simulation with data-driven methods. This mindset applies to solving general engineering problems and leads to improved prediction accuracy. The results and source data are available at https://github.com/ResearchGroupG/PerformanceGapHybridApproach.
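For concreteness, a minimal sketch of the hybrid idea described above: the first-principles simulation output enters the ML model as an extra input feature next to calendar and weather variables, so the learner only has to capture the residual performance gap. The file name and column names here are hypothetical, not the paper's.

```python
# Minimal sketch of the hybrid approach, assuming a hypothetical hourly CSV with
# measured load ("load"), first-principles simulation output ("sim_load") and
# outdoor temperature ("outdoor_temp"). Not the paper's exact pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("building_hourly.csv", parse_dates=["timestamp"])  # hypothetical file
df["hour"] = df["timestamp"].dt.hour
df["dow"] = df["timestamp"].dt.dayofweek

# Simulation output is just another feature: the ML model learns the implicit
# performance difference instead of the full building dynamics.
features = ["sim_load", "hour", "dow", "outdoor_temp"]
train, test = df.iloc[:-24 * 90], df.iloc[-24 * 90:]   # hold out the last ~90 days

model = GradientBoostingRegressor()
model.fit(train[features], train["load"])
pred = model.predict(test[features])
print("MAE hybrid:", mean_absolute_error(test["load"], pred))
print("MAE simulation alone:", mean_absolute_error(test["load"], test["sim_load"]))
```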
Sequential Bayesian Neural Subnetwork Ensembles ; Deep neural network ensembles that appeal to model diversity have been used successfully to improve predictive performance and model robustness in several applications. Meanwhile, it has recently been shown that sparse subnetworks of dense models can match the performance of their dense counterparts and increase their robustness while effectively decreasing the model complexity. However, most ensembling techniques require multiple parallel and costly evaluations and have been proposed primarily with deterministic models, whereas sparsity induction has been mostly done through ad-hoc pruning. We propose sequential ensembling of dynamic Bayesian neural subnetworks that systematically reduce model complexity through sparsity-inducing priors and generate diverse ensembles in a single forward pass of the model. The ensembling strategy consists of an exploration phase that finds high-performing regions of the parameter space and multiple exploitation phases that effectively exploit the compactness of the sparse model to quickly converge to different minima in the energy landscape corresponding to high-performing subnetworks, yielding diverse ensembles. We empirically demonstrate that our proposed approach surpasses the baselines of the dense frequentist and Bayesian ensemble models in prediction accuracy, uncertainty estimation, and out-of-distribution (OoD) robustness on the CIFAR-10 and CIFAR-100 datasets, and their out-of-distribution variants CIFAR-10-C and CIFAR-100-C induced by corruptions. Furthermore, we found that our approach produced the most diverse ensembles compared to the approaches with a single forward pass, and even compared to the approaches with multiple forward passes in some cases.
Uncertainty Estimation in Machine Learning ; Most machine learning techniques are based upon statistical learning theory, often simplified for the sake of computing speed. This paper is focused on the uncertainty aspect of mathematical modeling in machine learning. Regression analysis is chosen to further investigate the evaluation aspect of uncertainty in model coefficients and, more importantly, in the output feature value predictions. A survey demonstrates major stages in the conventional least squares approach to the creation of the regression model, along with its uncertainty estimation. On the other hand, it is shown that in machine learning the model complexity and severe nonlinearity become serious obstacles to uncertainty evaluation. Furthermore, the process of machine model training demands high computing power, not available at the level of personal computers. This is why so-called pretrained models are widely used in such areas of machine learning as natural language processing. The latest example of a pretrained model is the Generative Pretrained Transformer 3 with hundreds of billions of parameters and a half-terabyte training dataset. Similarly, mathematical models built from real data are growing in complexity, which is accompanied by a growing amount of training data. However, when machine models and their predictions are used in decision-making, one needs to estimate uncertainty and evaluate accompanying risks. This problem could be resolved with non-parametric techniques at the expense of greater demand for computing power, which can be offered by available modern supercomputers, including those utilizing graphical and tensor processing units along with conventional central processors.
Spatiotemporal Downscaling Emulator for Regional Climate Models: a Comparative Study ; Regional Climate Models (RCM) describe the meso-scale global atmospheric and oceanic dynamics and serve as dynamical downscaling models. In other words, RCMs use atmospheric and oceanic climate output from General Circulation Models (GCM) to develop a higher resolution climate output. They are computationally demanding and, depending on the application, require several orders of magnitude more computer time than statistical climate downscaling. In this paper we describe how to use a spatiotemporal statistical model with varying coefficients (VC) as a downscaling emulator for an RCM. In order to estimate the proposed model, two options are compared: INLA and varycoef. We set up a simulation to compare the performance of both methods for building a statistical downscaling emulator for an RCM, and then show that the emulator works properly for NARCCAP data. The results show that the model is able to estimate non-stationary marginal effects, which means that the downscaling output can vary over space. Furthermore, the model has the flexibility to estimate the mean of any variable in space and time and has good prediction results. INLA was the fastest method in all cases and provided the most accurate approximation for estimating the different parameters of the model and the posterior distribution of the response variable.
Sequential Density Estimation via Nonlinear Continuous Weighted Finite Automata ; Weighted finite automata (WFAs) have been widely applied in many fields. One of the classic problems for WFAs is probability distribution estimation over sequences of discrete symbols. Although WFAs have been extended to deal with continuous input data, namely continuous WFAs (CWFAs), it is still unclear how to approximate density functions over sequences of continuous random variables using WFA-based models, due to the limitation on the expressiveness of the model as well as the tractability of approximating density functions via CWFAs. In this paper, we first propose a nonlinear extension to the CWFA model to improve its expressiveness, which we refer to as nonlinear continuous WFAs (NCWFAs). Then we leverage the so-called RNADE method, a well-known density estimator based on neural networks, and propose the RNADE-NCWFA model. The RNADE-NCWFA model computes a density function by design. We show that this model is strictly more expressive than the Gaussian HMM model, which CWFAs cannot approximate. Empirically, we conduct a synthetic experiment using Gaussian HMM generated data. We focus on evaluating the model's ability to estimate densities for sequences of varying lengths, longer than those seen in the training data. We observe that our model performs the best among the compared baseline methods.
Machine learning based surrogate modeling with SVD enabled training for nonlinear civil structures subject to dynamic loading ; The computationally expensive estimation of engineering demand parameters (EDPs) via finite element (FE) models, while considering earthquake and parameter uncertainty, limits the use of the Performance Based Earthquake Engineering framework. Attempts have been made to substitute FE models with surrogate models; however, most of these models are a function of building parameters only. This necessitates retraining for earthquakes not previously seen by the surrogate. In this paper, the authors propose a machine learning based surrogate model framework, which considers both these uncertainties in order to predict for unseen earthquakes. Accordingly, earthquakes are characterized by their projections on an orthonormal basis, computed using SVD of a representative ground motion suite. This enables one to generate large varieties of earthquakes by randomly sampling these weights and multiplying them with the basis. The weights along with the constitutive parameters serve as inputs to a machine learning model with EDPs as the desired output. Four competing machine learning models were tested and it was observed that a deep neural network (DNN) gave the most accurate prediction. The framework is validated by using it to successfully predict the peak response of one-story and three-story buildings represented using stick models, subjected to unseen far-field ground motions.
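A short sketch of the SVD step described above: stack a representative suite of ground motions into a matrix, take its SVD to obtain an orthonormal basis, and generate new records as random weight combinations of the basis vectors. The suite here is random data standing in for real records; suite size, record length and basis size are illustrative only.

```python
# Minimal sketch: orthonormal basis of a ground-motion suite via SVD, plus a
# synthetic record generated from randomly sampled weights (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
N, T, k = 60, 4000, 20                      # suite size, record length, basis size
suite = rng.standard_normal((N, T))         # placeholder for a real ground-motion suite

U, s, Vt = np.linalg.svd(suite, full_matrices=False)
basis = Vt[:k]                              # orthonormal basis, shape (k, T)

weights = suite @ basis.T                   # projection weights of each record, (N, k)
new_weights = rng.standard_normal(k) * weights.std(axis=0)
synthetic_record = new_weights @ basis      # a new earthquake-like time series, (T,)

# 'new_weights' (together with structural constitutive parameters) would be the
# inputs to the surrogate that predicts the EDPs.
```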
Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion ; Many backdoor removal techniques in machine learning models require clean in-distribution data, which may not always be available due to proprietary datasets. Model inversion techniques, often considered privacy threats, can reconstruct realistic training samples, potentially eliminating the need for in-distribution data. Prior attempts to combine backdoor removal and model inversion yielded limited results. Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples' properties, perceptual similarity, and the potential presence of backdoor triggers. We establish that relying solely on perceptual similarity is insufficient for robust defenses, and the stability of model predictions in response to input and parameter perturbations is also crucial. To tackle this, we introduce a novel bi-level optimization-based framework for model inversion, promoting stability and visual quality. Interestingly, we discover that reconstructed samples from a pretrained generator's latent space are backdoor-free, even when utilizing signals from a backdoored model. We provide a theoretical analysis to support this finding. Our evaluation demonstrates that our stabilized model inversion technique achieves state-of-the-art backdoor removal performance without clean in-distribution data, matching or surpassing performance using the same amount of clean samples.
Evaluating the blast-wave model as a description of 5 TeV p-Pb $p_t$ spectra ; The blast-wave (BW) spectrum model is interpreted to reveal relativistic motion (collective flow) of the hadron emission system relative to the center-of-momentum (CM) frame in high-energy A-B collisions. In essence, any spectrum deviation in the CM frame from a reference distribution (e.g., a Boltzmann distribution on transverse mass $m_t$) is interpreted to reveal a flowing particle source. The ALICE collaboration has applied the BW model to identified hadron (PID) spectra for four hadron species from 5 TeV p-Pb collisions. From the model fits, BW parameters $T_{kin}$ (freeze-out temperature) and $\langle \beta_t \rangle$ (transverse speed) are inferred that suggest strong radial expansion in more-central p-Pb collisions. Such results from the small p-Pb collision system are counterintuitive given that strong radial expansion should be driven by large density gradients. The present study is intended to address that problem. Several methods are employed to evaluate the quality of the BW model data description, including logarithmic derivatives and the Z-score statistic. The stability of the BW model definition across several applications to data is investigated. The BW model data description is compared to that of the two-component (soft + hard) model (TCM) that has been previously applied to the same p-Pb PID spectra. The general conclusion is that the BW model is falsified by p-Pb PID spectrum data according to standard statistical measures and that the fitted parameter values do not convey the intended meaning. Statistically acceptable data descriptions provided by the TCM indicate that other collision mechanisms (projectile-nucleon dissociation, dijet production) that are consistent with conventional QCD are more likely responsible for the observed spectrum characteristics.
Understanding Robustness Lottery: A Comparative Visual Analysis of Neural Network Pruning Approaches ; Deep learning approaches have provided state-of-the-art performance in many applications by relying on extremely large and heavily overparameterized neural networks. However, such networks have been shown to be very brittle, to not generalize well to new use cases, and are often difficult if not impossible to deploy on resource-limited platforms. Model pruning, i.e., reducing the size of the network, is a widely adopted strategy that can lead to more robust and generalizable networks, usually orders of magnitude smaller, with the same or even improved performance. While there exist many heuristics for model pruning, our understanding of the pruning process remains limited. Empirical studies show that some heuristics improve performance while others can make models more brittle or have other side effects. This work aims to shed light on how different pruning methods alter the network's internal feature representation, and the corresponding impact on model performance. To provide a meaningful comparison and characterization of model feature space, we use three geometric metrics that are decomposed from the commonly adopted classification loss. With these metrics, we design a visualization system to highlight the impact of pruning on model prediction as well as the latent feature embedding. The proposed tool provides an environment for exploring and studying differences among pruning methods and between pruned and original models. By leveraging our visualization, ML researchers can not only identify samples that are fragile to model pruning and data corruption but also obtain insights and explanations on how some pruned models achieve superior robustness performance.
Random Forest of Epidemiological Models for Influenza Forecasting ; Forecasting the hospitalizations caused by the Influenza virus is vital for public health planning so that hospitals can be better prepared for an influx of patients. Many forecasting methods have been used in real-time during the Influenza seasons and submitted to the CDC for public communication. The forecasting models range from mechanistic models and autoregression models to machine learning models. We hypothesize that we can improve forecasting by using multiple mechanistic models to produce potential trajectories and use machine learning to learn how to combine those trajectories into an improved forecast. We propose a Tree Ensemble model design that utilizes the individual predictors of our baseline model SIkJalpha to improve its performance. Each predictor is generated by changing a set of hyperparameters. We compare our prospective forecasts deployed for the FluSight challenge (2022) to all the other submitted approaches. Our approach is fully automated and does not require any manual tuning. We demonstrate that our Random Forest-based approach is able to improve upon the forecasts of the individual predictors in terms of mean absolute error, coverage, and weighted interval score. Our method outperforms all other models in terms of the mean absolute error and the weighted interval score based on the mean across all weekly submissions in the current season (2022). Explainability of the Random Forest through analysis of the trees enables us to gain insights into how it improves upon the individual predictors.
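A minimal sketch of the combination step described above: each column of the feature matrix is the forecast of one mechanistic predictor (one hyperparameter setting of the baseline model), and a random forest learns to map those columns to the observed target. The data here are synthetic placeholders, not FluSight data.

```python
# Sketch of combining mechanistic-model trajectories with a random forest;
# shapes and the synthetic data generator are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_weeks, n_predictors = 120, 10
truth = np.cumsum(rng.normal(0, 5, n_weeks)) + 100            # observed hospitalizations
mech_forecasts = truth[:, None] + rng.normal(0, 10, (n_weeks, n_predictors))

X_train, y_train = mech_forecasts[:100], truth[:100]
X_test, y_test = mech_forecasts[100:], truth[100:]

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
combined = rf.predict(X_test)
print("MAE combined forecast:", np.abs(combined - y_test).mean())
print("MAE best single predictor:",
      np.abs(X_test - y_test[:, None]).mean(axis=0).min())
```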
Online Modeling and Control of Soft Multi-fingered Grippers via Koopman Operator Theory ; Soft grippers are gaining momentum across applications due to their flexibility and dexterity. However, the infinite dimensionality and nonlinearity associated with soft robots challenge modeling and closed-loop control of soft grippers to perform grasping tasks. To solve this problem, data-driven methods have been proposed. Most data-driven methods rely on intensive model learning in simulation or offline, and as such it may be hard to generalize across different settings not explicitly trained upon and in physical robot testing where online control is required. In this paper, we propose an online modeling and control algorithm that utilizes Koopman operator theory to update an estimated model of the underlying dynamics at each time step in real-time. The learned and continuously updated models are then embedded into an online Model Predictive Control (MPC) structure and deployed onto soft multi-fingered robotic grippers. To evaluate the performance, the prediction accuracy of our approach is first compared against other model-extraction methods among different datasets. Next, the online modeling and control algorithm is tested experimentally with a soft 3-fingered gripper grasping objects of various shapes and weights unknown to the controller initially. Results indicate a high success ratio in grasping different objects using the proposed method. Sample trials can be viewed at https://youtu.be/i2hCMX7zSKQ.
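For orientation, a generic EDMD-style sketch consistent with the Koopman framing above: lift the state with a dictionary of observables, refit a linear operator on a window of recent (state, input, next-state) triples, and use it as the one-step predictor inside an MPC loop. The feature map, window size and toy data are assumptions, not the paper's implementation.

```python
# Generic EDMD-style Koopman model refit from recent data (illustrative only).
import numpy as np

def lift(x):
    # simple polynomial dictionary of observables; the real choice is task-specific
    return np.concatenate([x, x**2, [1.0]])

def fit_koopman(states, inputs, next_states):
    Z = np.array([np.concatenate([lift(x), u]) for x, u in zip(states, inputs)])
    Zp = np.array([lift(xp) for xp in next_states])
    # least-squares solve Zp ~= Z @ K; refit on the latest window at every step
    K, *_ = np.linalg.lstsq(Z, Zp, rcond=None)
    return K

def predict(K, x, u):
    return np.concatenate([lift(x), u]) @ K        # lifted next state

# toy usage with random data standing in for gripper sensor logs
rng = np.random.default_rng(0)
xs = [rng.standard_normal(3) for _ in range(50)]
us = [rng.standard_normal(2) for _ in range(50)]
xps = [0.9 * x + 0.1 * np.r_[u, 0.0] for x, u in zip(xs, us)]
K = fit_koopman(xs, us, xps)
print(predict(K, xs[0], us[0])[:3])                # predicted next raw state
```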
Optimized Views Photogrammetry: Precision Analysis and a Large-scale Case Study in Qingdao ; UAVs have become one of the widely used remote sensing platforms and have played a critical role in the construction of smart cities. However, due to the complex environment in urban scenes, secure and accurate data acquisition brings great challenges to 3D modeling and scene updating. Optimal trajectory planning of UAVs and accurate data collection of onboard cameras are non-trivial issues in urban modeling. This study presents the principle of optimized views photogrammetry and verifies its precision and potential in large-scale 3D modeling. Different from oblique photogrammetry, optimized views photogrammetry uses rough models to generate and optimize UAV trajectories, which is achieved through the consideration of model point reconstructability and view point redundancy. Based on the principle of optimized views photogrammetry, this study first conducts a precision analysis of 3D models by using UAV images of optimized views photogrammetry and then executes a large-scale case study in the urban region of Qingdao city, China, to verify its engineering potential. By using GCPs for image orientation precision analysis and TLS (terrestrial laser scanning) point clouds for model quality analysis, experimental results show that optimized views photogrammetry could construct stable image connection networks and could achieve comparable image orientation accuracy. Benefiting from the accurate image acquisition strategy, the quality of mesh models significantly improves, especially for urban areas with serious occlusions, in which 3 to 5 times higher accuracy has been achieved. Besides, the case study in Qingdao city verifies that optimized views photogrammetry can be a reliable and powerful solution for large-scale 3D modeling in complex urban scenes.
Estimation and inference for the Wasserstein distance between mixing measures in topic models ; The Wasserstein distance between mixing measures has come to occupy a central place in the statistical analysis of mixture models. This work proposes a new canonical interpretation of this distance and provides tools to perform inference on the Wasserstein distance between mixing measures in topic models. We consider the general setting of an identifiable mixture model consisting of mixtures of distributions from a set $\mathcal{A}$ equipped with an arbitrary metric $d$, and show that the Wasserstein distance between mixing measures is uniquely characterized as the most discriminative convex extension of the metric $d$ to the set of mixtures of elements of $\mathcal{A}$. The Wasserstein distance between mixing measures has been widely used in the study of such models, but without axiomatic justification. Our results establish this metric to be a canonical choice. Specializing our results to topic models, we consider estimation and inference of this distance. Though upper bounds for its estimation have been recently established elsewhere, we prove the first minimax lower bounds for the estimation of the Wasserstein distance in topic models. We also establish fully data-driven inferential tools for the Wasserstein distance in the topic model context. Our results apply to potentially sparse mixtures of high-dimensional discrete probability distributions. These results allow us to obtain the first asymptotically valid confidence intervals for the Wasserstein distance in topic models.
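For concreteness, the object of study above is the standard order-1 Wasserstein (optimal transport) distance between discrete mixing measures $P=\sum_k p_k\,\delta_{A_k}$ and $Q=\sum_l q_l\,\delta_{B_l}$ supported on $(\mathcal{A},d)$,
\[
W(P,Q) \;=\; \min_{\pi \ge 0} \Big\{ \sum_{k,l} \pi_{kl}\, d(A_k,B_l) \;:\; \textstyle\sum_l \pi_{kl} = p_k,\ \ \sum_k \pi_{kl} = q_l \Big\},
\]
i.e., the cheapest way to transport the mixture weights of $P$ onto those of $Q$ when moving mass from component $A_k$ to component $B_l$ costs $d(A_k,B_l)$. (This is the textbook definition restated for orientation; the paper's axiomatic characterization and inferential results are not reproduced here.)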
A local continuum model of cell-cell adhesion ; Cell-cell adhesion is one of the most fundamental mechanisms regulating collective cell migration during tissue development, homeostasis and repair, allowing cell populations to self-organize and eventually form and maintain complex tissue shapes. Cells interact with each other via the formation of protrusions or filopodia and they adhere to other cells through binding of cell surface proteins. The resulting adhesive forces are then related to cell size and shape and, often, continuum models represent them by nonlocal attractive interactions. In this paper, we present a new continuum model of cell-cell adhesion which can be derived from a general nonlocal model in the limit of short-range interactions. This new model is local, resembling a system of thin-film-type equations, with the various model parameters playing the role of surface tensions between different cell populations. Numerical simulations in one and two dimensions reveal that the local model maintains the diversity of cell sorting patterns observed both in experiments and in previously used nonlocal models. In addition, it also has the advantage of having explicit stationary solutions, which provides a direct link between the model parameters and the differential adhesion hypothesis.
Online Reflective Learning for Robust Medical Image Segmentation ; Deep segmentation models often face failure risks when the testing image presents unseen distributions. Improving model robustness against these risks is crucial for the large-scale clinical application of deep models. In this study, inspired by the human learning cycle, we propose a novel online reflective learning framework (RefSeg) to improve segmentation robustness. Based on the reflection-on-action conception, our RefSeg first drives the deep model to take action to obtain semantic segmentation. Then, RefSeg triggers the model to reflect on itself. Because making deep models realize their segmentation failures during testing is challenging, RefSeg synthesizes a realistic proxy image from the semantic mask to help deep models build intuitive and effective reflections. This proxy translates and emphasizes the segmentation flaws. By maximizing the structural similarity between the raw input and the proxy, the reflection-on-action loop is closed with segmentation robustness improved. RefSeg runs in the testing phase and is general for segmentation models. Extensive validation on three medical image segmentation tasks with a public cardiac MR dataset and two in-house large ultrasound datasets shows that our RefSeg remarkably improves model robustness and reports state-of-the-art performance over strong competitors.
Modeling Randomly Walking Volatility with Chained Gamma Distributions ; Volatility clustering is a common phenomenon in financial time series. Typically, linear models can be used to describe the temporal autocorrelation of the logarithmic variance of returns. Considering the difficulty in estimating this model, we construct a Dynamic Bayesian Network, which utilizes the conjugate prior relations of normal-gamma and gamma-gamma, so that its posterior form remains locally unchanged at each node. This makes it possible to find approximate solutions quickly using variational methods. Furthermore, we ensure that the volatility expressed by the model is an independent incremental process after inserting dummy gamma nodes between adjacent time steps. We have found that this model has two advantages: 1) it can be proved to express heavier tails than Gaussians, i.e., to have positive excess kurtosis, compared to popular linear models; 2) if variational inference (VI) is used for state estimation, it runs much faster than Monte Carlo (MC) methods, since the calculation of the posterior uses only basic arithmetic operations, and its convergence process is deterministic. We tested the model, named GamChain, using recent Crypto, Nasdaq, and Forex records of varying resolutions. The results show that: 1) in the same case of using MC, this model can achieve state estimation results comparable to those of the regular lognormal chain; 2) in the case of only using VI, this model can obtain accuracy that is slightly worse than MC, but still acceptable in practice; 3) using only VI, the running time of GamChain can, in the general case, be reduced to below 5% of that based on the lognormal chain via MC.
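One parameterization consistent with the construction sketched above (an illustrative reading; the exact GamChain parameterization may differ): with return $r_t$, precision $\lambda_t$ and a dummy gamma node $\eta_t$ between adjacent steps,
\[
r_t \mid \lambda_t \sim \mathcal{N}\!\big(0,\lambda_t^{-1}\big), \qquad
\lambda_t \mid \eta_t \sim \mathrm{Gamma}(\alpha,\eta_t), \qquad
\eta_t \mid \lambda_{t-1} \sim \mathrm{Gamma}(a,\,b\,\lambda_{t-1}),
\]
so the normal-gamma pair gives $\lambda_t \mid r_t,\eta_t \sim \mathrm{Gamma}\big(\alpha+\tfrac12,\ \eta_t+\tfrac{r_t^2}{2}\big)$ and the gamma-gamma pair gives $\eta_t \mid \lambda_t,\lambda_{t-1} \sim \mathrm{Gamma}\big(a+\alpha,\ b\lambda_{t-1}+\lambda_t\big)$; every local posterior stays in the gamma family, which is what keeps the variational updates down to basic arithmetic.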
Relating the fundamental frequency of speech with EEG using a dilated convolutional network ; To investigate how speech is processed in the brain, we can model the relation between features of a natural speech signal and the corresponding recorded electroencephalogram (EEG). Usually, linear models are used in regression tasks. Either EEG is predicted, or speech is reconstructed, and the correlation between predicted and actual signal is used to measure the brain's decoding ability. However, given the nonlinear nature of the brain, the modeling ability of linear models is limited. Recent studies introduced nonlinear models to relate the speech envelope to EEG. We set out to include other features of speech that are not coded in the envelope, notably the fundamental frequency of the voice (f0). F0 is a higher-frequency feature primarily coded at the brainstem to midbrain level. We present a dilated-convolutional model to provide evidence of neural tracking of the f0. We show that a combination of f0 and the speech envelope improves the performance of a state-of-the-art envelope-based model. This suggests the dilated-convolutional model can extract non-redundant information from both f0 and the envelope. We also show the ability of the dilated-convolutional model to generalize to subjects not included during training. This latter finding will accelerate f0-based hearing diagnosis.
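A minimal dilated-convolution sketch in the spirit of the model above: stack f0 and envelope as two input channels and map them to 64 EEG channels with exponentially increasing dilation. Layer sizes, channel counts and the direction of the mapping are illustrative assumptions, not the paper's architecture.

```python
# Illustrative dilated Conv1d stack relating speech features to EEG channels.
import torch
import torch.nn as nn

class DilatedSpeechToEEG(nn.Module):
    def __init__(self, in_ch=2, eeg_ch=64, hidden=16, n_layers=4, kernel=3):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(n_layers):
            d = 2 ** i                                   # dilation doubles per layer
            layers += [nn.Conv1d(ch, hidden, kernel, dilation=d,
                                 padding=d * (kernel - 1) // 2), nn.ReLU()]
            ch = hidden
        layers.append(nn.Conv1d(ch, eeg_ch, 1))          # project to EEG channels
        self.net = nn.Sequential(*layers)

    def forward(self, speech_feats):                     # [batch, 2, time]
        return self.net(speech_feats)                    # [batch, 64, time]

pred_eeg = DilatedSpeechToEEG()(torch.randn(4, 2, 640))
print(pred_eeg.shape)                                    # torch.Size([4, 64, 640])
```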
Simulations in a Digital Twin of an Electrical Machine ; Digital twins have become popular for their ability to monitor and optimize a process or a machine during its lifetime using simulations and sensor data. In this paper, we focus on the challenge of implementing accurate and real-time simulations for digital twins in the context of electrical machines. In general, this involves not only computational models for the electromagnetic aspects; mechanical and thermal effects also need to be taken into account. We address mathematical tools that can be employed to carry out the required simulations based on physical laws as well as surrogate or data-driven models. One of those tools is a model hierarchy ranging from very fine to very coarse models, as well as model reduction, which is required for obtaining real-time simulations. We discuss in detail the coupling of electromagnetic, mechanical, and thermal models of an electrical machine to obtain a simulation model which is able to describe the interaction of those different physical components. In this context, a very promising setting is provided by energy-based formulations within the port-Hamiltonian framework, which has received much attention in the past years, especially in the context of multiphysics modeling. We present such port-Hamiltonian formulations for the considered electromagnetic, mechanical, and thermal models as well as for the coupled overall system.
Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space ; This paper addresses an important problem of ranking pretrained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pretrained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods proposed several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations but neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called Self-challenging Fisher Discriminant Analysis (SFDA), which has many appealing benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pretrained models to differentiate on hard examples. Third, SFDA can easily select multiple pretrained models for the model ensemble. Extensive experiments on 33 pretrained models of 11 downstream tasks show that SFDA is efficient, effective, and robust when measuring the transferability of pretrained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average gain of 59.1% while bringing a 22.5x speedup in wall-clock time. The code will be available at https://github.com/TencentARC/SFDA.
Hidden Schema Networks ; Large, pretrained language models infer powerful representations that encode rich semantic and syntactic content, albeit implicitly. In this work we introduce a novel neural language model that enforces, via inductive biases, explicit relational structures which allow for compositionality onto the output representations of pretrained language models. Specifically, the model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph, and infers the posterior distribution of the latter. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to infer networks of symbols (schemata) from natural language datasets. Our experiments show that (i) the inferred symbols can be interpreted as encoding different aspects of language, as, e.g., topics or sentiments, and that (ii) GPT-like models can effectively be conditioned on symbolic representations. Finally, we explore training autoregressive, random walk reasoning models on schema networks inferred from commonsense knowledge databases, and using the sampled paths to enhance the performance of pretrained language models on commonsense If-Then reasoning tasks.
Analysis of social interactions in group-housed animals using dyadic linear models ; Understanding factors affecting social interactions among animals is important for applied animal behavior research. Thus, there is a need to elicit statistical models to analyze data collected from pairwise behavioral interactions. In this study, we propose treating social interaction data as dyadic observations and propose a statistical model for their analysis. We performed posterior predictive checks of the model through different validation strategies: stratified 5-fold random cross-validation, block-by-social-group cross-validation, and block-by-focal-animals validation. The proposed model was applied to a pig behavior dataset collected from 797 growing pigs freshly remixed into 59 social groups that resulted in 10,032 records of directional dyadic interactions. The response variable was the duration in seconds that each animal spent delivering attacks on another group mate. Generalized linear mixed models were fitted. Fixed effects included sex, individual weight, prior nursery mate experience, and prior littermate experience of the two pigs in the dyad. Random effects included aggression giver, aggression receiver, dyad, and social group. A Bayesian framework was utilized for parameter estimation and posterior predictive model checking. Prior nursery mate experience was the only significant fixed effect. In addition, a weak but significant correlation between the random giver effect and the random receiver effect was obtained when analyzing the attacking duration. The predictive performance of the model varied depending on the validation strategy, with substantially lower performance from the block-by-social-group strategy than other validation strategies. Collectively, this paper demonstrates a statistical model to analyze interactive animal behaviors, particularly dyadic interactions.
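A sketch of the linear predictor consistent with the effects listed above (the link function and response distribution used in the paper are not restated here): for the attack duration $y_{ijk}$ delivered by pig $i$ to pig $j$ in social group $k$,
\[
g\!\left(\mathbb{E}[y_{ijk}]\right) \;=\; \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} \;+\; g_i \;+\; r_j \;+\; d_{ij} \;+\; s_k,
\]
where $\mathbf{x}_{ij}$ collects the fixed effects (sex, individual weight, prior nursery mate and littermate experience of the two pigs), $g_i$ and $r_j$ are the random giver and receiver effects (allowed to be correlated), $d_{ij}$ is the dyad effect, and $s_k$ is the social group effect.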
Effectiveness of French Language Models on Abstractive Dialogue Summarization Task ; Pretrained language models have established the state-of-the-art on various natural language processing tasks, including dialogue summarization, which allows the reader to quickly access key information from long conversations in meetings, interviews or phone calls. However, such dialogues are still difficult to handle with current models because the spontaneity of the language involves expressions that are rarely present in the corpora used for pretraining the language models. Moreover, the vast majority of the work accomplished in this field has been focused on English. In this work, we present a study on the summarization of spontaneous oral dialogues in French using several language-specific pretrained models (BARThez and BelGPT-2) as well as multilingual pretrained models (mBART, mBARThez, and mT5). Experiments were performed on the DECODA call center dialogue corpus, whose task is to generate abstractive synopses from call center conversations between a caller and one or several agents depending on the situation. Results show that the BARThez models offer the best performance, far above the previous state-of-the-art on DECODA. We further discuss the limits of such pretrained models and the challenges that must be addressed for summarizing spontaneous dialogues.
The multicolour East model ; We consider the multicolour East model, a model of glass forming liquids closely related to the East model on $\mathbb{Z}^d$. The state space $(G\cup \{\star\})^{\mathbb{Z}^d}$ consists of $|G|\le 2d$ different vacancy types and the neutral state $\star$. To each $h\in G$ we associate unique facilitation mechanisms $c_x^h$, $x\in \mathbb{Z}^d$, that correspond to rotated versions of the East model constraints. If $c_x^h$ is satisfied, the state on $x$ can transition from $h$ to $\star$ with rate $p\in (0,1)$ or vice versa with rate $q_h\in (0,1)$, where generally $q_h\neq q_{h'}$ if $h'\neq h$. Notably, vertices in the state $h$ cannot transition directly to $h'\neq h$ and neighbouring $h'$-vacancies do not contribute in satisfying $c_x^h$. Thus, there is a novel blocking mechanism between vacancies of differing type. We find sufficient conditions on the model geometry to have a positive spectral gap and prove that with $|G|=2d$ the model is not ergodic. For $d=2$ we prove that the model with $|G|\le 3$ has positive spectral gap and we find sufficient conditions on the transition rates for the spectral gap to be given in the leading order by the spectral gap of the East model on $\mathbb{Z}^2$ with parameter $q_{\min}=\min_{h\in G}q_h$ in the limit $q_{\min}\rightarrow 0$. In particular, we prove this when there are $h\in G$ with $q_h\gg q_{\min}$, by explicitly constructing mechanisms on which the frequent vacancy types cooperate to facilitate the East movement of the least frequent vacancies.
The trade-offs of model size in large recommendation models: A 10000x compressed criteo-tb DLRM model (100 GB parameters to mere 10 MB) ; Embedding tables dominate industrial-scale recommendation model sizes, using up to terabytes of memory. A popular and the largest publicly available machine learning (MLPerf) benchmark on recommendation data is a Deep Learning Recommendation Model (DLRM) trained on a terabyte of click-through data. It contains 100 GB of embedding memory (25 billion parameters). DLRMs, due to their sheer size and the associated volume of data, face difficulty in training, deploying for inference, and memory bottlenecks due to large embedding tables. This paper analyzes and extensively evaluates a generic parameter sharing setup (PSS) for compressing DLRM models. We show theoretical upper bounds on the learnable memory requirements for achieving $(1 \pm \epsilon)$ approximations to the embedding table. Our bounds indicate that exponentially fewer parameters suffice for good accuracy. To this end, we demonstrate a PSS DLRM reaching 10000x compression on criteo-tb without losing quality. Such a compression, however, comes with a caveat: it requires 4.5 times more iterations to reach the same saturation quality. The paper argues that this trade-off needs more investigation, as it might be significantly favorable. Leveraging the small size of the compressed model, we show a 4.3x improvement in training latency, leading to similar overall training times. Thus, in the trade-off between the system advantage of a small DLRM model vs. slower convergence, we show that the scales are tipped towards having a smaller DLRM model, leading to faster inference, easier deployment, and similar training times.
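An illustrative parameter-sharing embedding in the spirit of the PSS setup above: every coordinate of a huge virtual embedding table is read from a much smaller shared parameter pool through an on-the-fly hash, so memory scales with the pool rather than with the number of ids. This is a generic sketch, not the paper's exact construction or hash family.

```python
# Generic shared-parameter ("hashed") embedding: memory = pool_size floats,
# regardless of how many ids the virtual table covers.
import torch
import torch.nn as nn

class SharedParamEmbedding(nn.Module):
    def __init__(self, dim, pool_size, seed=0):
        super().__init__()
        self.dim, self.pool_size = dim, pool_size
        self.pool = nn.Parameter(torch.randn(pool_size) * 0.01)   # shared memory
        g = torch.Generator().manual_seed(seed)
        # per-coordinate hash parameters; the hash is computed on the fly, so
        # nothing is stored per id
        self.register_buffer("a", torch.randint(1, 2**31 - 1, (dim,), generator=g))
        self.register_buffer("b", torch.randint(0, 2**31 - 1, (dim,), generator=g))

    def forward(self, ids):                                       # LongTensor [...]
        h = (ids.unsqueeze(-1) * self.a + self.b) % (2**31 - 1) % self.pool_size
        return self.pool[h]                                       # [..., dim]

emb = SharedParamEmbedding(dim=16, pool_size=500_000)             # ~0.5M floats total
vecs = emb(torch.tensor([3, 17, 123_456_789]))
print(vecs.shape)                                                 # torch.Size([3, 16])
```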
Conceptual Modeling of Objects ; In this paper, we concentrate on object-related analysis in the field of general ontology of reality as related to software engineering (e.g., UML classes). Such a venture is similar to many studies in which researchers have enhanced modeling through ontological analysis of the underlying paradigm of UML models. We attempt to develop a conceptual model that consists of a foundation of things that is supplemented with a second level of designated objects. According to some researchers, the problem of the difference between things and objects is one of the most decisive issues for the conception of reality. In software engineering, objects serve two purposes: they promote understanding of the real world and provide a practical basis for computer implementation. The notion of object plays a central role in the object-oriented approach, in which other notions are viewed by decomposing them into objects and their relationships. This paper contributes to the establishment of a broader understanding of the notion of object in conceptual modeling based on things that are simultaneously machines. In this study, we explored the underlying hypothesis of conceptual models (e.g., UML) to enhance their ontological analysis by using the thing-machine (TM) model, which presents the domain as thimacs. Following the philosophical distinction between things and objects, we can specify modeling at two levels: the thinging stage and the objectification stage. Objects are thimacs that control the handleability of their subparts when interacting with the outside of the object, analogous to the body parts holding together in an assemblage when interacting with the outside. The results promise a more refined modeling process to develop a high-level description of the involved domain.
Compound Density Networks for Risk Prediction using Electronic Health Records ; Electronic Health Records (EHRs) exhibit a high amount of missing data due to variations of patient conditions and treatment needs. Imputation of missing values has been considered an effective approach to deal with this challenge. Existing work separates the imputation method and the prediction model as two independent parts of an EHR-based machine learning system. We propose an integrated end-to-end approach by utilizing a Compound Density Network (CDNet) that allows the imputation method and prediction model to be tuned together within a single framework. CDNet consists of a Gated Recurrent Unit (GRU), a Mixture Density Network (MDN), and a Regularized Attention Network (RAN). The GRU is used as a latent variable model to model EHR data. The MDN is designed to sample latent variables generated by the GRU. The RAN serves as a regularizer for less reliable imputed values. The architecture of CDNet enables the GRU and MDN to iteratively leverage the output of each other to impute missing values, leading to a more accurate and robust prediction. We validate CDNet on the mortality prediction task on the MIMIC-III dataset. Our model outperforms state-of-the-art models by significant margins. We also empirically show that regularizing imputed values is a key factor for superior prediction performance. Analysis of prediction uncertainty shows that our model can capture both aleatoric and epistemic uncertainties, which offers model users a better understanding of the model results.
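A compact sketch of the GRU + mixture-density pairing described above, to make the shapes concrete; the RAN regularizer, the imputation loop and all layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# GRU encoder followed by a mixture-density head (illustrative shapes only).
import torch
import torch.nn as nn

class GRUMDN(nn.Module):
    def __init__(self, n_feats=32, hidden=64, n_components=5):
        super().__init__()
        self.gru = nn.GRU(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_components * (2 * n_feats + 1))
        self.k, self.d = n_components, n_feats

    def forward(self, x):                               # x: [batch, time, n_feats]
        h, _ = self.gru(x)
        p = self.head(h)                                # mixture parameters per step
        logits, mu, log_sigma = torch.split(
            p, [self.k, self.k * self.d, self.k * self.d], dim=-1)
        mu = mu.view(*x.shape[:2], self.k, self.d)
        log_sigma = log_sigma.view(*x.shape[:2], self.k, self.d)
        return logits, mu, log_sigma                    # sample imputations from the mixture

model = GRUMDN()
logits, mu, log_sigma = model(torch.randn(8, 24, 32))
print(logits.shape, mu.shape)                           # [8, 24, 5] and [8, 24, 5, 32]
```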
Dark Matter and Neutrino Masses in a Portalino-like Model ; We explore a Portalino-like model of dark matter and neutrino masses in which right-handed neutrino fields connect gauge neutral operators from the Standard Model and Hidden Sector. Neutrino masses are generated via a seesaw-like mechanism that can explain the light active neutrino masses. The model includes a "Portalino" state that connects the two sectors via the neutrino portal. Dark Matter in this model consists of a hidden sector Dirac fermion that dominantly freezes out via resonant annihilations into other hidden sector states, which ultimately results in a population of Portalinos. Due to small mixing in the extended neutrino sector these Portalinos tend to be cosmologically long lived, decaying into Standard Model particles, leading to constraints on the model from Big Bang Nucleosynthesis and measurements of the Cosmic Microwave Background radiation. Combining these limits with direct constraints on the size of the Portalino-neutrino mixing and the assumptions of the model, the viable mass ranges for the Portalino states are found to be $0.02\ \mathrm{eV} \lesssim m_n \lesssim 6.4\ \mathrm{eV}$ or $489\ \mathrm{MeV} \lesssim m_n \lesssim \mathrm{TeV}$. Indirect dark matter signals in the form of highly boosted, monoenergetic Portalinos produced in Dark Matter annihilations provide a target for neutrino telescopes.
Cohort comfort models: Using occupants' similarity to predict personal thermal preference with less data ; We introduce Cohort Comfort Models, a new framework for predicting how new occupants would perceive their thermal environment. Cohort Comfort Models leverage historical data collected from a sample population, who have some underlying preference similarity, to predict the thermal preference responses of new occupants. Our framework is capable of exploiting available background information such as physical characteristics and one-time onboarding surveys (satisfaction with life scale, highly sensitive person scale, the Big Five personality traits) from the new occupant, as well as physiological and environmental sensor measurements paired with thermal preference responses. We implemented our framework on two publicly available datasets containing longitudinal data from 55 people, comprising more than 6,000 individual thermal comfort surveys. We observed that a Cohort Comfort Model that uses background information but no historical data provided very little change in thermal preference prediction performance. On the other hand, for half and one third of each dataset's occupant population, Cohort Comfort Models using less historical data from target occupants increased their thermal preference prediction by 8% and 5% on average, and up to 36% and 46% for some occupants, when compared to general-purpose models trained on the whole population of occupants. The framework is presented in a data- and site-agnostic manner, with its different components easily tailored to the data availability of the occupants and the buildings. Cohort Comfort Models can be an important step towards personalization without the need of developing a personalized model for each new occupant.
Robust Scenario Interpretation from Multi-model Prediction Efforts ; Multi-model prediction efforts in infectious disease modeling and climate modeling involve multiple teams independently producing projections under various scenarios. Often these scenarios are defined by the presence or absence of a decision in the future, e.g., no vaccinations (scenario A) vs. vaccinations available in the future (scenario B). The models submit probabilistic projections for each of the scenarios. Obtaining a confidence interval on the impact of the decision (e.g., the number of deaths averted) is important for decision making. However, obtaining tight bounds only from the probabilistic projections for the individual scenarios is difficult, as the joint probability is not known. Further, the models may not be able to generate the joint probability distribution due to various reasons, including the need to rewrite simulations, and storage and transfer requirements. Without asking the submitting models for additional work, we aim to estimate a non-trivial bound on the outcomes due to the decision variable. We first prove, under a key assumption, that an $\alpha$-confidence interval on the difference of scenario predictions can be obtained given only the quantiles of the predictions. Then we show how to estimate a confidence interval after relaxing that assumption. We use our approach to estimate confidence intervals on the reduction in cases, deaths, and hospitalizations due to vaccinations based on model submissions to the US Scenario Modeling Hub.
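To make the quantile-only setting concrete, here is a sketch of a bound on "scenario A minus scenario B" computed purely from submitted quantiles under an explicitly assumed comonotonicity condition (i.e., the two scenarios' outcomes move together across quantile levels). This stand-in assumption is mine for illustration; the paper works under its own key assumption and a relaxation of it, which are not reproduced here. The numbers are made up.

```python
# Quantile-only sketch of an interval on the scenario difference, assuming the
# matched-quantile (comonotonic) coupling; illustrative values, not real data.
import numpy as np

alpha = 0.9
qs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])        # submitted quantile levels
deaths_A = np.array([900, 1100, 1300, 1600, 2100])   # projections without vaccination
deaths_B = np.array([400,  600,  750,  950, 1300])   # projections with vaccination

diff = deaths_A - deaths_B                            # deaths averted at matched quantiles
lo_q, hi_q = (1 - alpha) / 2, 1 - (1 - alpha) / 2
lower = np.interp(lo_q, qs, diff)
upper = np.interp(hi_q, qs, diff)
print(f"{alpha:.0%} interval on deaths averted: [{lower:.0f}, {upper:.0f}]")
```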
Mappings for Marginal Probabilities with Applications to Models in Statistical Physics ; We present local mappings that relate the marginal probabilities of a global probability mass function represented by its primal normal factor graph to the corresponding marginal probabilities in its dual normal factor graph. The mapping is based on the Fourier transform of the local factors of the models. Details of the mapping are provided for the Ising model, where it is proved that the local extrema of the fixed points are attained at the phase transition of the two-dimensional nearest-neighbor Ising model. The results are further extended to the Potts model, to the clock model, and to Gaussian Markov random fields. By employing the mapping, we can transform simultaneously all the estimated marginal probabilities from the dual domain to the primal domain and vice versa, which is advantageous if estimating the marginals can be carried out more efficiently in the dual domain. An example of particular significance is the ferromagnetic Ising model in a positive external magnetic field. For this model, there exists a rapidly mixing Markov chain, called the subgraphs-world process, to generate configurations in the dual normal factor graph of the model. Our numerical experiments illustrate that the proposed procedure can provide more accurate estimates of marginal probabilities of a global probability mass function in various settings.
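As a concrete instance of the local Fourier transform the mapping is built on, the edge factor of the Ising model admits the standard two-term character expansion
\[
e^{J x_i x_j} \;=\; \cosh(J)\,\bigl(1 + \tanh(J)\, x_i x_j\bigr), \qquad x_i, x_j \in \{-1,+1\},
\]
and collecting the two terms of this expansion edge by edge is what produces the dual (high-temperature) representation on which, e.g., the subgraphs-world process operates. This identity is textbook material given here for orientation; how the paper organizes the dual marginals and maps them back to primal marginals is not reproduced.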
CodeBERT-nt: code naturalness via CodeBERT ; Much of software-engineering research relies on the naturalness of code, the fact that code, in small code snippets, is repetitive and can be predicted using statistical language models like n-gram. Although powerful, training such models on a large code corpus is tedious, time-consuming and sensitive to the code patterns and practices encountered during training. Consequently, these models are often trained on a small corpus and estimate a language naturalness that is relative to a specific style of programming or type of project. To overcome these issues, we propose using pretrained language models to infer code naturalness. Pretrained models are often built on big data, are easy to use in an out-of-the-box way, and include powerful learning association mechanisms. Our key idea is to quantify code naturalness through its predictability, by using state-of-the-art generative pretrained language models. To this end, we infer naturalness by masking (omitting) code tokens, one at a time, of code sequences, and checking the models' ability to predict them. We evaluate three different predictability metrics: (a) measuring the number of exact matches of the predictions, (b) computing the embedding similarity between the original and predicted code, i.e., similarity at the vector-space level, and (c) computing the confidence of the model when doing the token completion task, irrespective of the outcome. We implement this workflow, named CodeBERT-nt, and evaluate its capability to prioritize buggy lines over non-buggy ones when ranking code based on its naturalness. Our results, on 2,510 buggy versions of 40 projects from the SmartShark dataset, show that CodeBERT-nt outperforms both random-uniform and complexity-based ranking techniques, and yields comparable, and slightly better, results than the n-gram models.
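A minimal sketch of the masking-based predictability measurement described above: mask one token at a time and record whether the model recovers it (metric (a)). It assumes the publicly available microsoft/codebert-base-mlm checkpoint and is not the paper's exact tooling.

```python
# Mask tokens one at a time and measure the exact-match recovery rate.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "microsoft/codebert-base-mlm"          # assumed public masked-LM checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

line = "if (x == null) return defaultValue;"
ids = tok(line, return_tensors="pt")["input_ids"][0]

hits = 0
positions = range(1, len(ids) - 1)            # skip the special begin/end tokens
for pos in positions:
    masked = ids.clone()
    masked[pos] = tok.mask_token_id
    with torch.no_grad():
        logits = model(masked.unsqueeze(0)).logits[0, pos]
    hits += int(logits.argmax().item() == ids[pos].item())

print("naturalness (exact-match rate):", hits / len(positions))
```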
Z-Code++: A Pretrained Language Model Optimized for Abstractive Summarization ; This paper presents Z-Code++, a new pretrained language model optimized for abstractive text summarization. The model extends the state-of-the-art encoder-decoder model using three techniques. First, we use a two-phase pretraining process to improve the model's performance on low-resource summarization tasks. The model is first pretrained using text corpora for language understanding, and then is continually pretrained on summarization corpora for grounded text generation. Second, we replace self-attention layers in the encoder with disentangled attention layers, where each word is represented using two vectors that encode its content and position, respectively. Third, we use fusion-in-encoder, a simple yet effective method of encoding long sequences in a hierarchical manner. Z-Code++ creates a new state of the art on 9 out of 13 text summarization tasks across 5 languages. Our model is parameter-efficient in that it outperforms the 600x larger PaLM-540B on XSum, and the finetuned 200x larger GPT3-175B on SAMSum. In zero-shot and few-shot settings, our model substantially outperforms the competing models.
Latent Variable Models in the Era of Industrial Big Data: Extension and Beyond ; A rich supply of data and innovative algorithms have made data-driven modeling a popular technique in modern industry. Among various data-driven methods, latent variable models (LVMs) and their counterparts account for a major share and play a vital role in many industrial modeling areas. LVMs can be generally divided into statistical learning-based classic LVMs and neural network-based deep LVMs (DLVMs). We first discuss the definitions, theories and applications of classic LVMs in detail, which serves as both a comprehensive tutorial and a brief application survey on classic LVMs. Then we present a thorough introduction to current mainstream DLVMs with emphasis on their theories and model architectures, and soon afterwards provide a detailed survey on industrial applications of DLVMs. The aforementioned two types of LVMs have obvious advantages and disadvantages. Specifically, classic LVMs have concise principles and good interpretability, but their model capacity cannot address complicated tasks. Neural network-based DLVMs have sufficient model capacity to achieve satisfactory performance in complex scenarios, but this comes at sacrifices in model interpretability and efficiency. Aiming at combining the virtues and mitigating the drawbacks of these two types of LVMs, as well as exploring non-neural-network manners to build deep models, we propose a novel concept called lightweight deep LVM (LDLVM). After proposing this new idea, the article first elaborates the motivation and connotation of LDLVM, then provides two novel LDLVMs, along with thorough descriptions of their principles, architectures and merits. Finally, outlooks and opportunities are discussed, including important open questions and possible research directions.
Percolation in binary mixtures of linkers and particles: chaining vs branching ; Equilibrium gels of colloidal particles can be realized through the introduction of a second species, a linker that mediates the bonds between the colloids. A gel-forming binary mixture whose linkers can self-assemble into linear chains while still promoting the aggregation of particles is considered in this work. The particles are patchy particles with f_C patches of type C and the linkers are patchy particles with 2 patches of type A and f_B patches of type B. The bonds between patches of type A (AA bonds) promote the formation of linear chains of linkers. Two different ways (model A and model B) of bonding the linkers to the particles, or inducing branching, are studied. In model A, there is a competition between chaining and branching, since the bonding between linkers and particles is done through AC bonds only. In model B, linkers aggregate to particles through BC bonds only, making chaining and branching independent. The percolation behaviour of these two models is studied in detail, employing a generalized Flory-Stockmayer theory and Monte Carlo simulations. The self-assembly of linkers into chains reduces the fraction of particles needed for percolation to occur (models A and B) and induces percolation when the fraction of particles is high (model B). Percolation by heating and percolation loops in temperature-composition diagrams are obtained when the formation of chains is energetically favourable, by increasing the entropic gain of branching (model A). Chaining and branching are found to follow a model-dependent relation at percolation, which shows that, for the same composition, longer chains require less branching for percolation to occur.
Scalar Weak Gravity Conjecture in Super Yang-Mills Inflationary Model ; In this article, we want to check four inflation models, namely composite NJL inflation (NJLI), Glueball inflation (GI), super Yang-Mills inflation (SYMI), and Orientifold inflation (OI), against two conjectures of the swampland program: the scalar weak gravity conjecture (SWGC) and the strong scalar weak gravity conjecture (SSWGC), since all these models violate the dS swampland conjecture (DSC) but are compatible with the further refining de Sitter swampland conjecture (FRDSSC) through manual adjustment of the free parameters of the mentioned conjecture. We want to study the simultaneous compatibility of each model with these two new conjectures. Despite being consistent with the FRDSSC, we find that all models are not compatible with the other conjectures of the swampland program in all regions, and these conjectures are only satisfied in a specific region. Also, due to the presence of the constant parameter phi_0 in the higher-order derivatives, the SYMI and OI among all the models are more compatible with all conjectures of the swampland program. These models can provide a more significant amount of satisfaction of all of them. They can be suitable and accurate inflation models for a more profound examination of universe developments. We determined a particular region where these models are compatible with the FRDSSC, SWGC, and SSWGC simultaneously.
STDEN: Towards Physics-Guided Neural Networks for Traffic Flow Prediction ; High-performance traffic flow prediction model designing, a core technology of Intelligent Transportation Systems, is a long-standing but still challenging task for industrial and academic communities. The lack of integration between physical principles and data-driven models is an important reason limiting the development of this field. In the literature, physics-based methods can usually provide a clear interpretation of the dynamic process of traffic flow systems but have limited accuracy, while data-driven methods, especially deep learning with black-box structures, can achieve improved performance but cannot be fully trusted due to the lack of a reasonable physical basis. To bridge the gap between purely data-driven and physics-driven approaches, we propose a physics-guided deep learning model named Spatio-Temporal Differential Equation Network (STDEN), which casts the physical mechanism of traffic flow dynamics into a deep neural network framework. Specifically, we assume the traffic flow on road networks is driven by a latent potential energy field (like water flows are driven by the gravity field), and model the spatio-temporal dynamic process of the potential energy field as a differential equation network. STDEN absorbs both the performance advantage of data-driven models and the interpretability of physics-based models, and is thus named a physics-guided prediction model. Experiments on three real-world traffic datasets in Beijing show that our model outperforms state-of-the-art baselines by a significant margin. A case study further verifies that STDEN can capture the mechanism of urban traffic and generate accurate predictions with physical meaning. The proposed framework of differential equation network modeling may also cast light on other similar applications.
An Interpretable and Efficient Infinite-Order Vector Autoregressive Model for High-Dimensional Time Series ; As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which, with only a little sacrifice of generality, inherits the essential temporal patterns of the VARMA model but avoids all of the above drawbacks. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved, while no loss of temporal information is incurred by the imposed sparsity. We introduce an l1-regularized estimator for the proposed model and derive the corresponding non-asymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merit of the proposed approach is supported by simulation studies and a real-world macroeconomic data analysis.
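For intuition only, the sketch below shows the generic idea of l1-regularized estimation of VAR transition matrices on a lagged design matrix. It is a finite-order approximation fitted with an off-the-shelf Lasso, not the paper's infinite-order model or its block coordinate descent algorithm; the lag order and penalty level are arbitrary choices.

```python
# Illustrative l1-regularized VAR fit (not the paper's estimator).
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_var(y: np.ndarray, p: int, alpha: float = 0.1):
    """y: (T, d) multivariate series; p: number of lags used as regressors."""
    T, d = y.shape
    X = np.hstack([y[p - k - 1:T - k - 1] for k in range(p)])  # lagged predictors
    Y = y[p:]
    model = Lasso(alpha=alpha, fit_intercept=False).fit(X, Y)
    return model.coef_  # shape (d, d * p): stacked, sparse transition matrices

rng = np.random.default_rng(0)
series = rng.standard_normal((500, 5)).cumsum(axis=0)   # stand-in data
coefs = fit_sparse_var(series, p=3)
print(coefs.shape, float((np.abs(coefs) > 1e-8).mean()))  # dimensions and density
```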
Learning Differential Operators for Interpretable Time Series Modeling ; Modeling sequential patterns from data is at the core of various time series forecasting tasks. Deep learning models have greatly outperformed many traditional models, but these black-box models generally lack explainability in prediction and decision making. To reveal the underlying trend with understandable mathematical expressions, scientists and economists tend to use partial differential equations (PDEs) to explain the highly nonlinear dynamics of sequential patterns. However, this usually requires domain expert knowledge and a series of simplified assumptions, which is not always practical and can deviate from the ever-changing world. Is it possible to learn the differential relations from data dynamically to explain the time-evolving dynamics? In this work, we propose a learning framework that can automatically obtain interpretable PDE models from sequential data. In particular, this framework is comprised of learnable differential blocks, named P-blocks, which are proven to be able to approximate any time-evolving complex continuous functions in theory. Moreover, to capture the dynamics shift, this framework introduces a meta-learning controller to dynamically optimize the hyperparameters of a hybrid PDE model. Extensive experiments on time series forecasting of financial, engineering, and health data show that our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models. From empirical studies, we find that learning a few differential operators may capture the major trend of sequential dynamics without massive computational complexity.
Modelling Power Consumptions for Multirotor UAVs ; Unmanned aerial vehicles (UAVs) have various advantages, but their practical applications are influenced by their limited energy. Therefore, it is important to manage their power consumption, and it is also important to establish corresponding power consumption models. However, most existing works either establish theoretical power consumption models for fixed-wing UAVs and single-rotor UAVs, or provide heuristic power consumption models for multirotor UAVs without rigorous mathematical derivations. This paper aims to establish theoretical power consumption models for multirotor UAVs. To be specific, the closed-form power consumption models for a multirotor UAV in three flight statuses, i.e., forward flight, vertical ascent and vertical descent, are derived by leveraging the relationship between single-rotor UAVs and multirotor UAVs in terms of power consumption. On this basis, a generic flight power consumption model for the UAV in a three-dimensional (3D) scenario is obtained. Extensive experiments are conducted using a DJI M210 and a mobile app made with the DJI Mobile SDK in real scenarios, and they confirm the correctness and effectiveness of these models; in addition, simulations are performed to further investigate the effect of the number of rotors on the power consumption of the UAV. The proposed power consumption models not only reveal how the power consumption of multirotor UAVs is affected by various factors, but also pave the way for introducing other novel applications.
An Investigation of Smart Contract for Collaborative Machine Learning Model Training ; Machine learning (ML) has penetrated various fields in the era of big data. The advantage of collaborative machine learning (CML) over most conventional ML lies in the joint effort of decentralized nodes or agents, which results in better model performance and generalization. As the training of ML models requires a massive amount of good-quality data, it is necessary to eliminate concerns about data privacy and to ensure high-quality data. To solve this problem, we cast our eyes on the integration of CML and smart contracts. Based on blockchain, smart contracts enable the automatic execution of data preserving and validation, as well as the continuity of CML model training. In our simulation experiments, we define incentive mechanisms on the smart contract, investigate important factors such as the number of features in the dataset (numwords), the size of the training data, the cost for the data holders to submit data, etc., and conclude how these factors impact the performance metrics of the model: the accuracy of the trained model, the gap between the accuracies of the model before and after simulation, and the time to use up the balance of the bad agent. For instance, we observe from the experiment results that an increase in the value of numwords leads to higher model accuracy and eliminates the negative influence of malicious agents in a shorter time. Statistical analyses show that with the help of smart contracts, the influence of invalid data is efficiently diminished and model robustness is maintained. We also discuss the gaps in existing research and put forward possible future directions for further work.
Sparse deep neural networks for modeling aluminum electrolysis dynamics ; Deep neural networks have become very popular in modeling complex nonlinear processes due to their extraordinary ability to fit arbitrary nonlinear functions from data with minimal expert intervention. However, they are almost always overparameterized and challenging to interpret due to their internal complexity. Furthermore, the optimization process to find the learned model parameters can be unstable due to the process getting stuck in local minima. In this work, we demonstrate the value of sparse regularization techniques to significantly reduce model complexity. We demonstrate this for the case of an aluminium extraction process, which is a highly nonlinear system with many interrelated subprocesses. We trained a densely connected deep neural network to model the process and then compared the effects of sparsity-promoting l1 regularization on generalizability, interpretability, and training stability. We found that the regularization significantly reduces model complexity compared to a corresponding dense neural network. We argue that this makes the model more interpretable, and show that training an ensemble of sparse neural networks with different parameter initializations often converges to similar model structures with similar learned input features. Furthermore, the empirical study shows that the resulting sparse models generalize better from small training sets than their dense counterparts.
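A minimal sketch of the general technique referenced above, adding a sparsity-promoting l1 penalty to the training loss of a dense network. The architecture, data, and penalty weight are placeholders; the paper's aluminium-electrolysis model and dataset are not reproduced here.

```python
# Sketch: dense network trained with an l1 penalty to promote sparse weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_lambda = 1e-4                                   # assumed value; problem-dependent

x, y = torch.randn(256, 8), torch.randn(256, 1)    # stand-in training batch
for step in range(200):
    optimizer.zero_grad()
    mse = nn.functional.mse_loss(model(x), y)
    l1 = sum(p.abs().sum() for p in model.parameters())  # penalty on all parameters
    (mse + l1_lambda * l1).backward()
    optimizer.step()

# Near-zero parameters can then be pruned to expose the learned sparse structure.
params = list(model.parameters())
sparsity = sum((p.abs() < 1e-3).float().mean().item() for p in params) / len(params)
print(f"fraction of near-zero parameters: {sparsity:.2f}")
```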
RMExplorer: A Visual Analytics Approach to Explore the Performance and the Fairness of Disease Risk Models on Population Subgroups ; Disease risk models can identify high-risk patients and help clinicians provide more personalized care. However, risk models developed on one dataset may not generalize across diverse subpopulations of patients in different datasets and may have unexpected performance. It is challenging for clinical researchers to inspect risk models across different subgroups without any tools. Therefore, we developed an interactive visualization system called RMExplorer (Risk Model Explorer) to enable interactive risk model assessment. Specifically, the system allows users to define subgroups of patients by selecting clinical, demographic, or other characteristics, to explore the performance and fairness of risk models on the subgroups, and to understand the feature contributions to risk scores. To demonstrate the usefulness of the tool, we conduct a case study, where we use RMExplorer to explore three atrial fibrillation risk models by applying them to the UK Biobank dataset of 445,329 individuals. RMExplorer can help researchers to evaluate the performance and biases of risk models on subpopulations of interest in their data.
The Impact of Model Transformation Language Features on Quality Properties of MTLs: A Study Protocol ; Background: Dedicated model transformation languages are claimed to provide many benefits over the use of general-purpose languages for developing model transformations. However, the actual advantages and disadvantages associated with the use of model transformation languages are poorly understood empirically. There is little knowledge about which advantages and disadvantages hold in which cases and where they originate from. In a prior interview study, we elicited expert opinions on what advantages result from what factors surrounding model transformation languages, as well as a number of moderating factors that moderate the influence. Objective: We aim to quantitatively assess the interview results to confirm or reject the influences and moderation effects posed by different factors and to gain insights into how valuable different factors are to the discussion. Method: We gather data on the factors and quality attributes using an online survey. To analyse the data and examine the hypothesised influences and moderations, we use universal structure modelling based on a structural equation model. Universal structure modelling will produce significance values and path coefficients for each hypothesised and modelled interdependence between factors and quality attributes, which can be used to confirm or reject correlation and to weigh the strength of the influence present. Limitations: Due to the complexity and abstractness of the concepts under investigation, a measurement via reflective or formative indicators is not possible. Instead, participants are queried about their assessment of concepts through single-item questions. We further assume that positive and negative effects of a feature are more prominent if the feature is used more frequently.
Context-Aware Query Rewriting for Improving Users' Search Experience on E-commerce Websites ; E-commerce queries are often short and ambiguous. Consequently, query understanding often uses query rewriting to disambiguate user-input queries. While using e-commerce search tools, users tend to enter multiple searches, which we call context, before purchasing. These history searches contain contextual insights about users' true shopping intents. Therefore, modeling such contextual information is critical to a better query rewriting model. However, existing query rewriting models ignore users' history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, which takes the search context into account. Specifically, our model builds a session graph using the history search queries and their contained words. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network. The session representations are then decoded to generate rewritten queries. Empirically, we demonstrate the superiority of our method over state-of-the-art approaches under various metrics. On in-house data from an online shopping platform, by introducing contextual information, our model achieves an 11.6% improvement under the MRR (Mean Reciprocal Rank) metric and a 20.1% improvement under the HIT@16 metric (a hit rate metric), in comparison with the best baseline method (a Transformer-based model).
Relaxed Attention for Transformer Models ; The powerful modeling capabilities of all-attention-based transformer architectures often cause overfitting and, for natural language processing tasks, lead to an implicitly learned internal language model in the autoregressive transformer decoder, complicating the integration of external language models. In this paper, we explore relaxed attention, a simple and easy-to-implement smoothing of the attention weights, yielding a two-fold improvement to the general transformer architecture: First, relaxed attention provides regularization when applied to the self-attention layers in the encoder. Second, we show that it naturally supports the integration of an external language model, as it suppresses the implicitly learned internal language model by relaxing the cross-attention in the decoder. We demonstrate the benefit of relaxed attention across several tasks with clear improvement in combination with recent benchmark approaches. Specifically, we exceed the former state-of-the-art performance of 26.90% word error rate on the largest public lip-reading LRS3 benchmark with a word error rate of 26.31%, and we achieve a top-performing BLEU score of 37.67 on the IWSLT14 (DE to EN) machine translation task without external language models and with virtually no additional model parameters. Code and models will be made publicly available.
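As a rough illustration of the smoothing idea, the sketch below blends the softmax attention map with a uniform distribution. It is an assumption-laden reading of the abstract rather than a reference implementation; where the smoothing is applied (encoder self-attention vs. decoder cross-attention) and the value of the coefficient gamma are choices made here for demonstration.

```python
# Sketch: attention weights relaxed toward a uniform distribution.
import torch

def relaxed_attention(q, k, v, gamma: float = 0.1):
    """q, k, v: (batch, heads, seq, dim); gamma: smoothing coefficient in [0, 1]."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    attn = scores.softmax(dim=-1)
    uniform = torch.full_like(attn, 1.0 / attn.shape[-1])
    attn = (1.0 - gamma) * attn + gamma * uniform   # blend with uniform weights
    return attn @ v

q = k = v = torch.randn(2, 4, 10, 16)
print(relaxed_attention(q, k, v).shape)  # torch.Size([2, 4, 10, 16])
```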
The Fitness-Corrected Block Model, or how to create maximum-entropy data-driven spatial social networks ; Models of networks play a major role in explaining and reproducing empirically observed patterns. Suitable models can be used to randomize an observed network while preserving some of its features, or to generate synthetic graphs whose properties may be tuned upon the characteristics of a given population. In the present paper, we introduce the Fitness-Corrected Block Model, an adjustable-density variation of the well-known Degree-Corrected Block Model, and we show that the proposed construction yields a maximum-entropy model. When the network is sparse, we derive an analytical expression for the degree distribution of the model that depends on just the constraints and the chosen fitness distribution. Our model is perfectly suited to define maximum-entropy data-driven spatial social networks, where each block identifies vertices having similar position (e.g., residence) and age, and where the expected block-to-block adjacency matrix can be inferred from the available data. In this case, the sparse-regime approximation coincides with a phenomenological model where the probability of a link binding two individuals is directly proportional to their sociability and to the typical cohesion of their age groups, whereas it decays as an inverse power of their geographic distance. We support our analytical findings through simulations of a stylized urban area.
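A toy sketch of the phenomenological, sparse-regime reading described above: link probability proportional to the two individuals' fitness (sociability) and to their age-group cohesion, decaying as an inverse power of distance. All parameter values, the cohesion matrix, and the overall scale are invented for illustration; this is not the paper's calibration or its exact maximum-entropy construction.

```python
# Toy generator for a spatial social network with fitness, block cohesion, and distance decay.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 300, 2.0
pos = rng.uniform(0, 10, size=(n, 2))            # geographic positions
age_group = rng.integers(0, 3, size=n)           # three age blocks
fitness = rng.lognormal(mean=0.0, sigma=0.5, size=n)
cohesion = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.6],
                     [0.3, 0.6, 1.0]])           # assumed block-to-block affinities
scale = 5e-3                                     # controls overall density

adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        dist = np.linalg.norm(pos[i] - pos[j]) + 1e-6
        p = scale * fitness[i] * fitness[j] * cohesion[age_group[i], age_group[j]] / dist**beta
        if rng.random() < min(p, 1.0):
            adj[i, j] = adj[j, i] = 1

print("edges:", adj.sum() // 2, "mean degree:", adj.sum() / n)
```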
T3VIP: Transformation-based 3D Video Prediction ; For autonomous skill acquisition, robots have to learn about the physical rules governing the 3D world dynamics from their own past experience to predict and reason about plausible future outcomes. To this end, we propose a transformation-based 3D video prediction (T3VIP) approach that explicitly models the 3D motion by decomposing a scene into its object parts and predicting their corresponding rigid transformations. Our model is fully unsupervised, captures the stochastic nature of the real world, and the observational cues in image and point cloud domains constitute its learning signals. To fully leverage all the 2D and 3D observational signals, we equip our model with automatic hyperparameter optimization (HPO) to interpret the best way of learning from them. To the best of our knowledge, our model is the first generative model that provides an RGB-D video prediction of the future for a static camera. Our extensive evaluation with simulated and real-world datasets demonstrates that our formulation leads to interpretable 3D models that predict future depth videos while achieving on-par performance with 2D models on RGB video prediction. Moreover, we demonstrate that our model outperforms 2D baselines on visuomotor control. Videos, code, dataset, and pretrained models are available at http://t3vip.cs.uni-freiburg.de.
How unique are pulsar wind nebulae models? Implementation of a multi-parameter, automatic fitting for time-dependent spectra ; Due to the computational cost of calculating a great number of variations of the parameters, detailed radiative models of pulsar wind nebulae (PWNe) do not usually contain fitting algorithms. As a consequence, most of the models in the literature are, in fact, qualitative fits based on visual inspection. This is particularly true when complex, time-dependent models are considered. Motivated by improvements in the computational efficiency of current PWN models obtained in the last years, we here explore the inclusion of automatic fitting algorithms into a fully time-dependent model. Incorporating an efficient fitting tool based on the Nelder-Mead algorithm, we blindly find fitting solutions for the Crab nebula and 3C 58 with a time-dependent radiation model that computes the spectral and dynamical evolution of young and middle-aged PWNe. This inclusion allows us, in addition to more faithfully determining the quality of the fit, to tackle whether there exists degeneracy in the selected PWNe models. We find, both for Crab and 3C 58, that the fits are well determined, and that no other significantly different set of model parameters is able to cope with the experimental data equally well. The code is also able to consider the system's age as a free parameter, recursively determining all other needed magnitudes that depend on age accordingly. We use this feature to consider whether a detailed multi-frequency spectrum can constrain the nebula age, finding that in fact this is the case for the two PWNe studied.
Feature-based model selection for object detection from point cloud data ; Smart monitoring using three-dimensional (3D) image sensors has been attracting attention in the context of smart cities. In smart monitoring, object detection from point cloud data acquired by 3D image sensors is implemented for detecting moving objects such as vehicles and pedestrians to ensure safety on the road. However, the features of point cloud data are diversified due to the characteristics of the light detection and ranging (LIDAR) units used as 3D image sensors or the installation position of the 3D image sensors. Although a variety of deep learning (DL) models for object detection from point cloud data have been studied to date, no research has considered how to use multiple DL models in accordance with the features of the point cloud data. In this work, we propose a feature-based model selection framework that creates various DL models by using multiple DL methods and by utilizing training data with pseudo incompleteness generated by two artificial techniques: sampling and noise adding. It selects the most suitable DL model for the object detection task in accordance with the features of the point cloud data acquired in the real environment. To demonstrate the effectiveness of the proposed framework, we compare the performance of multiple DL models using benchmark datasets created from the KITTI dataset and present example results of object detection obtained through a real outdoor experiment. Depending on the situation, the detection accuracy varies by up to 32% between DL models, which confirms the importance of selecting an appropriate DL model according to the situation.
End-to-End Lyrics Recognition with Self-supervised Learning ; Lyrics recognition is an important task in music processing. Despite traditional algorithms such as the hybrid HMM-TDNN model achieving good performance, studies on applying end-to-end models and self-supervised learning (SSL) are limited. In this paper, we first establish an end-to-end baseline for lyrics recognition and then explore the performance of SSL models on the lyrics recognition task. We evaluate a variety of upstream SSL models with different training methods: masked reconstruction, masked prediction, autoregressive reconstruction, and contrastive learning. Our end-to-end self-supervised models, evaluated on the DAMP music dataset, outperform the previous state-of-the-art (SOTA) system by 5.23% for the dev set and 2.4% for the test set, even without a language model trained on a large corpus. Moreover, we investigate the effect of background music on the performance of self-supervised learning models and conclude that the SSL models cannot extract features efficiently in the presence of background music. Finally, we study the out-of-domain generalization ability of the SSL features, considering that those models were not trained on music datasets.
OSDP: Optimal Sharded Data Parallel for Distributed Deep Learning ; Large-scale deep learning models contribute to significant performance improvements on a variety of downstream tasks. Current data and model parallelism approaches utilize model replication and partition techniques to support the distributed training of ultra-large models. However, directly deploying these systems often leads to sub-optimal training efficiency due to the complex model architectures and the strict device memory constraints. In this paper, we propose Optimal Sharded Data Parallel (OSDP), an automated parallel training system that combines the advantages of both data and model parallelism. Given the model description and the device information, OSDP makes trade-offs between the memory consumption and the hardware utilization, and thus automatically generates the distributed computation graph and maximizes the overall system throughput. In addition, OSDP introduces operator splitting to further alleviate peak memory footprints during training with negligible overheads, which enables the trainability of larger models as well as higher throughput. Extensive experimental results of OSDP on multiple different kinds of large-scale models demonstrate that the proposed strategy outperforms the state-of-the-art in multiple regards. Our code is available at https://github.com/YouheJiang/OptimalShardedDataParallel.
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification ; Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from machine learning models have emerged, but they have shortcomings in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing Deep Learning models with explanatory capabilities so that they can themselves provide an answer to why a particular prediction was made. First, we address the problem of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the Greybox XAI, a framework that composes a DNN and a transparent model thanks to the use of a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show how this new architecture is accurate and explainable in several datasets.
Early stages of polycrystalline diamond deposition: Laser reflectance at substrates with growing nanodiamonds ; The chemical vapor deposition of polycrystalline diamond (PCD) films is typically done on substrates seeded with diamond nanoparticles. Specular laser reflectance and a continuous film model have been used to monitor the thickness of these films during their deposition. However, most seeds are isolated during the early stages of the deposition, which questions the utility of applying such a continuous film model for monitoring deposition before film formation. In this work, we present a model based on the Rayleigh theory of scattering for laser reflectance at substrates with growing nanodiamonds to capture the early stages of PCD deposition. The reflectance behavior predicted by our model differs from that of a continuous film, which is well described by the continuous film model. This difference enlarges as the seed density used in our model decreases. We verify this trend experimentally by depositing diamond under identical conditions on substrates with various seed densities. A relation derived from our model is used to fit reflectance data, from which seed densities are obtained that are proportional to those found with electron microscopy. We also show that relying on the continuous film model for describing the early stages of deposition can result in falsely deducing the existence of incubation, and that the continuous film model can be used safely beyond the early stages of deposition. Based on these findings, we delineate a robust method for obtaining growth rates and incubation periods from reflectance measurements. This work may also advance the general understanding of nanoparticle growth and formation.
Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints ; Federated learning is a newly emerging distributed learning framework that facilitates the collaborative training of a shared global model among distributed participants with their privacy preserved. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model and even leave a backdoor. While this problem has received significant attention recently, current defensive schemes heavily rely on various assumptions, such as a fixed Byzantine model, availability of participants' local data, minority attackers, IID data distribution, etc. To relax those constraints, this paper presents RobustFL, the first prediction-based Byzantine-robust federated learning scheme where none of the assumptions is leveraged. The core idea of RobustFL is exploiting the historical global model to construct an estimator, based on which the local models will be filtered through similarity detection. We then cluster local models to adaptively adjust the acceptable differences between the local models and the estimator such that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of majority attackers, and (iii) generalization to variable Byzantine models.
Maximum Entropy Approach for the Prediction of Urban Mobility Patterns ; The science of cities is a relatively new and interdisciplinary topic. It borrows techniques from agent-based modeling, stochastic processes, and partial differential equations. However, how cities rise and fall, how they evolve, and the mechanisms responsible for these phenomena are still open questions. Scientists have only recently started to develop forecasting tools, despite their importance in urban planning, transportation planning, and epidemic spreading modeling. Here, we build a fully interpretable statistical model that, incorporating only the minimum number of constraints, can predict different phenomena arising in the city. Using data on the movements of car-sharing vehicles in different Italian cities, we infer a model using the Maximum Entropy (MaxEnt) principle. With it, we describe the activity in different city zones and apply it to activity forecasting and anomaly detection (e.g., strikes and bad weather conditions). We compare our method with different models explicitly made for forecasting: SARIMA models and deep learning models. We find that MaxEnt models are highly predictive, outperforming SARIMAs and having similar results to a neural network. These results show how relevant statistical inference can be in building a robust and general model describing urban systems phenomena. This article identifies the significant observables for processes happening in the city, with the perspective of a deeper understanding of the fundamental forces driving its dynamics.
Robust Models are less Over-Confident ; Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less over-confident with their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data and project website: https://github.com/GeJulia/robustness_confidences_evaluation
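One simple way to quantify the kind of over-confidence discussed above is the mean maximum softmax probability on clean data, compared between a standard and an adversarially trained model. The sketch below is schematic and assumes the two checkpoints and a data loader are available; it is not the paper's evaluation protocol.

```python
# Sketch: compare mean maximum softmax confidence of two classifiers on clean data.
import torch

@torch.no_grad()
def mean_max_confidence(model, loader, device="cpu"):
    """Average of the winning class probability over a dataset."""
    model.eval().to(device)
    confs = []
    for x, _ in loader:
        probs = model(x.to(device)).softmax(dim=-1)
        confs.append(probs.max(dim=-1).values)
    return torch.cat(confs).mean().item()

# Usage (schematic; standard_model, robust_model, and test_loader are assumptions):
# print(mean_max_confidence(standard_model, test_loader),
#       mean_max_confidence(robust_model, test_loader))
# A lower value for the AT model on clean data would reproduce the reported effect.
```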
Model-robust and efficient covariate adjustment for cluster-randomized experiments ; Cluster-randomized experiments are increasingly used to evaluate interventions in routine practice conditions, and researchers often adopt model-based methods with covariate adjustment in the statistical analyses. However, the validity of model-based covariate adjustment is unclear when the working models are misspecified, leading to ambiguity of estimands and risk of bias. In this article, we first adapt two conventional model-based methods, generalized estimating equations and linear mixed models, with weighted g-computation to achieve robust inference for cluster-average and individual-average treatment effects. To further overcome the limitations of model-based covariate adjustment methods, we propose an efficient estimator for each estimand that allows for flexible covariate adjustment and additionally addresses cluster size variation dependent on treatment assignment and other cluster characteristics. Such cluster size variations often occur post-randomization and, if ignored, can lead to bias of model-based estimators. For our proposed efficient covariate-adjusted estimator, we prove that when the nuisance functions are consistently estimated by machine learning algorithms, the estimator is consistent, asymptotically normal, and efficient. When the nuisance functions are estimated via parametric working models, the estimator is triply robust. Simulation studies and analyses of three real-world cluster-randomized experiments demonstrate that the proposed methods are superior to existing alternatives.
Spatial and Statistical Modeling of Multi-Panel Millimeter Wave Self-Interference ; Characterizing self-interference is essential to the design and evaluation of in-band full-duplex communication systems. Until now, little has been understood about this coupling in full-duplex systems operating at millimeter wave (mmWave) frequencies, and it has been shown that the highly idealized models proposed for such systems do not align with practice. This work presents the first spatial and statistical model of mmWave self-interference backed by measurements, enabling engineers to draw realizations that exhibit the large-scale and small-scale spatial characteristics observed in our nearly 6.5 million measurements taken at 28 GHz. Core to our model is its use of system and model parameters having real-world meaning, which facilitates its extension to systems beyond our own phased array platform through proper parameterization. We demonstrate this by collecting nearly 13 million additional measurements to show that our model can generalize to two other system configurations. We assess our model by comparing it against actual measurements to confirm its ability to align spatially and in distribution with real-world self-interference. In addition, using both measurements and our model of self-interference, we evaluate an existing beamforming-based full-duplex mmWave solution to illustrate that our model can be reliably used to design new solutions and validate the performance improvements they may offer.
How do we get there? Evaluating transformer neural networks as cognitive models for English past tense inflection ; There is an ongoing debate on whether neural networks can grasp the quasi-regularities in languages like humans do. In a typical quasi-regularity task, English past tense inflection, the neural network model has long been criticized for learning only to generalize the most frequent pattern, but not the regular pattern, and thus for being unable to learn the abstract categories of regular and irregular verbs, making it dissimilar to human performance. In this work, we train a set of transformer models with different settings to examine their behavior on this task. The models achieved high accuracy on unseen regular verbs and some accuracy on unseen irregular verbs. The models' performance on the regulars is heavily affected by type frequency and ratio but not token frequency and ratio, and vice versa for the irregulars. The different behaviors on the regulars and irregulars suggest that the models have some degree of symbolic learning of the regularity of the verbs. In addition, the models are weakly correlated with human behavior on nonce verbs. Although the transformer model exhibits some level of learning of the abstract category of verb regularity, its performance does not fit human data well, suggesting that it might not be a good cognitive model.
How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers ; As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale. However, privacy concerns arise due to the potential leakage of sensitive information from the training data. Recent research has revealed that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the efficacy of these attacks varies from model to model. In this paper, we answer a fundamental question: does model architecture affect model privacy? By investigating representative model architectures from CNNs to Transformers, we demonstrate that Transformers generally exhibit higher vulnerability to privacy attacks than CNNs. Additionally, we identify the micro design of activation layers, stem layers, and LN layers as major factors contributing to the resilience of CNNs against privacy attacks, while the presence of attention modules is another main factor that exacerbates the privacy vulnerability of Transformers. Our discovery reveals valuable insights for deep learning models to defend against privacy attacks and inspires the research community to develop privacy-friendly model architectures.
On Cross-Domain Pre-Trained Language Models for Clinical Text Mining: How Do They Perform on Data-Constrained Fine-Tuning? ; Pretrained language models (PLMs) have been deployed in many natural language processing (NLP) tasks and in various domains. Language model pretraining from general or mixed-domain rich data plus fine-tuning using small amounts of available data in a low-resource domain has demonstrated beneficial results. In this work, we question this statement and verify whether BERT-based PLMs from the biomedical domain can perform well in clinical text mining tasks via fine-tuning. We test the state-of-the-art models, i.e. Bioformer, which is pretrained on a large amount of biomedical data from the PubMed corpus. We use a historical n2c2 clinical NLP challenge dataset for fine-tuning its task-adapted version (BioformerApt), and show that their performances are actually very low. We also present our own end-to-end model, TransformerCRF, which is developed using Transformer and conditional random fields (CRFs) as encoder and decoder. We further create a new variation model by adding a CRF layer on top of the PLM Bioformer (BioformerCRF). We investigate the performance of TransformerCRF on clinical text mining tasks by training from scratch using a limited amount of data, as well as the model BioformerCRF. Experimental evaluation shows that, in a constrained setting, all tested models are far from ideal regarding extreme low-frequency special token recognition, even though they can achieve relatively high accuracy on overall text tagging. Our models, including source code, will be hosted at https://github.com/poethan/TransformerCRF.
Post-Selection Confidence Bounds for Prediction Performance ; In machine learning, the selection of a promising model from a potentially large number of competing models and the assessment of its generalization performance are critical tasks that need careful consideration. Typically, model selection and evaluation are strictly separated endeavors, splitting the sample at hand into a training, validation, and evaluation set, and only computing a single confidence interval for the prediction performance of the final selected model. We, however, propose an algorithm to compute valid lower confidence bounds for multiple models that have been selected based on their prediction performances in the evaluation set, by interpreting the selection problem as a simultaneous inference problem. We use bootstrap tilting and a maxT-type multiplicity correction. The approach is universally applicable for any combination of prediction models, any model selection strategy, and any prediction performance measure that accepts weights. We conducted various simulation experiments which show that our proposed approach yields lower confidence bounds that are at least comparably good as bounds from standard approaches, and that reliably reach the nominal coverage probability. In addition, especially when the sample size is small, our proposed approach yields better performing prediction models than the default selection of only one model for evaluation does.
Private Semi-supervised Knowledge Transfer for Deep Learning from Noisy Labels ; Deep learning models trained on large-scale data have achieved encouraging performance in many real-world tasks. Meanwhile, publishing those models trained on sensitive datasets, such as medical records, could pose serious privacy concerns. To counter these issues, one of the current state-of-the-art approaches is the Private Aggregation of Teacher Ensembles, or PATE, which achieved promising results in preserving the utility of the model while providing a strong privacy guarantee. PATE combines an ensemble of teacher models trained on sensitive data and transfers the knowledge to a student model through the noisy aggregation of teachers' votes for labeling unlabeled public data, on which the student model will be trained. However, the knowledge or voted labels learned by the student are noisy due to private aggregation. Learning directly from noisy labels can significantly impact the accuracy of the student model. In this paper, we propose the PATE++ mechanism, which combines current advanced noisy label training mechanisms with the original PATE framework to enhance its accuracy. A novel structure of Generative Adversarial Nets (GANs) is developed in order to integrate them effectively. In addition, we develop a novel noisy label detection mechanism for semi-supervised model training to further improve student model performance when training with noisy labels. We evaluate our method on Fashion-MNIST and SVHN to show the improvements over the original PATE on all measures.
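To make the source of the label noise concrete, the sketch below shows a PATE-style noisy aggregation of teacher votes (Laplace noisy-max). It illustrates where the noisy student labels come from; it is not the paper's full mechanism (no GAN component or noisy-label detection here), and the noise scale is illustrative.

```python
# Sketch: Laplace noisy-max aggregation of teacher votes for one public sample.
import numpy as np

def noisy_aggregate(teacher_preds: np.ndarray, num_classes: int, gamma: float = 0.1,
                    rng=np.random.default_rng(0)):
    """teacher_preds: predicted labels of each teacher for a single unlabeled sample."""
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)  # privacy noise
    return int(votes.argmax())          # noisy label handed to the student

teacher_preds = np.array([3, 3, 3, 5, 3, 1, 3, 3, 5, 3])  # toy votes from 10 teachers
print(noisy_aggregate(teacher_preds, num_classes=10))
```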
Treatment classification of posterior capsular opacification (PCO) using automated ground truths ; Determination of the treatment need of posterior capsular opacification (PCO), one of the most common complications of cataract surgery, is a difficult process due to its local unavailability and the fact that treatment is provided only after PCO occurs in the central visual axis. In this paper we propose a deep learning (DL)-based method to first segment PCO images and then classify the images into "treatment required" and "not yet required" cases in order to reduce frequent hospital visits. To train the model, we prepare a training image set with ground truths (GT) obtained from two strategies: (i) manual and (ii) automated. So, we have two models: (i) Model 1, trained with the image set containing manual GT, and (ii) Model 2, trained with the image set containing automated GT. Both models, when evaluated on the validation image set, gave a Dice coefficient value greater than 0.8 and an intersection-over-union (IoU) score greater than 0.67 in our experiments. Comparison between gold standard GT and the segmented results from our models gave a Dice coefficient value greater than 0.7 and an IoU score greater than 0.6 for both models, showing that automated ground truths can also result in the generation of an efficient model. Comparison between our classification results and the clinical classification shows a 0.98 F2-score for the outputs from both models.
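For reference, the two segmentation metrics reported above can be computed for binary masks as in the sketch below; thresholding and averaging conventions may differ from the paper's exact evaluation script.

```python
# Sketch: Dice coefficient and IoU for a predicted binary mask vs. ground truth.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return float(dice), float(iou)

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1   # toy prediction
gt = np.zeros((64, 64)); gt[15:45, 15:45] = 1       # toy ground truth
print(dice_and_iou(pred, gt))
```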
Free field realization of the BMS Ising model ; In this work, we study the inhomogeneous BMS free fermion theory, and show that it gives a free field realization of the BMS Ising model. We find that besides the BMS symmetry there exists an anisotropic scaling symmetry in the BMS free fermion theory. As a result, the symmetry of the theory gets enhanced to an infinite-dimensional symmetry generated by a new type of BMS-Kac-Moody algebra, different from the one found in the BMS free scalar model. Besides the different coupling of the u(1) Kac-Moody current to the BMS algebra, the Kac-Moody level is now non-vanishing, such that the corresponding modules are further enlarged to BMS-Kac-Moody staggered modules. We show that there exists an underlying W(2,2,1) structure in the operator product expansion of the currents, and the BMS-Kac-Moody staggered modules can be viewed as highest-weight modules of this W-algebra. Moreover, we obtain the BMS Ising model by a fermion-boson duality. This BMS Ising model is not a minimal model with respect to BMS3, since the minimal model construction based on the BMS Kac determinant always leads to chiral Virasoro minimal models. Instead, the underlying algebra of the BMS Ising model is the W(2,2,1) algebra, which can be understood as a quantum conformal BMS3 algebra.
HigeNet: A Highly Efficient Modeling for Long Sequence Time Series Prediction in AIOps ; Modern IT system operation demands the integration of system software and hardware metrics. As a result, it generates a massive amount of data, which can potentially be used to make data-driven operational decisions. In its basic form, the decision model needs to monitor a large set of machine data, such as CPU utilization, allocated memory, disk and network latency, and predict the system metrics to prevent performance degradation. Nevertheless, building an effective prediction model in this scenario is rather challenging, as the model has to accurately capture the long-range coupling dependency in the multivariate time series (MTS). Moreover, this model needs to have low computational complexity and be able to scale efficiently to the dimension of the data available. In this paper, we propose a highly efficient model named HigeNet to predict long-sequence time series. We have deployed HigeNet in production on the Dmatrix platform. We also provide offline evaluations on several publicly available datasets, as well as one online dataset, to demonstrate the model's efficacy. The extensive experiments show that the training time, resource usage and accuracy of the model are significantly better than those of five state-of-the-art competing models.
Data-Centric Debugging: mitigating model failures via targeted data collection ; Deep neural networks can be unreliable in the real world when the training set does not adequately cover all the settings where they are deployed. Focusing on image classification, we consider the setting where we have an error distribution E representing a deployment scenario where the model fails. We have access to a small set of samples E_sample from E, and it can be expensive to obtain additional samples. In the traditional model development framework, mitigating failures of the model on E can be challenging and is often done in an ad hoc manner. In this paper, we propose a general methodology for model debugging that can systematically improve model performance on E while maintaining its performance on the original test set. Our key assumption is that we have access to a large pool of weakly (noisily) labeled data F. However, naively adding F to the training would hurt model performance due to the large extent of label noise. Our Data-Centric Debugging (DCD) framework carefully creates a debug-train set by selecting images from F that are perceptually similar to the images in E_sample. To do this, we use the l2 distance in the feature space (penultimate layer activations) of various models, including ResNet, Robust ResNet and DINO, where we observe that DINO ViTs are significantly better at discovering similar images than ResNets. Compared to LPIPS, we find that our method reduces compute and storage requirements by 99.58%. Compared to the baselines that maintain model performance on the test set, we achieve significantly improved results (by 9.45%) on the debug-heldout sets.
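A minimal sketch of the selection step described above: embed images with a frozen backbone and keep the pool images whose penultimate-layer features are closest in l2 distance to the failure examples. The backbone choice (a ResNet-50 here) and the number of selected images are assumptions; the paper reports DINO ViT features working notably better than ResNet features.

```python
# Sketch: select debug-train images by l2 feature distance to failure samples.
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose penultimate-layer activations
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    return backbone(images)                        # (n, 2048) feature vectors

def select_debug_train(pool: torch.Tensor, failures: torch.Tensor, k: int = 100):
    """Return indices of the k pool images nearest to any failure example."""
    d = torch.cdist(embed(pool), embed(failures))  # pairwise l2 distances
    nearest = d.min(dim=1).values
    return nearest.argsort()[:k]

# Usage (schematic): pool_images and failure_images are preprocessed batches
# of shape (n, 3, 224, 224); idx = select_debug_train(pool_images, failure_images)
```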
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models ; This paper evaluates whether training a decision tree based on concepts extracted from a concept-based explainer can increase interpretability for Convolutional Neural Network (CNN) models and boost the fidelity and performance of the used explainer. CNNs for computer vision have shown exceptional performance in critical industries. However, their complexity and lack of interpretability are a significant barrier when deploying CNNs. Recent studies to explain computer vision models have shifted from extracting low-level features (pixel-based explanations) to mid- or high-level features (concept-based explanations). The current research direction tends to use extracted features in developing approximation algorithms such as linear or decision tree models to interpret an original model. In this work, we modify one of the state-of-the-art concept-based explanations and propose an alternative framework named TreeICE. We design a systematic evaluation based on the requirements of fidelity (approximate models to the original model's labels), performance (approximate models to ground-truth labels), and interpretability (meaningfulness of approximate models to humans). We conduct a computational evaluation for fidelity and performance, and human subject experiments for interpretability. We find that TreeICE outperforms the baseline in interpretability and generates more human-readable explanations in the form of a semantic tree structure. This work highlights how important it is to have more understandable explanations when interpretability is crucial.
Modeling Multivariate Biosignals With Graph Neural Networks and Structured State Space Models ; Multivariate biosignals are prevalent in many medical domains, such as electroencephalography, polysomnography, and electrocardiography. Modeling spatiotemporal dependencies in multivariate biosignals is challenging due to (1) long-range temporal dependencies and (2) complex spatial correlations between the electrodes. To address these challenges, we propose representing multivariate biosignals as time-dependent graphs and introduce GraphS4mer, a general graph neural network (GNN) architecture that improves performance on biosignal classification tasks by modeling spatiotemporal dependencies in biosignals. Specifically, (1) we leverage the Structured State Space architecture, a state-of-the-art deep sequence model, to capture long-range temporal dependencies in biosignals and (2) we propose a graph structure learning layer in GraphS4mer to learn dynamically evolving graph structures in the data. We evaluate our proposed model on three distinct biosignal classification tasks and show that GraphS4mer consistently improves over existing models, including (1) seizure detection from electroencephalographic signals, outperforming a previous GNN with self-supervised pretraining by 3.1 points in AUROC; (2) sleep staging from polysomnographic signals, a 4.1-point improvement in macro-F1 score compared to existing sleep staging models; and (3) 12-lead electrocardiogram classification, outperforming previous state-of-the-art models by 2.7 points in macro-F1 score.
Hybrid Learning of Time-Series Inverse Dynamics Models for Locally Isotropic Robot Motion ; Applications of force control and motion planning often rely on an inverse dynamics model to represent the high-dimensional dynamic behavior of robots during motion. The widespread occurrence of low-velocity, small-scale, locally isotropic motion (LIMO) typically complicates the identification of appropriate models due to the exaggeration of dynamic effects and sensory perturbation caused by complex friction and phenomena of hysteresis, e.g., pertaining to joint elasticity. We propose a hybrid model learning base architecture combining a rigid body dynamics model identified by parametric regression and time-series neural network architectures based on multilayer perceptron, LSTM, and Transformer topologies. Further, we introduce a novel joint-wise rotational history encoding, reinforcing temporal information to effectively model dynamic hysteresis. The models are evaluated on a KUKA iiwa 14 during algorithmically generated locally isotropic movements. Together with the rotational encoding, the proposed architectures outperform state-of-the-art baselines by a magnitude of 10^3, yielding an RMSE of 0.14 Nm. Leveraging the hybrid structure and time-series encoding capabilities, our approach allows for accurate torque estimation, indicating its applicability in critically force-sensitive applications during motion sequences exceeding the capacity of conventional inverse dynamics models, while retaining trainability in the face of scarce data and explainability due to the employed physics model prior.
A Generalized Analytical Model For Thermal And Bulk Comptonization In Accretion-Powered X-Ray Pulsars ; We develop a new theoretical model describing the formation of the radiation spectrum in accretion-powered X-ray pulsars as a result of bulk and thermal Comptonization of photons in the accretion column. The new model extends the previous model developed by the authors in four ways: (1) we utilize a conical rather than cylindrical geometry; (2) the radiation components emitted from the column wall and the column top are computed separately; (3) the model allows for a nonzero impact velocity at the stellar surface; and (4) the velocity profile of the gas merges with Newtonian free-fall far from the star. We show that these extensions allow the new model to simulate sources over a wide range of accretion rates. The model is based on a rigorous mathematical approach in which we obtain an exact series solution for the Green's function describing the reprocessing of monochromatic seed photons. Emergent spectra are then computed by convolving the Green's function with bremsstrahlung, cyclotron, and blackbody photon sources. The range of the new model is demonstrated via applications to the high-luminosity source Her X-1 and the low-luminosity source X Per. The new model suggests that the observed increase in spectral hardness associated with increasing luminosity in Her X-1 may be due to a decrease in the surface impact velocity, which increases the PdV work done on the radiation field by the gas.
Prospects of Probing Dark Matter Condensates with Gravitational Waves ; The Lambda-Cold Dark Matter model explains cosmological observations most accurately to date. However, it is still plagued with various shortcomings at galactic scales. Models of dark matter such as superfluid dark matter, Bose-Einstein Condensate (BEC) dark matter, and fuzzy dark matter have been proposed to overcome some of these drawbacks. In this work, we probe these models using the current constraint on the gravitational wave (GW) propagation speed coming from the binary neutron star GW170817 detection by the LIGO-Virgo detector network, and use it to study the allowed parameter space for these three models for Advanced LIGO-Virgo, LISA, IPTA, and SKA detection frequencies. The speed of GWs has been shown to depend upon the refractive index of the medium, which in turn depends on the dark matter model parameters through the density profile of the galactic halo. We constrain the parameter space for these models using the bounds coming from the GW speed measurement and the Milky Way radius bound. Our findings suggest that with Advanced LIGO-Virgo detector sensitivity, the three models considered here remain unconstrained. A meaningful constraint can only be obtained for detection frequencies ≤ 10^-9 Hz, which falls in the detection range of radio telescopes such as IPTA and SKA. Considering this best possible case, we find that out of the three condensate models, the fuzzy dark matter model is the most feasible scenario to be falsified/validated in the near future.
SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation ; As acquiring manual labels on data could be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a rich-label dataset to the unlabeled target dataset, is gaining increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, an important issue of model robustness is neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable under the UDA scenario, since they train models on adversarial examples that are generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts with pre-training a source model by applying a UDA baseline on source labeled data and target unlabeled data with a developed random masked augmentation (RMA), and then alternates between adversarial target-model training on pseudo-labeled target data and fine-tuning the source model by a meta step. While self-training allows the direct incorporation of AT in UDA, the meta step in SRoUDA further helps in mitigating error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, where it achieves significant model robustness improvement without harming clean accuracy. Code is available at https://github.com/Vision.
Relative Error-based Time-limited H2 Model Order Reduction via Oblique Projection ; In time-limited model order reduction, a reduced-order approximation of the original high-order model is obtained that accurately approximates the original model within the desired limited time interval. Accuracy outside that time interval is not that important. The error incurred when a reduced-order model is used as a surrogate for the original model can be quantified in absolute or relative terms to assess the performance of the model reduction algorithm. The relative error is generally more meaningful than an absolute error, because if the original and reduced systems' responses are of small magnitude, the absolute error is small in magnitude as well. However, this does not necessarily mean that the reduced model is accurate. The relative error in such scenarios is useful and meaningful, as it quantifies the percentage error irrespective of the magnitude of the system's response. In this paper, the necessary conditions for a local optimum of the time-limited H2 norm of the relative error system are derived. Inspired by these conditions, an oblique projection algorithm is proposed that ensures a small H2-norm relative error within the desired time interval. Unlike the existing relative error-based model reduction algorithms, the proposed algorithm does not require solutions of large-scale Lyapunov and Riccati equations. The proposed algorithm is compared with time-limited balanced truncation, time-limited balanced stochastic truncation, and the time-limited iterative rational Krylov algorithm. Numerical results confirm the superiority of the proposed algorithm over these existing algorithms.
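For orientation, the quantities involved can be sketched as follows, under one common convention for relative-error model reduction (the paper's exact weighting and derivation may differ).

```latex
% Time-limited H2 norm of an LTI system G(s) = C(sI - A)^{-1}B with impulse
% response h(t) = C e^{At} B, restricted to the interval [0, \tau]:
\|G\|_{\mathcal{H}_{2,[0,\tau]}}^2
  = \int_0^{\tau} \operatorname{trace}\!\big(h(t)\,h(t)^{\top}\big)\,dt
  = \operatorname{trace}\!\big(C\,P_\tau\,C^{\top}\big),
\qquad
P_\tau = \int_0^{\tau} e^{At} B B^{\top} e^{A^{\top} t}\,dt .

% One common relative-error formulation: with a reduced model \hat{G}(s), define
\Delta_{\mathrm{rel}}(s) = \hat{G}(s)^{-1}\big(G(s) - \hat{G}(s)\big),
% and seek a \hat{G} that (locally) minimizes \|\Delta_{\mathrm{rel}}\|_{\mathcal{H}_{2,[0,\tau]}}.
```

The point of the relative weighting is visible here: dividing by the reduced model's response makes the criterion insensitive to the absolute magnitude of the system's output.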
Dark energy and matter interacting scenario can relieve H0 and S8 tensions ; In this work, we consider a new cosmological model named $\tilde{\Lambda}$CDM in which the vacuum energy interacts with matter and radiation, and test this model using the current cosmological observations. We find that this model can significantly relieve the $H_0$ tension, and at the same time it can also slightly reduce the $S_8$ tension, which cannot be easily observed in other cosmological models. Using the CMB+BAO+SN (CBS) data to constrain the model, we obtain the results of $H_0 = 70.6^{+1.4}_{-1.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$ and $S_8 = 0.820 \pm 0.011$, and thus the $H_0$ and $S_8$ tensions are relieved to $1.28\sigma$ and $2.67\sigma$, respectively. However, in this case the $\tilde{\Lambda}$CDM model is not favored by the data, compared with $\Lambda$CDM. We find that when the $H_0$ and $S_8$ data are added into the data combination, the situation is significantly improved. In the CBS+$H_0$ case, we obtain the result of $H_0 = 72.2 \pm 1.2~\mathrm{km~s^{-1}~Mpc^{-1}}$, which relieves the $H_0$ tension to $0.53\sigma$, and in this case the model is favored over $\Lambda$CDM. In the CBS+$H_0$+$S_8$ case, we get a synthetically best situation, $H_0 = 71.9 \pm 1.1~\mathrm{km~s^{-1}~Mpc^{-1}}$ and $S_8 = 0.8071 \pm 0.0099$, in which the $H_0$ and $S_8$ tensions are relieved to $0.75\sigma$ and $2.09\sigma$, respectively. In this case, the model is most favored by the data. Therefore, such a cosmological model can greatly relieve the $H_0$ tension, and at the same time it can also effectively alleviate the $S_8$ tension.
Physics-Informed Neural Networks for Prognostics and Health Management of Lithium-Ion Batteries ; For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there are no general and flexible methods to fuse the information represented by those models. The Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical, semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Lithium Iron Phosphate (LFP)/graphite batteries.
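A minimal PINN/DeepHPM-style sketch of the idea is given below, assuming a single state-of-health variable evolving with cycle number; the network sizes, the form of the learned dynamics, and the fixed loss weighting are illustrative assumptions rather than the paper's setup.

```python
# Illustrative PINN/DeepHPM-style sketch (assumed setup, not the paper's model):
# a surrogate net u(t) predicts state of health vs. normalized cycle number t, and a
# second net N(u) plays the role of the unknown governing dynamics du/dt = N(u).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, width=32, depth=3):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        self.net = nn.Sequential(*layers, nn.Linear(d, d_out))
    def forward(self, x):
        return self.net(x)

u_net = MLP(1, 1)   # surrogate solution network
f_net = MLP(1, 1)   # hidden-physics network (DeepHPM-style learned dynamics)

def pinn_loss(t_data, soh_data, t_colloc, lam=1.0):
    # Data-fitting term on measured capacity / state of health.
    loss_data = ((u_net(t_data) - soh_data) ** 2).mean()
    # Physics residual r = du/dt - N(u) evaluated on collocation points.
    t = t_colloc.clone().requires_grad_(True)
    u = u_net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss_phys = ((du - f_net(u)) ** 2).mean()
    # lam is fixed here; the paper employs uncertainty-based adaptive weighting instead.
    return loss_data + lam * loss_phys
```

Both networks are trained jointly by minimizing `pinn_loss` with a standard optimizer, so the data term anchors the fit while the residual term enforces the discovered dynamics.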
Individual frailty excess hazard models in cancer epidemiology ; Unobserved individual heterogeneity is a common challenge in population cancer survival studies. This heterogeneity is usually associated with the combination of model misspecification and the failure to record truly relevant variables. We investigate the effects of unobserved individual heterogeneity in the context of excess hazard models, one of the main tools in cancer epidemiology. We propose an individual excess hazard frailty model to account for individual heterogeneity. This represents an extension of frailty modelling to the relative survival framework. In order to facilitate the inference on the parameters of the proposed model, we select frailty distributions which produce closed-form expressions of the marginal hazard and survival functions. The resulting model allows for an intuitive interpretation, in which the frailties induce a selection of the healthier individuals among survivors. We model the excess hazard using a flexible parametric model with a general hazard structure which facilitates the inclusion of time-dependent effects. We illustrate the performance of the proposed methodology through a simulation study. We present a real-data example using data from lung cancer patients diagnosed in England, and discuss the impact of not accounting for unobserved heterogeneity on the estimation of net survival. The methodology is implemented in the R package IFNS.
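A compact way to see the modelling idea is the following hedged sketch, written here with a gamma frailty purely for illustration (the paper selects frailty distributions that yield closed-form marginals).

```latex
% Individual hazard = known population (expected) hazard + individual excess hazard,
% with a multiplicative frailty Z acting on the excess component only:
h(t \mid \mathbf{x}, Z) = h_P(t \mid \mathbf{x}) + Z\, h_E(t \mid \mathbf{x}),
\qquad Z \sim \mathrm{Gamma}(\text{mean}=1,\ \text{variance}=\theta).

% Marginalizing over Z gives closed-form net (excess) quantities:
S_E^{\mathrm{marg}}(t \mid \mathbf{x}) = \big(1 + \theta H_E(t \mid \mathbf{x})\big)^{-1/\theta},
\qquad
h_E^{\mathrm{marg}}(t \mid \mathbf{x}) = \frac{h_E(t \mid \mathbf{x})}{1 + \theta H_E(t \mid \mathbf{x})},
% where H_E(t | \mathbf{x}) = \int_0^t h_E(s \mid \mathbf{x})\,ds.
```

As $t$ grows, the marginal excess hazard is pulled below the conditional one, which is the formal expression of the selection of healthier (low-frailty) individuals among survivors described in the abstract.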
Test of Time: Instilling Video-Language Models with a Sense of Time ; Modelling and understanding time remains a challenge in contemporary video understanding models. With language emerging as a key driver towards powerful generalization, it is imperative for foundational video-language models to have a sense of time. In this paper, we consider a specific aspect of temporal understanding: consistency of time order as elicited by before/after relations. We establish that seven existing video-language models struggle to understand even such simple temporal relations. We then question whether it is feasible to equip these foundational models with temporal awareness without retraining them from scratch. Towards this, we propose a temporal adaptation recipe on top of one such model, VideoCLIP, based on post-pretraining on a small amount of video-text data. We conduct a zero-shot evaluation of the adapted models on six datasets for three downstream tasks which require varying degrees of time awareness. We observe encouraging performance gains, especially when the task needs higher time awareness. Our work serves as a first step towards probing and instilling a sense of time in existing video-language models without the need for data- and compute-intensive training from scratch.
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models ; With tools like GitHub Copilot, automatic code suggestion is no longer a dream in software engineering. These tools, based on large language models, are typically trained on massive corpora of code mined from unvetted public sources. As a result, these models are susceptible to data poisoning attacks, where an adversary manipulates the model's training or fine-tuning phases by injecting malicious data. Poisoning attacks could be designed to influence the model's suggestions at run time for chosen contexts, such as inducing the model into suggesting insecure code payloads. To achieve this, prior poisoning attacks explicitly inject the insecure code payload into the training data, making the poisoning data detectable by static analysis tools that can remove such malicious data from the training set. In this work, we demonstrate two novel data poisoning attacks, COVERT and TROJANPUZZLE, that can bypass static analysis by planting malicious poisoning data in out-of-context regions such as docstrings. Our most novel attack, TROJANPUZZLE, goes one step further in generating less suspicious poisoning data by never including certain suspicious parts of the payload in the poisoned data, while still inducing a model that suggests the entire payload when completing code (i.e., outside docstrings). This makes TROJANPUZZLE robust against signature-based dataset-cleansing methods that identify and filter out suspicious sequences from the training data. Our evaluation against two model sizes demonstrates that both COVERT and TROJANPUZZLE have significant implications for how practitioners should select code used to train or tune code-suggestion models.
Estimate Deformation Capacity of Non-Ductile RC Shear Walls using Explainable Boosting Machine ; Machine learning is becoming increasingly prevalent for tackling challenges in earthquake engineering and providing fairly reliable and accurate predictions. However, it is mostly unclear how decisions are made, because machine learning models are generally highly sophisticated, resulting in opaque black-box models. Machine learning models that are naturally interpretable and provide their own decision explanation, rather than relying on a separate explanatory method, are more accurate in determining what the model actually computes. With this motivation, this study aims to develop a fully explainable machine learning model to predict the deformation capacity of non-ductile reinforced concrete shear walls based on experimental data collected worldwide. The proposed Explainable Boosting Machines (EBM)-based model is an interpretable, robust, naturally explainable glass-box model, yet provides high accuracy comparable to its black-box counterparts. The model enables the user to observe the relationship between the wall properties and the deformation capacity by quantifying the individual contribution of each wall property as well as the correlations among them. The mean coefficient of determination (R2) and the mean ratio of predicted to actual value based on the test dataset are 0.92 and 1.05, respectively. The proposed predictive model stands out with its overall consistency with scientific knowledge, practicality, and interpretability, without sacrificing high accuracy.
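As an illustration of how such a glass-box model can be fit in practice, the sketch below uses the open-source interpret package; the file name, feature columns, and hyperparameters are hypothetical stand-ins for the paper's experimental database.

```python
# Minimal EBM sketch (hypothetical data and feature names, not the paper's database).
# Requires the `interpret` package (pip install interpret).
import pandas as pd
from interpret.glassbox import ExplainableBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical tabular data: one row per wall test, target = deformation capacity.
df = pd.read_csv("shear_wall_tests.csv")          # assumed file name
features = ["aspect_ratio", "axial_load_ratio",   # assumed column names
            "wall_thickness", "concrete_strength",
            "long_reinf_ratio", "trans_reinf_ratio"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["deformation_capacity"], test_size=0.2, random_state=0)

# interactions=2 lets the EBM also learn a few pairwise terms while staying glass-box.
ebm = ExplainableBoostingRegressor(interactions=2, random_state=0)
ebm.fit(X_train, y_train)

print("R2 on held-out tests:", r2_score(y_test, ebm.predict(X_test)))
# Each feature's learned shape function (its individual contribution) can be inspected:
explanation = ebm.explain_global()
```

The per-feature shape functions returned by `explain_global()` are what allow the relationship between each wall property and the deformation capacity to be read off directly, in contrast to post-hoc explainers applied to black-box models.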
Discrete parametric graphical models with Dirichlet-type priors ; Typically, statistical graphical models are either continuous and parametric (Gaussian, parameterized by the graph-dependent precision matrix) or discrete and nonparametric (with graph-dependent probabilities of cells). Sometimes the two types are mixed. We propose a way to break this dichotomy by introducing two discrete parametric graphical models on finite decomposable graphs: the graph negative multinomial and the graph multinomial distributions. These models interpolate between the product of univariate negative multinomial and negative multinomial distributions, and between the product of binomial and multinomial distributions, respectively. We derive their Markov decomposition and present probabilistic models leading to both. Additionally, we introduce graphical versions of the Dirichlet distribution and inverted Dirichlet distribution, which serve as conjugate priors for the two discrete graphical Markov models. We derive explicit normalizing constants for both graphical Dirichlet laws and demonstrate that their independence structure (a graphical version of neutrality) yields a strong hyper Markov property for both Bayesian models. We also provide characterization theorems for the generalized Dirichlet distributions via the strong hyper Markov property. Finally, we develop a Bayesian model selection procedure for the graphical negative multinomial model with respective Dirichlet-type priors.
Unifying Molecular and Textual Representations via Multi-task Language Modelling ; The recent advances in neural language models have also been successfully applied to the field of chemistry, offering generative solutions for classical problems in molecular design and synthesis planning. These new methods have the potential to fuel a new era of data-driven automation in scientific discovery. However, specialized models are still typically required for each task, leading to the need for problem-specific fine-tuning and neglecting task interrelations. The main obstacle in this field is the lack of a unified representation between natural language and chemical representations, complicating and limiting human-machine interaction. Here, we propose the first multi-domain, multi-task language model that can solve a wide range of tasks in both the chemical and natural language domains. Our model can handle chemical and natural language concurrently, without requiring expensive pretraining on single domains or task-specific models. Interestingly, sharing weights across domains remarkably improves our model when benchmarked against state-of-the-art baselines on single-domain and cross-domain tasks. In particular, sharing information across domains and tasks gives rise to large improvements in cross-domain tasks, the magnitude of which increases with scale, as measured by more than a dozen relevant metrics. Our work suggests that such models can robustly and efficiently accelerate discovery in physical sciences by superseding problem-specific fine-tuning and enhancing human-model interactions.
A modern-day Mars climate in the Met Office Unified Model: dry simulations ; We present results from the Met Office Unified Model (UM), a world-leading climate and weather model, adapted to simulate a dry Martian climate. We detail the adaptation of the basic parameterisations and analyse results from two simulations, one with radiatively active mineral dust and one with radiatively inactive dust. These simulations demonstrate how the radiative effects of dust act to accelerate the winds and create a mid-altitude isothermal layer during the dusty season. We validate our model through comparison with an established Mars model, the Laboratoire de Météorologie Dynamique planetary climate model (PCM), finding good agreement in the seasonal wind and temperature profiles but with discrepancies in the predicted dust mass mixing ratio and conditions at the poles. This study validates the use of the UM for a Martian atmosphere, highlighting how the adaptation of an Earth general circulation model (GCM) can be beneficial for existing Mars GCMs, and provides insight into the next steps in our development of a new Mars climate model.
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks ; Deep learning models achieve excellent performance in numerous machine learning tasks. Yet, they suffer from security-related issues such as adversarial examples and poisoning (backdoor) attacks. A deep learning model may be poisoned by training with backdoored data or by modifying inner network parameters. Then, a backdoored model performs as expected when receiving a clean input, but it misclassifies when receiving a backdoored input stamped with a pre-designed pattern called a trigger. Unfortunately, it is difficult to distinguish between clean and backdoored models without prior knowledge of the trigger. This paper proposes a backdoor detection method by utilizing a special type of adversarial attack, the universal adversarial perturbation (UAP), and its similarities with a backdoor trigger. We observe an intuitive phenomenon: UAPs generated from backdoored models need fewer perturbations to mislead the model than UAPs from clean models. UAPs of backdoored models tend to exploit the shortcut from all classes to the target class, built by the backdoor trigger. We propose a novel method called Universal Soldier for Backdoor detection (USB) and reverse-engineer potential backdoor triggers via UAPs. Experiments on 345 models trained on several datasets show that USB effectively detects the injected backdoor and provides comparable or better results than state-of-the-art methods.
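The core intuition can be sketched as follows: craft a single universal perturbation against a model and treat its magnitude (for a given fooling effect) as a backdoor signal. This is an illustrative simplification, not the authors' USB implementation; the optimization loop, hyperparameters, and scoring rule are assumptions.

```python
# Illustrative UAP-based backdoor scoring sketch (assumed hyperparameters and loop).
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=0.05, epochs=5, lr=0.01, device="cpu"):
    """Optimize a single perturbation that misleads the model across a whole loader."""
    model.eval()
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], device=device).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            with torch.no_grad():
                clean_pred = model(x).argmax(dim=1)      # the model's own clean labels
            # Untargeted UAP: push predictions away from the clean labels.
            loss = -F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), clean_pred)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                  # stay inside a small L_inf ball
    return delta.detach()

def suspicion_score(model, loader, device="cpu"):
    # Intuition from the paper: a backdoored model is fooled by a *smaller* universal
    # perturbation, so a lower perturbation norm is treated as more suspicious.
    return universal_perturbation(model, loader, device=device).norm().item()
```

Comparing `suspicion_score` across a set of candidate models (or against a threshold calibrated on known-clean models) then plays the role of the detection decision.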
Exploiting Extensive-Form Structure in Empirical Game-Theoretic Analysis ; Empirical game-theoretic analysis (EGTA) is a general framework for reasoning about complex games using agent-based simulation. Data from simulating select strategy profiles is employed to estimate a cogent and tractable game model approximating the underlying game. To date, EGTA methodology has focused on game models in normal form; though the simulations play out in sequential observations and decisions over time, the game model abstracts away this temporal structure. Richer models of extensive-form games (EFGs) provide a means to capture temporal patterns in action and information, using tree representations. We propose tree-exploiting EGTA (TE-EGTA), an approach to incorporate EFG models into EGTA. TE-EGTA constructs game models that express observations and temporal organization of activity, albeit at a coarser grain than the underlying agent-based simulation model. The idea is to exploit key structure while maintaining tractability. We establish theoretically and experimentally that exploiting even a little temporal structure can vastly reduce estimation error in strategy-profile payoffs compared to the normal-form model. Further, we explore the implications of EFG models for iterative approaches to EGTA, where strategy spaces are extended incrementally. Our experiments on several game instances demonstrate that TE-EGTA can also improve performance in the iterative setting, as measured by the quality of equilibrium approximation as the strategy spaces are expanded.
Model theory of probability spaces ; This expository paper treats the model theory of probability spaces using the framework of continuous [0,1]-valued first-order logic. The metric structures discussed, which we call probability algebras, are obtained from probability spaces by identifying two measurable sets if they differ by a set of measure zero. The class of probability algebras is axiomatizable in continuous first-order logic; we denote its theory by Pr. We show that the existentially closed structures in this class are exactly the ones in which the underlying probability space is atomless. This subclass is also axiomatizable; its theory APA is the model companion of Pr. We show that APA is separably categorical (hence complete), has quantifier elimination, is ω-stable, and has built-in canonical bases, and we give a natural characterization of its independence relation. For general probability algebras, we prove that the set of atoms (enlarged by adding 0) is a definable set, uniformly in models of Pr. We use this fact as a basis for giving a complete treatment of the model theory of arbitrary probability spaces. The core of this paper is an extensive presentation of the main model theoretic properties of APA. We discuss Maharam's structure theorem for probability algebras, and indicate the close connections between the ideas behind it and model theory. We show how probabilistic entropy provides a rank connected to model theoretic forking in probability algebras. In the final section we mention some open problems.
Towards inferring network properties from epidemic data ; Epidemic propagation on networks represents an important departure from traditional mass-action models. However, the high dimensionality of the exact models poses a challenge to both mathematical analysis and parameter inference. By using mean-field models, such as the pairwise model (PWM), the complexity becomes tractable. While such models have been used extensively for model analysis, there is limited work in the context of statistical inference. In this paper, we explore the extent to which the PWM with the susceptible-infected-recovered (SIR) epidemic can be used to infer disease- and network-related parameters. The widely-used MLE approach exhibits several issues pertaining to parameter unidentifiability and a lack of robustness to exact knowledge about key quantities such as population size and/or the proportion of under-reporting. As an alternative, we considered the recently developed dynamical survival analysis (DSA). For scenarios in which there is no model mismatch, such as when data are generated via simulations, both methods perform well despite strong dependence between parameters. However, for real-world data, such as foot-and-mouth, H1N1 and COVID-19, the DSA method appears more robust to potential model mismatch and the parameter estimates appear more epidemiologically plausible. Taken together, however, our findings suggest that network-based mean-field models can be used to formulate approximate likelihoods which, coupled with an efficient inference scheme, make it possible to not only learn about the parameters of the disease dynamics but also those of the underlying network.
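For readers unfamiliar with the pairwise model, a sketch of one standard closed pairwise SIR formulation from the network-epidemics literature is given below (the paper's notation, closure, and parameterization may differ slightly).

```python
# Closed pairwise SIR model sketch (one standard closure; illustrative, not the paper's code).
# State: node counts [S], [I] and ordered pair counts [SS], [SI] on a network with
# per-link transmission rate tau, recovery rate gamma, and mean degree n. Requires scipy.
import numpy as np
from scipy.integrate import solve_ivp

def pairwise_sir(t, y, tau, gamma, n):
    S, I, SS, SI = y
    # Triple closure: [ASB] ~ ((n-1)/n) [AS][SB] / [S]
    SSI = (n - 1) / n * SS * SI / max(S, 1e-9)
    ISI = (n - 1) / n * SI * SI / max(S, 1e-9)
    dS = -tau * SI
    dI = tau * SI - gamma * I
    dSS = -2 * tau * SSI
    dSI = tau * (SSI - ISI) - (tau + gamma) * SI
    return [dS, dI, dSS, dSI]

# Example: N nodes with mean degree n and a handful of initial infections.
N, n, tau, gamma, I0 = 10_000, 6, 0.3, 0.5, 10
S0 = N - I0
y0 = [S0, I0, n * S0 ** 2 / N, n * S0 * I0 / N]
sol = solve_ivp(pairwise_sir, (0, 60), y0, args=(tau, gamma, n), dense_output=True)
```

In an inference setting, the solution of this low-dimensional ODE system is what is matched to observed incidence or survival data, which is why the closure makes likelihood-based approaches such as MLE or DSA computationally feasible.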
Examination of Nonlinear Longitudinal Processes with Latent Variables, Latent Processes, Latent Changes, and Latent Classes in the Structural Equation Modeling Framework: The R package nlpsem ; We present the R package nlpsem, which provides a comprehensive set of functions to assess longitudinal processes with individual measurement occasions within the structural equation modeling (SEM) framework. This package focuses on providing computational tools for nonlinear longitudinal models, particularly intrinsically nonlinear models, across four distinct scenarios: (1) univariate longitudinal processes captured by latent variables, with or without covariates, including time-invariant covariates (TICs) and time-varying covariates (TVCs); (2) multivariate longitudinal processes for evaluating correlations or causations between longitudinal variables; (3) multiple-group frameworks for models in scenarios 1 and 2, enabling the examination of differences between manifested classes; and (4) mixture models for scenarios 1 and 2, assuming that trajectories originate from heterogeneous latent classes. By interfacing with the R package OpenMx, nlpsem enables flexible specification of structural equation models and generates maximum likelihood estimators using the full information maximum likelihood technique. The package includes an algorithm to obtain initial values from raw data, thereby facilitating computation and enhancing the likelihood of model convergence. Additionally, nlpsem provides functions for goodness-of-fit analyses, clustering analyses, plots, and predicted trajectories. This paper constitutes a companion to the package, detailing each model scenario, the estimation technique, implementation details, output interpretation, and showcasing examples through a dataset on intelligence development.
Antithesis of Object Orientation: Occurrence-Only Modeling Applied in Engineering and Medicine ; This paper has a dual character, combining a philosophical (ontological) exploration with a conceptual modeling approach in systems and software engineering. Such duality is already practiced in software engineering, in which the current dominant modeling thesis is object orientation. This work embraces an antithesis that centers solely on the process rather than emphasizing the object. The approach is called occurrence-only modeling, in which an occurrence means an event or process, where a process is defined as an orchestrated net of events that form a semantical whole. In contrast to object orientation, in this occurrence-only modeling objects are nothing more than long events. We apply this paradigm to (1) a UML/BPMN inventory system in simulation engineering and (2) an event-based system that represents medical occurrences that occur on a timeline. The aim of such a venture is to enhance the field of conceptual modeling by adding yet another alternative methodology and clarifying differences among approaches. Conceptual modeling's importance has been recognized in many research areas. An active research community in simulation engineering demonstrates the growing interest in conceptual modeling. In the clinical domains, temporal information elucidates the occurrence of medical events (e.g., visits, laboratory tests). These applications give an opportunity to propose a new approach that includes (a) a Stoic ontology that has two types of being, existence and subsistence; (b) Thinging machines that limit activities to five generic actions; and (c) Lupascian logic, which handles negative events. With such a study, we aim to substantiate the assertion that the occurrence-only approach is a genuine philosophical base for conceptual modeling. The results in this paper seem to support such a claim.
Channel Sparsity Variation and Model-Based Analysis on 6, 26, and 132 GHz Measurements ; In this paper, the level of sparsity is examined at 6, 26, and 132 GHz carrier frequencies by conducting channel measurements in an indoor office environment. By using the Gini index (a value between 0 and 1) as a metric for characterizing sparsity, we show that increasing the carrier frequency leads to increased levels of sparsity. The measured channel impulse responses are used to derive a Third-Generation Partnership Project (3GPP)-style propagation model, used to calculate the Gini index for the comparison of the channel sparsity between the measurement and simulation based on the 3GPP model. Our results show that the mean value of the Gini index in measurement is over twice the value in simulation, implying that the 3GPP channel model does not capture the effects of sparsity in the delay domain as frequency increases. In addition, a new intra-cluster power allocation model based on measurements is proposed to characterize the effects of sparsity in the delay domain of the 3GPP channel model. The accuracy of the proposed model is analyzed using theoretical derivations and simulations. Using the derived intra-cluster power allocation model, the mean value of the Gini index is 0.97, while the spread of variability is restricted to 0.01, demonstrating that the proposed model is suitable for 3GPP-type channels. To the best of our knowledge, this paper is the first to perform measurements and analysis at three different frequencies for the evaluation of channel sparsity in the same environment.
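As a reference for the sparsity metric, the sketch below implements the Gini index in the Hurley-Rickard form commonly used for sparsity measures and applies it to a toy power-delay profile; the measured channel impulse responses themselves are of course not reproduced here, and the exact normalization used in the paper may differ.

```python
# Gini-index sparsity metric applied to a power-delay profile (illustrative toy data).
import numpy as np

def gini_index(tap_powers):
    """Gini index in [0, 1]: 0 for a uniform profile, approaching 1 for one dominant tap."""
    c = np.sort(np.abs(np.asarray(tap_powers, dtype=float)))   # sort ascending
    n = c.size
    if c.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

# Example: a sparse profile scores much higher than a uniform one.
print(gini_index([1, 0, 0, 0, 0, 0, 0, 0]))   # 0.875 with 8 taps; tends to 1 as taps grow
print(gini_index(np.ones(8)))                  # 0.0
```

Averaging this index over many measured (or 3GPP-simulated) channel impulse responses is the comparison reported in the abstract.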
Extended Excess Hazard Models for Spatially Dependent Survival Data ; Relative survival represents the preferred framework for the analysis of population cancer survival data. The aim is to model the survival probability associated with cancer in the absence of information about the cause of death. Recent data linkage developments have allowed for incorporating the place of residence or the place where patients receive treatment into the population cancer databases; however, modeling this spatial information has received little attention in the relative survival setting. We propose a flexible parametric class of spatial excess hazard models along with inference tools, named "Relative Survival Spatial General Hazard" (RS-SGH), that allows for the inclusion of fixed and spatial effects in both time-level and hazard-level components. We illustrate the performance of the proposed model using an extensive simulation study, and provide guidelines about the interplay of sample size, censoring, and model misspecification. We present two case studies, using real data from colon cancer patients in England, aiming at answering epidemiological questions that require the use of a spatial model. These case studies illustrate how a spatial model can be used to identify geographical areas with low cancer survival, as well as how to summarize such a model through marginal survival quantities and spatial effects.
An evaluation of deep learning models for predicting water depth evolution in urban floods ; In this technical report we compare different deep learning models for the prediction of water depth rasters at high spatial resolution. Efficient, accurate, and fast methods for water depth prediction are nowadays important, as urban floods are increasing due to higher rainfall intensity caused by climate change, expansion of cities, and changes in land use. While hydrodynamic models can provide reliable forecasts by simulating water depth at every location of a catchment, they also have a high computational burden which jeopardizes their application to real-time prediction in large urban areas at high spatial resolution. Here, we propose to address this issue by using data-driven techniques. Specifically, we evaluate deep learning models which are trained to reproduce the data simulated by the CADDIES cellular-automata flood model, providing flood forecasts for different future time horizons. The advantage of using such models is that they can learn the underlying physical phenomena a priori, preventing manual parameter setting and computational burden. We perform experiments on a dataset consisting of two catchment areas within Switzerland with 18 simpler, short rainfall patterns and 4 long, more complex ones. Our results show that the deep learning models in general present lower errors compared to the other methods, especially for water depths above 0.5 m. However, when testing on more complex rainfall events or unseen catchment areas, the deep models do not show benefits over the simpler ones.
ML-driven Hardware Cost Model for MLIR ; During early optimization passes, compilers must make predictions for machine-dependent characteristics such as execution unit utilization, number of register spills, latency, throughput, etc. to generate better code. Often a hand-written static analytical hardware cost model is built into the compiler. However, the need for more sophisticated and varied predictions has become more pronounced with the development of deep learning compilers which need to optimize dataflow graphs. Such compilers usually employ a much higher-level MLIR form as an IR representation before lowering to traditional LLVM-IR. A static analytical cost model in such a scenario is cumbersome and error-prone as the opcodes represent very high-level algebraic/arithmetic operations. Hence, we develop a machine learning-based cost model for high-level MLIR which can predict different target variables of interest such as CPU/GPU/xPU utilization, instructions executed, register usage, etc. By considering the incoming MLIR as a text input à la NLP models, we can apply well-known techniques from modern NLP research to help predict hardware characteristics more accurately. We expect such precise ML-driven hardware cost models to guide our deep learning compiler in graph-level optimizations around operator fusion, local memory allocation, kernel scheduling, etc., as well as in many kernel-level optimizations such as loop interchange, LICM, and unroll. We report early work-in-progress results of developing such models on high-level MLIR representing dataflow graphs emitted by PyTorch/TensorFlow-like frameworks as well as lower-level dialects like affine. We show that these models can provide reasonably good estimates with low error bounds for various hardware characteristics of interest and can be a go-to mechanism for hardware cost modelling in the future.
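To illustrate the input/output contract of such a cost model, the sketch below treats MLIR snippets as plain text and regresses a single hardware characteristic from them. It is a drastically simplified stand-in (TF-IDF bag-of-tokens plus gradient boosting) for the modern NLP models the report describes, with made-up snippets and target values.

```python
# Simplified stand-in for an ML-driven MLIR cost model: MLIR text in, metric out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: MLIR snippets and a measured target (e.g., instructions executed).
mlir_snippets = [
    '%0 = "tosa.matmul"(%a, %b) : (tensor<8x16xf32>, tensor<16x4xf32>) -> tensor<8x4xf32>',
    'affine.for %i = 0 to 128 { affine.for %j = 0 to 128 { ... } }',
]
measured_instructions = [5_200.0, 131_000.0]     # illustrative numbers only

cost_model = make_pipeline(
    # Token pattern keeps SSA values (%0), symbols (@f), dialect.op names, and literals.
    TfidfVectorizer(token_pattern=r"[%@a-zA-Z_][\w.]*|\d+", ngram_range=(1, 2)),
    GradientBoostingRegressor(random_state=0),
)
cost_model.fit(mlir_snippets, measured_instructions)
print(cost_model.predict(['%1 = "tosa.conv2d"(%x, %w, %bias) : ...']))
```

A production version would replace the bag-of-tokens featurizer with a learned sequence encoder, but the interface, text-form IR mapped to predicted hardware metrics, is the same.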
Steganography of Steganographic Networks ; Steganography is a technique for covert communication between two parties. With the rapid development of deep neural networks (DNN), more and more steganographic networks have been proposed recently, which are shown to be promising in achieving good performance. Unlike traditional handcrafted steganographic tools, a steganographic network is relatively large in size. This raises concerns about how to covertly transmit the steganographic network in public channels, which is a crucial stage in the pipeline of steganography in real-world applications. To address such an issue, we propose a novel scheme for steganography of steganographic networks in this paper. Unlike the existing steganographic schemes, which focus on the subtle modification of the cover data to accommodate the secrets, we propose to disguise a steganographic network (termed the secret DNN model) as a stego DNN model which performs an ordinary machine learning task (termed the stego task). During the model disguising, we select and tune a subset of filters in the secret DNN model to preserve its function on the secret task, while the remaining filters are reactivated according to a partial optimization strategy to disguise the whole secret DNN model as a stego DNN model. The secret DNN model can be recovered from the stego DNN model when needed. Various experiments have been conducted to demonstrate the advantage of our proposed method for covert communication of steganographic networks as well as general DNN models.
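A conceptual sketch of the filter-partitioning step is given below: filters deemed important for the secret task are frozen, while the remaining filters stay trainable so they can be re-purposed for the stego task. The importance criterion (L1 norm), keep ratio, and gradient-masking mechanism are assumptions for illustration, not the authors' partial optimization strategy.

```python
# Conceptual sketch only: partition conv filters into a frozen secret-task subset and a
# trainable remainder that is fine-tuned on an ordinary stego task.
import torch
import torch.nn as nn

def partition_filters(conv: nn.Conv2d, keep_ratio: float = 0.5):
    """Return a boolean mask of filters to freeze, chosen by L1 importance (an assumption)."""
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per output filter
    k = int(keep_ratio * conv.out_channels)
    keep = torch.zeros(conv.out_channels, dtype=torch.bool)
    keep[importance.topk(k).indices] = True
    return keep

def freeze_secret_filters(model: nn.Module, keep_ratio: float = 0.5):
    """Register gradient hooks so frozen (secret-task) filters are never updated
    while the remaining filters are fine-tuned on the stego task."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            keep = partition_filters(module, keep_ratio)
            mask = (~keep).float().view(-1, 1, 1, 1)             # 1 = trainable filter
            module.weight.register_hook(lambda g, m=mask: g * m.to(g.device))
    # After this call, train `model` on the stego task with a normal optimizer; the kept
    # filters stay intact so the secret-task behaviour can later be recovered.
```

The recovery step would then re-select the frozen subset (using the same criterion shared with the receiver) to restore the secret model's function.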
Texture-Based Input Feature Selection for Action Recognition ; The performance of video action recognition has been significantly boosted by using motion representations within a two-stream Convolutional Neural Network (CNN) architecture. However, there are a few challenging problems in action recognition in real scenarios, e.g., the variations in viewpoints and poses, and the changes in backgrounds. The domain discrepancy between the training data and the test data causes the performance drop. To improve model robustness, we propose a novel method to determine the task-irrelevant content in inputs which increases the domain discrepancy. The method is based on a human parsing (HP) model which jointly conducts dense correspondence labelling and semantic part segmentation. The predictions from the HP model are also used to re-render the human regions in each video with the same set of textures, so that human appearances in all classes are the same. A revised dataset is generated for training and testing, and makes the action recognition model exhibit invariance to the irrelevant content in the inputs. Moreover, the predictions from the HP model are used to enrich the inputs to the action recognition (AR) model during both training and testing. Experimental results show that our proposed model is superior to existing models for action recognition on the HMDB51 dataset and the Penn Action dataset.
Demystifying What Code Summarization Models Learned ; Studying the patterns that models have learned has long been a focus of pattern recognition research. Explaining what patterns are discovered from training data, and how patterns are generalized to unseen data, is instrumental to understanding and advancing pattern recognition methods. Unfortunately, the vast majority of application domains deal with continuous data (i.e., statistical in nature) from which extracted patterns cannot be formally defined. For example, in image classification, there does not exist a principled definition for a label of cat or dog. Even in natural language, the meaning of a word can vary with the context it is surrounded by. Unlike the aforementioned data formats, programs are a unique data structure with a well-defined syntax and semantics, which creates a golden opportunity to formalize what models have learned from source code. This paper presents the first formal definition of the patterns discovered by code summarization models (i.e., models that predict the name of a method given its body), and gives a sound algorithm to infer a context-free grammar (CFG) that formally describes the learned patterns. We realize our approach in PATIC, which produces CFGs for summarizing the patterns discovered by code summarization models. In particular, we pick two prominent instances, code2vec and code2seq, to evaluate PATIC. PATIC shows that the patterns extracted by each model are heavily restricted to local and syntactic code structures with little to no semantic implication. Based on these findings, we present two example uses of the formal definition of patterns: a new method for evaluating robustness and a new technique for improving the accuracy of code summarization models. Our work opens up this exciting, new direction of studying what models have learned from source code.