Quantum trimer models and topological SU(3) spin liquids on the kagome lattice ; We construct and study quantum trimer models and resonating SU(3)-singlet models on the kagome lattice, which generalize quantum dimer models and Resonating Valence Bond wavefunctions to a trimer and SU(3) setting. We demonstrate that these models carry a Z_3 symmetry which originates in the structure of trimers and the SU(3) representation theory, and which becomes the only symmetry under renormalization. Based on this, we construct simple and exact parent Hamiltonians for the model which exhibit a topological 9-fold degenerate ground space. A combination of analytical reasoning and numerical analysis reveals that the quantum order ultimately displayed by the model depends on the relative weight assigned to different types of trimers: it can display either Z_3 topological order or form a symmetry-broken trimer crystal, and in addition possesses a point with an enhanced U(1) symmetry and critical behavior. Our results accordingly hold for the SU(3) model, where the two natural choices for trimer weights give rise to either a topological spin liquid or a system with symmetry-broken order, respectively. Our work thus demonstrates the suitability of resonating trimer and SU(3)-singlet ansatzes to model SU(3) topological spin liquids on the kagome lattice.
Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning ; Model-based reinforcement learning algorithms with probabilistic dynamical models are among the most data-efficient learning methods. This is often attributed to their ability to distinguish between epistemic and aleatoric uncertainty. However, while most algorithms distinguish these two uncertainties for learning the model, they ignore the distinction when optimizing the policy, which leads to greedy and insufficient exploration. At the same time, there are no practical solvers for optimistic exploration algorithms. In this paper, we propose a practical optimistic exploration algorithm, H-UCRL. H-UCRL reparameterizes the set of plausible models and hallucinates control directly on the epistemic uncertainty. By augmenting the input space with the hallucinated inputs, H-UCRL can be solved using standard greedy planners. Furthermore, we analyze H-UCRL and construct a general regret bound for well-calibrated models, which is provably sublinear in the case of Gaussian process models. Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models. Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions, a setting that is notoriously difficult for existing model-based reinforcement learning algorithms.
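The hallucinated-control idea lends itself to a compact illustration. The sketch below is a minimal rendering of our own, not the paper's code: it assumes given epistemic mean/standard-deviation models `mu` and `sigma`, and treats the hallucination variable `eta` as an extra input that a standard greedy planner can optimize jointly with the action.

```python
import numpy as np

def hallucinated_step(mu, sigma, state, action, eta, beta=1.0):
    # Optimistic next state: eta in [-1, 1]^d acts as a hallucinated control
    # that selects a plausible model inside the beta-scaled confidence set.
    return mu(state, action) + beta * sigma(state, action) * eta

# Toy usage with stand-in dynamics models (assumptions for illustration only).
mu = lambda s, a: s + 0.1 * a
sigma = lambda s, a: 0.05 * np.ones_like(s)
state, action = np.zeros(2), np.ones(2)
eta = np.array([1.0, -1.0])  # optimized by the planner like a regular input
print(hallucinated_step(mu, sigma, state, action, eta))
```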
Differentiable Segmentation of Sequences ; Segmented models are widely used to describe non-stationary sequential data with discrete change points. Their estimation usually requires solving a mixed discrete-continuous optimization problem, where the segmentation is the discrete part and all other model parameters are continuous. A number of estimation algorithms have been developed that are highly specialized for their specific model assumptions. The dependence on non-standard algorithms makes it hard to integrate segmented models in state-of-the-art deep learning architectures that critically depend on gradient-based optimization techniques. In this work, we formulate a relaxed variant of segmented models that enables joint estimation of all model parameters, including the segmentation, with gradient descent. We build on recent advances in learning continuous warping functions and propose a novel family of warping functions based on the two-sided power (TSP) distribution. TSP-based warping functions are differentiable, have simple closed-form expressions, and can represent segmentation functions exactly. Our formulation includes the important class of segmented generalized linear models as a special case, which makes it highly versatile. We use our approach to model the spread of COVID-19 with Poisson regression, apply it on a change point detection task, and learn classification models with concept drift. The experiments show that our approach effectively learns all these tasks with standard algorithms for gradient descent.
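To make the TSP-based warping concrete, here is a minimal sketch built from the CDF of the standard two-sided power distribution on [0, 1] (mode `m`, power `n`); the paper's exact parameterization may differ, and the learnable-parameter setup here is our assumption.

```python
import torch

def tsp_cdf(x, m, n):
    # CDF of the two-sided power (TSP) distribution on [0, 1]:
    # differentiable in x, m, and n; as n grows it approaches a step at m,
    # so compositions of such CDFs can represent sharp segment boundaries.
    left = m * (x / m) ** n
    right = 1.0 - (1.0 - m) * ((1.0 - x) / (1.0 - m)) ** n
    return torch.where(x <= m, left, right)

x = torch.linspace(0.01, 0.99, 5)
print(tsp_cdf(x, m=torch.tensor(0.4), n=torch.tensor(8.0)))
```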
Time Series Analysis and Forecasting of COVID-19 Cases Using LSTM and ARIMA Models ; Coronavirus disease 2019 (COVID-19) is a global public health crisis that has been declared a pandemic by the World Health Organization. Forecasting country-wise COVID-19 cases is necessary to help policymakers and healthcare providers prepare for the future. This study explores the performance of several Long Short-Term Memory (LSTM) models and the Auto-Regressive Integrated Moving Average (ARIMA) model in forecasting the number of confirmed COVID-19 cases. Time series of daily cumulative COVID-19 cases were used for generating 1-day, 3-day, and 5-day forecasts using several LSTM models and ARIMA. Two novel k-period performance metrics, the k-day Mean Absolute Percentage Error (kMAPE) and the k-day Median Symmetric Accuracy (kMdSA), were developed for evaluating the performance of the models in forecasting time series values for multiple days. Errors in prediction using kMAPE and kMdSA for LSTM models were both as low as 0.05, while those for ARIMA were 0.07 and 0.06, respectively. LSTM models slightly underestimated, while ARIMA slightly overestimated, the numbers in the forecasts. The performance of LSTM models is comparable to ARIMA in forecasting COVID-19 cases. While ARIMA requires longer sequences, LSTMs can perform reasonably well with sequence sizes as small as 3. However, LSTMs require a large number of training samples. Further, the proposed k-period performance metrics are likely to be useful for performance evaluation of time series models in predicting multiple periods. Based on the proposed k-period performance metrics, both LSTMs and ARIMA are useful for time series analysis and forecasting for COVID-19.
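The k-period metrics are not spelled out in the abstract; the sketch below shows one plausible reading, assuming the per-day building blocks are the usual MAPE and the median symmetric accuracy of Morley et al. (2018), averaged over a k-day horizon. The function names and the aggregation rule are our assumptions.

```python
import numpy as np

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((np.asarray(y_pred) - np.asarray(y_true)) / np.asarray(y_true)))

def mdsa(y_true, y_pred):
    # Median symmetric accuracy: a robust, symmetric percentage-style error.
    q = np.log(np.asarray(y_pred, float) / np.asarray(y_true, float))
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def k_metric(metric, y_true_by_day, y_pred_by_day, k):
    # Hypothetical k-period aggregation: average the metric over the horizon.
    return np.mean([metric(y_true_by_day[d], y_pred_by_day[d]) for d in range(k)])
```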
A Distributed Model Predictive Wind Farm Controller for Active Power Control ; Due to the fluctuating nature of the wind and the increasing use of wind energy as a power source, wind power will have an increasing negative influence on the stability of the power grid. In this paper, a model predictive control strategy is introduced that not only stabilizes the power produced by wind farms, but also creates the possibility to perform power reference tracking with wind farms. With power reference tracking, it is possible for grid operators to adapt the power production to a change in the power demand and to counteract fluctuations that are introduced by other power generators. In this way, wind farms can actually contribute to the stabilization of the power grid when necessary, instead of negatively influencing it. A low-fidelity control-oriented wind farm model is developed and employed in the proposed distributed model predictive controller. In this control model, the wake dynamics are taken into account; consequently, the model's order is relatively large. This makes it challenging, from a computational point of view, for a centralized model predictive controller to provide real-time control for large wind farms. Therefore, the controller proposed in this paper is a distributed model predictive controller. Here, the central control problem is divided into smaller local control problems that are solved in parallel on local controllers, which significantly reduces the computational complexity and brings the application of model predictive control in a wind farm a step closer to practical implementation. The proposed control solution is tested in simulations on a 10-turbine and a 64-turbine wind farm.
Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression-Based Speech Enhancement ; This paper investigates different trade-offs between the number of model parameters and enhanced speech quality by employing several deep tensor-to-vector regression models for speech enhancement. We find that a hybrid architecture, namely CNN-TT, is capable of maintaining good quality performance with a reduced model parameter size. CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality and a tensor-train (TT) output layer on top to reduce model parameters. We first derive a new upper bound on the generalization power of convolutional neural network (CNN) based vector-to-vector regression models. Then, we provide experimental evidence on the Edinburgh noisy speech corpus to demonstrate that, in single-channel speech enhancement, CNN outperforms DNN at the expense of a small increment in model size. Moreover, CNN-TT slightly outperforms its CNN counterpart while utilizing only 32% of the CNN model parameters, and further performance improvement can be attained if the number of CNN-TT parameters is increased to 44% of the CNN model size. Finally, our experiments on multi-channel speech enhancement with a simulated noisy WSJ0 corpus demonstrate that our proposed hybrid CNN-TT architecture achieves better results than both DNN and CNN models in terms of better-enhanced speech quality and smaller parameter sizes.
Regularized Bayesian calibration and scoring of the WD-FAB IRT model improves predictive performance over marginal maximum likelihood ; Item response theory (IRT) is the statistical paradigm underlying a dominant family of generative probabilistic models for test responses, used to quantify traits in individuals relative to target populations. The graded response model (GRM) is a particular IRT model that is used for ordered polytomous test responses. Both the development and the application of the GRM and other IRT models require statistical decisions. For formulating these models (calibration), one needs to decide on methodologies for item selection, inference, and regularization. For applying these models (test scoring), one needs to make similar decisions, often prioritizing computational tractability and/or interpretability. In many applications, such as in the Work Disability Functional Assessment Battery (WD-FAB), tractability implies approximating an individual's score distribution using estimates of mean and variance, and obtaining that score conditional on only point estimates of the calibrated model. In this manuscript, we evaluate the calibration and scoring of models under this common use case using Bayesian cross-validation. Applied to the WD-FAB responses collected for the National Institutes of Health, we assess the predictive power of implementations of the GRM based on their ability to yield, on validation sets of respondents, ability estimates that are most predictive of patterns of item responses. Our main finding indicates that regularized Bayesian calibration of the GRM outperforms the regularization-free empirical Bayesian procedure of marginal maximum likelihood. We also motivate the use of compactly supported priors in test scoring.
Non-anchor-based vehicle detection for traffic surveillance using bounding ellipses ; Cameras for traffic surveillance are usually pole-mounted and produce images that reflect a bird's-eye view. Vehicles in such images, in general, assume an ellipse form. A bounding box for these vehicles usually includes a large empty space when the vehicle orientation is not parallel to the edges of the box. To circumvent this problem, the present study applied bounding ellipses to a non-anchor-based, single-shot detection model (CenterNet). Since this model does not depend on anchor boxes, non-maximum suppression (NMS), which requires computing the intersection over union (IoU) between predicted bounding boxes, is unnecessary for inference. SpotNet, which extends the CenterNet model by adding a segmentation head, was also tested with bounding ellipses. Two other anchor-based, single-shot detection models (YOLOv4 and SSD) were chosen as references for comparison. Model performance was compared based on a local dataset that was doubly annotated with bounding boxes and ellipses. As a result, the performance of the two models with bounding ellipses exceeded that of the reference models with bounding boxes. When the backbone of the ellipse models was pretrained on an open dataset (UA-DETRAC), the performance was further enhanced. The data augmentation schemes developed for YOLOv4 also improved the performance of the proposed models. As a result, the best mAP score of a CenterNet with bounding ellipses exceeds 0.9.
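For intuition, an ellipse head only needs to regress a center, two semi-axes, and an orientation per object; the decoding sketch below (our own illustrative code, not the paper's) turns such a prediction into a polygon, with no anchor boxes and no IoU-based NMS involved.

```python
import numpy as np

def decode_ellipse(center_xy, semi_axes, angle, n_pts=64):
    # Sample the predicted bounding ellipse as a polygon for visualization
    # or mask rasterization; angle is the ellipse orientation in radians.
    cx, cy = center_xy
    a, b = semi_axes
    t = np.linspace(0.0, 2.0 * np.pi, n_pts)
    x = cx + a * np.cos(t) * np.cos(angle) - b * np.sin(t) * np.sin(angle)
    y = cy + a * np.cos(t) * np.sin(angle) + b * np.sin(t) * np.cos(angle)
    return np.stack([x, y], axis=1)

print(decode_ellipse((100.0, 50.0), (40.0, 15.0), np.pi / 6)[:3])
```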
Auto-STGCN: Autonomous Spatial-Temporal Graph Convolutional Network Search Based on Reinforcement Learning and Existing Research Results ; In recent years, many spatial-temporal graph convolutional network (STGCN) models have been proposed to deal with the spatial-temporal network data forecasting problem. These STGCN models have their own advantages, i.e., each of them puts forward many effective operations and achieves good prediction results in real applications. If users can effectively utilize and combine these excellent operations, integrating the advantages of existing models, then they may obtain more effective STGCN models, thus creating greater value from existing work. However, they fail to do so due to a lack of domain knowledge, and there is no automated system to help users achieve this goal. In this paper, we fill this gap and propose the Auto-STGCN algorithm, which makes use of existing models to automatically explore high-performance STGCN models for specific scenarios. Specifically, we design the Unified-STGCN framework, which summarizes the operations of existing architectures and uses parameters to control the usage and characteristic attributes of each operation, so as to realize a parameterized representation of the STGCN architecture and the reorganization and fusion of advantages. We then present Auto-STGCN, an optimization method based on reinforcement learning, to quickly search the parameter space provided by Unified-STGCN and generate optimal STGCN models automatically. Extensive experiments on real-world benchmark datasets show that our Auto-STGCN can find STGCN models superior to existing STGCN models with heuristic parameters, which demonstrates the effectiveness of our proposed method.
Design Ontology Supporting Model-Based Systems-Engineering Formalisms ; Model-based systems engineering (MBSE) provides an important capability for managing the complexities of system development. MBSE empowers the formalisms of system architectures for supporting model-based requirement elicitation, specification, design, development, testing, fielding, etc. However, the modeling languages and techniques are quite heterogeneous, even within the same enterprise system, which creates difficulties for data interoperability. The discrepancies among data structures and language syntaxes make information exchange among MBSE models even more difficult, resulting in considerable information deviations when connecting data flows across the enterprise. For this reason, this paper presents an ontology based upon graphs, objects, points, properties, roles, and relationships with extensions (GOPPRRE), providing meta-models that support the various lifecycle stages of MBSE formalisms. In particular, knowledge-graph models are developed to support unified model representations and to further implement ontological data integration based on GOPPRRE throughout the entire lifecycle. The applicability of the MBSE formalism is verified using quantitative and qualitative approaches. Moreover, the GOPPRRE ontologies are generated from the MBSE language formalisms in a domain-specific modeling tool, MetaGraph, in order to evaluate their availability. The results demonstrate that the proposed ontology supports both formal structures and the descriptive logic of the systems engineering lifecycle.
Deep Submodular Networks for Extractive Data Summarization ; Deep models are increasingly becoming prevalent in summarization problems (e.g., document, video, and image summarization) due to their ability to learn complex feature interactions and representations. However, they do not model characteristics such as diversity, representation, and coverage, which are also very important for summarization tasks. On the other hand, submodular functions naturally model these characteristics because of their diminishing-returns property. Most approaches for modelling and learning submodular functions rely on very simple models, such as weighted mixtures of submodular functions. Unfortunately, these models only learn the relative importance of the different submodular functions (such as diversity, representation, or importance), but cannot learn more complex feature representations, which are often required for state-of-the-art performance. We propose Deep Submodular Networks (DSN), an end-to-end learning framework that facilitates the learning of more complex features and richer functions, crafted for better modelling of all aspects of summarization. The DSN framework can be used to learn features appropriate for summarization from scratch. We demonstrate the utility of DSNs on both generic and query-focused image-collection summarization, and show significant improvement over the state-of-the-art. In particular, we show that DSNs outperform simple mixture models using off-the-shelf features. Secondly, we show that just using four submodular functions in a DSN with end-to-end learning performs comparably to the state-of-the-art mixture model with a hand-crafted set of 594 components, and outperforms other methods for image collection summarization.
Crystal plasticity modeling of non-Schmid yield behavior from Ni3Al single crystals to Ni-based superalloys ; A Crystal Plasticity Finite Element (CPFE) framework is proposed for modeling the non-Schmid yield behavior of L1_2-type Ni3Al crystals and Ni-based superalloys. This framework relies on the estimation of the non-Schmid model parameters directly from orientation- and temperature-dependent experimental yield stress data. The inelastic deformation model for Ni3Al crystals is extended to the precipitate phase of Ni-based superalloys in a homogenized dislocation-density-based crystal plasticity framework. The framework is used to simulate the orientation- and temperature-dependent yield of Ni3Al crystals and a single-crystal Ni-based superalloy, CMSX-4, in the temperature range 260-1304 K. Model predictions of the yield stress are in general agreement with experiments. Model predictions are also made regarding the tension-compression asymmetry and the dominant slip mechanism at yield over the standard stereographic triangle at various temperatures for both these materials. These predictions provide valuable insights regarding the underlying orientation- and temperature-dependent slip mechanisms at yield. In this regard, the non-Schmid model may also serve as a standalone analytical model for predicting the yield stress, the tension-compression asymmetry, and the underlying slip mechanism at yield as a function of orientation and temperature.
Robust Optimization Approaches for Portfolio Selection: A Computational and Comparative Analysis ; The field of portfolio selection is an active research topic, which combines elements and methodologies from various fields, such as optimization, decision analysis, risk management, data science, and forecasting. The modeling and treatment of deep uncertainties in future asset returns is a major issue for the success of analytical portfolio selection models. Recently, robust optimization (RO) models have attracted a lot of interest in this area. RO provides a computationally tractable framework for portfolio optimization based on relatively general assumptions on the probability distributions of the uncertain risk parameters. Thus, RO extends the framework of traditional linear and non-linear models (e.g., the well-known mean-variance model), incorporating uncertainty through a formal and analytical approach into the modeling process. Robust counterparts of existing models can be considered as worst-case reformulations as far as deviations of the uncertain parameters from their nominal values are concerned. Although several RO models have been proposed in the literature, focusing on various risk measures and different types of uncertainty sets about asset returns, analytical empirical assessments of their performance have not been performed in a comprehensive manner. The objective of this study is to fill this gap in the literature. More specifically, we consider different types of RO models based on popular risk measures and conduct an extensive comparative analysis of their performance using data from the US market during the period 2005-2016.
Financial Data Analysis Using an Expert Bayesian Framework for Bankruptcy Prediction ; In recent years, bankruptcy forecasting has gained a lot of attention from researchers as well as practitioners in the field of financial risk management. For bankruptcy prediction, various approaches proposed in the past and currently in practice rely on accounting ratios combined with statistical modeling or machine learning methods. These models have had varying degrees of success. Models such as Linear Discriminant Analysis or Artificial Neural Networks employ discriminative classification techniques and lack explicit provision to include prior expert knowledge. In this paper, we propose another route of generative modeling using an Expert Bayesian framework. The biggest advantage of the proposed framework is the explicit inclusion of expert judgment in the modeling process. The proposed methodology also provides a way to quantify uncertainty in prediction. As a result, a model built using the Bayesian framework is highly flexible, interpretable, and intuitive in nature. The proposed approach is well suited for highly regulated or safety-critical applications such as finance or medical diagnosis. In such cases, accuracy in the prediction is not the only concern for decision makers; decision makers and other stakeholders are also interested in the uncertainty of the prediction as well as the interpretability of the model. We empirically demonstrate these benefits of the proposed framework on a real-world dataset using Stan, a probabilistic programming language. We found that the proposed model is either comparable or superior to the other existing methods, and the resulting model has a much lower false positive rate compared to many existing state-of-the-art methods. The corresponding R code for the experiments is available in a GitHub repository.
A general modelling framework for open wildlife populations based on the Polya tree prior ; Wildlife monitoring for open populations can be performed using a number of different survey methods. Each survey method gives rise to a type of data and, in the last five decades, a large number of associated statistical models have been developed for analysing these data. Although these models have been parameterised and fitted using different approaches, they have all been designed to model the pattern with which individuals enter and exit the population and to estimate the population size. However, existing approaches rely on a predefined model structure and complexity, either by assuming that parameters are specific to sampling occasions or by employing parametric curves. Instead, we propose a novel Bayesian non-parametric framework for modelling entry and exit patterns based on the Polya tree (PT) prior for densities. Our Bayesian non-parametric approach avoids overfitting when inferring entry and exit patterns while simultaneously allowing more flexibility than is possible using parametric curves. We apply our new framework to capture-recapture, count, and ring-recovery data, and we introduce the replicated PT prior for defining classes of models for these data. Additionally, we define the Hierarchical Logistic PT prior for jointly modelling related data and consider the Optional PT prior for modelling long time series of data. We demonstrate our new approach using five different case studies on birds, amphibians, and insects.
User-Dependent Neural Sequence Models for Continuous-Time Event Data ; Continuous-time event data are common in applications such as individual behavior data, financial transactions, and medical health records. Modeling such data can be very challenging, in particular for applications with many different types of events, since it requires a model to predict the event types as well as the time of occurrence. Recurrent neural networks that parameterize time-varying intensity functions are the current state-of-the-art for predictive modeling with such data. These models typically assume that all event sequences come from the same data distribution. However, in many applications event sequences are generated by different sources, or users, whose characteristics can be very different. In this paper, we extend the broad class of neural marked point process models to mixtures of latent embeddings, where each mixture component models the characteristic traits of a given user. Our approach relies on augmenting these models with a latent variable that encodes user characteristics, represented by a mixture model over user behavior that is trained via amortized variational inference. We evaluate our methods on four large real-world datasets and demonstrate systematic improvements from our approach over existing work for a variety of predictive metrics such as log-likelihood, next-event ranking, and source-of-sequence identification.
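A minimal sketch of the user-conditioning idea is given below: a latent user embedding `z` is concatenated with the RNN state before the intensity readout. All layer names and sizes are illustrative assumptions; the paper additionally learns a mixture model over user embeddings with amortized variational inference, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserConditionedIntensity(nn.Module):
    def __init__(self, n_types, hidden=32, z_dim=8):
        super().__init__()
        self.rnn = nn.GRU(n_types + 1, hidden, batch_first=True)
        self.readout = nn.Linear(hidden + z_dim, n_types)

    def forward(self, events, z):
        # events: (batch, seq, n_types + 1) one-hot marks plus inter-event time
        h, _ = self.rnn(events)
        z = z.unsqueeze(1).expand(-1, h.size(1), -1)
        # softplus keeps the per-type intensities positive
        return F.softplus(self.readout(torch.cat([h, z], dim=-1)))

model = UserConditionedIntensity(n_types=5)
out = model(torch.randn(2, 10, 6), torch.randn(2, 8))  # -> (2, 10, 5)
```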
A fast time-stepping strategy for dynamical systems equipped with a surrogate model ; Simulation of complex dynamical systems arising in many applications is computationally challenging due to their size and complexity. Model order reduction, machine learning, and other types of surrogate modeling techniques offer cheaper and simpler ways to describe the dynamics of these systems, but are inexact and introduce additional approximation errors. In order to overcome the computational difficulties of the full complex models, on one hand, and the limitations of surrogate models, on the other, this work proposes a new accelerated time-stepping strategy that combines information from both. This approach is based on the multirate infinitesimal general-structure additive Runge-Kutta (MRI-GARK) framework. The inexpensive surrogate model is integrated with a small time step to guide the solution trajectory, and the full model is treated with a large time step to occasionally correct for the surrogate model error and ensure convergence. We provide a theoretical error analysis, and several numerical experiments, to show that this approach can be significantly more efficient than using only the full or only the surrogate model for the integration.
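In very rough terms, one macro-step of such a strategy might look like the sketch below: cheap surrogate sub-steps carry the trajectory, while the expensive full model supplies an occasional defect correction. This is a drastically simplified stand-in for the MRI-GARK construction in the paper, with forward Euler in place of the actual Runge-Kutta stages.

```python
def macro_step(full_f, surrogate_f, y, H, m):
    # Freeze the full-vs-surrogate defect at the start of the macro-step
    # (one expensive full-model evaluation) and apply it as constant forcing.
    defect = full_f(y) - surrogate_f(y)
    h = H / m
    for _ in range(m):
        y = y + h * (surrogate_f(y) + defect)  # cheap sub-steps
    return y
```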
Skyrme-Hartree-Fock-Bogoliubov mass models on a 3D mesh: Effect of triaxial shape ; The modeling of nuclear reactions and radioactive decays in astrophysical or earth-based conditions requires detailed knowledge of the masses of essentially all nuclei. Microscopic mass models based on nuclear energy density functionals (EDFs) can be descriptive and used to provide this information. The concept of intrinsic symmetry breaking is central to the predictive power of EDF approaches, yet is generally not exploited to the utmost by mass models because of the computational demands of adjusting up to about two dozen parameters to thousands of nuclear masses. We report on a first step to bridge the gap between what is presently feasible for studies of individual nuclei and large-scale models: we present a new Skyrme-EDF-based model that was adjusted using a three-dimensional coordinate-space representation, for the first time allowing for both axial and triaxial deformations during the adjustment process. To compensate for the substantial increase in computational cost brought by the latter, we have employed a committee of multilayer neural networks to model the objective function in parameter space and guide us towards the overall best fit. The resulting mass model, BSkG1, is computed with the EDF model independently of the neural network. It yields a root-mean-square (rms) deviation of 741 keV on the 2457 known masses and an rms deviation of 0.024 fm on the 884 measured charge radii.
CASU2Net: Cascaded Unification Network by a Two-Step Early Fusion for Fault Detection in Offshore Wind Turbines ; This paper presents a novel feature-fusion-based deep learning model called CASU2Net for fault detection in offshore wind turbines. The proposed CASU2Net model benefits from a two-step early fusion to enrich features in the final stage. Moreover, since previous studies did not consider uncertainty during model development or prediction, we take advantage of Monte Carlo dropout (MC dropout) to enhance the certainty of the results. To design the fault detection model, we use five sensors and a sliding window to exploit the inherent temporal information contained in the raw time-series data obtained from the sensors. The proposed model uses the nonlinear relationships among multiple sensor variables and the temporal dependency of each sensor on the others, which considerably increases the performance of the fault detection model. A 10-fold cross-validation approach is used to verify the generalization of the model and evaluate the classification metrics. To evaluate the performance of the model, simulated data from a benchmark floating offshore wind turbine (FOWT) with supervisory control and data acquisition (SCADA) are used. The results illustrate that the proposed model accurately discloses and classifies more than 99% of the faults. Moreover, it is generalizable and can be used to detect faults in different types of systems.
CoRe: An Efficient Coarse-Refined Training Framework for BERT ; In recent years, BERT has made significant breakthroughs in many natural language processing tasks and attracted great attention. Despite its accuracy gains, the BERT model generally involves a huge number of parameters and needs to be trained on massive datasets, so training such a model is computationally very challenging and time-consuming. Hence, training efficiency should be a critical issue. In this paper, we propose a novel coarse-refined training framework named CoRe to speed up the training of BERT. Specifically, we decompose the training process of BERT into two phases. In the first phase, by introducing a fast attention mechanism and decomposing the large parameters in the feed-forward network sub-layer, we construct a relaxed BERT model which has far fewer parameters and much lower model complexity than the original BERT, so the relaxed model can be quickly trained. In the second phase, we transform the trained relaxed BERT model into the original BERT and further retrain the model. Thanks to the desirable initialization provided by the relaxed model, the retraining phase requires many fewer training steps compared with training an original BERT model from scratch with a random initialization. Experimental results show that the proposed CoRe framework can greatly reduce the training time without reducing the performance.
Use the Spear as a Shield: A Novel Adversarial Example Based Privacy-Preserving Technique against Membership Inference Attacks ; Recently, the membership inference attack has posed a serious threat to the privacy of confidential training data of machine learning models. This paper proposes a novel adversarial example based privacy-preserving technique (AEPPT), which adds crafted adversarial perturbations to the prediction of the target model to mislead the adversary's membership inference model. The added adversarial perturbations do not affect the accuracy of the target model, but can prevent the adversary from inferring whether a specific data point is in the training set of the target model. Since AEPPT only modifies the original output of the target model, the proposed method is general and does not require modifying or retraining the target model. Experimental results show that the proposed method can reduce the inference accuracy and precision of the membership inference model to 50%, which is close to a random guess. Further, for adaptive attacks in which the adversary knows the defense mechanism, the proposed AEPPT is also demonstrated to be effective. Compared with the state-of-the-art defense methods, the proposed defense can significantly degrade the accuracy and precision of membership inference attacks to 50% (i.e., the same as a random guess), while the performance and utility of the target model are not affected.
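The output-side mechanism can be pictured in a few lines of code. The sketch below is a schematic stand-in: in AEPPT the perturbation is adversarially crafted against the attacker's inference model, whereas here `delta` is just an arbitrary direction, and we only enforce the utility constraint that the top-1 label is unchanged.

```python
import numpy as np

def perturb_prediction(probs, delta, eps=0.1):
    p = np.clip(probs + eps * delta, 1e-6, None)
    p = p / p.sum()  # keep the output a valid probability distribution
    if p.argmax() != probs.argmax():
        return probs  # reject perturbations that would flip the label
    return p

probs = np.array([0.7, 0.2, 0.1])
print(perturb_prediction(probs, np.array([-0.3, 0.2, 0.1])))
```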
Directional Radio Propagation Path Loss Models for Millimeter-Wave Wireless Networks in the 28, 60, and 73 GHz Bands ; Fifth-generation (5G) cellular systems are likely to operate in the centimeter-wave (3-30 GHz) and millimeter-wave (30-300 GHz) frequency bands, where a vast amount of underutilized bandwidth exists worldwide. To assist in the research and development of these emerging wireless systems, a myriad of measurement studies have been conducted to characterize path loss in urban environments at these frequencies. The standard theoretical free space (FS) and Stanford University Interim (SUI) empirical path loss models were recently modified to fit path loss models obtained from measurements performed at 28 GHz and 38 GHz, using simple correction factors. In this paper, we provide similar correction factors for models at 60 GHz and 73 GHz. By imparting slope correction factors on the FS and SUI path loss models to closely match the close-in (CI) free space reference distance path loss models, millimeter-wave path loss can be accurately estimated with popular models for 5G cellular planning at 60 GHz and 73 GHz. Additionally, new millimeter-wave beam-combining path loss models are provided at 28 GHz and 73 GHz by considering the simultaneous combination of signals from the multiple antenna pointing directions between the transmitter and receiver that result in the strongest received power. Such directional channel models are important for future adaptive array systems at millimeter-wave frequencies.
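The CI reference model mentioned above has a standard closed form, PL(d) = FSPL(d0, f) + 10 n log10(d/d0) + X_sigma; the helper below implements it with the shadowing term omitted (a textbook rendering, not the paper's fitted models, and the example exponent is made up).

```python
import numpy as np

def ci_path_loss_db(d_m, freq_ghz, n, d0_m=1.0):
    # Free-space path loss at the close-in reference distance d0.
    c = 3e8
    fspl_d0 = 20.0 * np.log10(4.0 * np.pi * d0_m * freq_ghz * 1e9 / c)
    # Single-slope distance dependence with path loss exponent n.
    return fspl_d0 + 10.0 * n * np.log10(d_m / d0_m)

print(ci_path_loss_db(100.0, 73.0, 3.0))  # illustrative exponent only
```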
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains ; Pre-trained language models have been applied to various NLP tasks with considerable performance gains. However, the large model sizes, together with the long inference time, limit the deployment of such models in real-time applications. One line of model compression approaches considers knowledge distillation to distill large teacher models into small student models. Most of these studies focus on a single domain only, which ignores the transferable knowledge from other domains. We notice that training a teacher with transferable knowledge digested across domains can achieve better generalization capability to help knowledge distillation. Hence, we propose a Meta-Knowledge Distillation (Meta-KD) framework to build a meta-teacher model that captures transferable knowledge across domains and passes such knowledge to students. Specifically, we explicitly force the meta-teacher to capture transferable knowledge at both the instance level and the feature level from multiple domains, and then propose a meta-distillation algorithm to learn single-domain student models with guidance from the meta-teacher. Experiments on public multi-domain NLP tasks show the effectiveness and superiority of the proposed Meta-KD framework. Further, we also demonstrate the capability of Meta-KD in settings where the training data is scarce.
Constraints on the time variation of the speed of light using Pantheon dataset ; Both the absolute magnitude and the luminosity distance of type Ia supernovae (SNe Ia) are modified in the context of the minimally extended varying speed of light (meVSL) model compared to those of general relativity (GR). We have analyzed the likelihood of various dark energy models under meVSL by using the Pantheon SNe Ia data. Both the $\omega$CDM and CPL parameterization dark energy models indicate a cosmic variation of the speed of light at the $1\sigma$ level. For $\Omega_{\rm m,0} = 0.30$, $0.31$, and $0.32$ with $(\omega_0, \omega_a) = (-1, 0)$, the $1\sigma$ ranges of $\dot{\tilde{c}}_0/\tilde{c}_0 \, [10^{-13}\,{\rm yr}^{-1}]$ are $(-8.76, -0.89)$, $(-11.8, -3.93)$, and $(-14.8, -6.98)$, respectively. Meanwhile, the $1\sigma$ range of $\dot{\tilde{c}}_0/\tilde{c}_0 \, [10^{-12}\,{\rm yr}^{-1}]$ for the CPL dark energy models with $-1.05 \leq \omega_0 \leq -0.95$ and $0.28 \leq \Omega_{\rm m,0} \leq 0.32$ is $(-6.31, -2.98)$. The value of $\tilde{c}$ at $z = 3$ can be larger than the present one by $0.2 \sim 3\%$ for $\omega$CDM models and by $5 \sim 13\%$ for CPL models. We also obtain $-25.6 \leq \dot{\tilde{G}}_0/\tilde{G}_0 \, [10^{-12}\,{\rm yr}^{-1}] \leq -0.36$ for viable models, except for the CPL model with $\Omega_{\rm m,0} = 0.28$, for which we obtain an increasing gravitational constant with $1.65 \leq \dot{\tilde{G}}_0/\tilde{G}_0 \, [10^{-12}\,{\rm yr}^{-1}] \leq 3.79$.
Evolutionary Multi-objective Architecture Search Framework: Application to COVID-19 3D CT Classification ; The COVID-19 pandemic has threatened global health. Many studies have applied deep convolutional neural networks (CNNs) to recognize COVID-19 based on chest 3D computed tomography (CT) scans. Recent works show that no model generalizes well across CT datasets from different countries, and manually designing models for specific datasets requires expertise; thus, neural architecture search (NAS), which aims to search for models automatically, has become an attractive solution. To reduce the search cost on large 3D CT datasets, most NAS-based works use the weight-sharing (WS) strategy to make all models share weights within a supernet; however, WS inevitably incurs search instability, leading to inaccurate model estimation. In this work, we propose an efficient Evolutionary Multi-objective ARchitecture Search (EMARS) framework. We propose a new objective, namely potential, which can help exploit promising models to indirectly reduce the number of models involved in weight training, thus alleviating search instability. We demonstrate that under the objectives of accuracy and potential, EMARS can balance exploitation and exploration, i.e., reducing search time and finding better models. Our searched models are small and perform better than prior works on three public COVID-19 3D CT datasets.
EPIC-Survival: End-to-end Part Inferred Clustering for Survival Analysis, Featuring Prognostic Stratification Boosting ; Histopathology-based survival modelling has two major hurdles. Firstly, a well-performing survival model has minimal clinical application if it does not contribute to the stratification of a cancer patient cohort into different risk groups, preferably driven by histologic morphologies. In the clinical setting, individuals are not given specific prognostic predictions, but are rather predicted to lie within a risk group which has a general survival trend. Thus, it is imperative that a survival model produces well-stratified risk groups. Secondly, until now, survival modelling was done in a two-stage approach (encoding and aggregation). The massive number of pixels in digitized whole slide images was never utilized to the fullest extent due to technological constraints on data processing, forcing decoupled learning. EPIC-Survival bridges encoding and aggregation into an end-to-end survival modelling approach, while introducing stratification boosting to encourage the model not only to optimize ranking, but also to discriminate between risk groups. In this study we show that EPIC-Survival performs better than other approaches in modelling intrahepatic cholangiocarcinoma, a historically difficult cancer to model. Further, we show that stratification boosting further improves model performance, resulting in a concordance index of 0.880 on a held-out test set. Finally, we were able to identify specific histologic differences, not commonly sought out in ICC, between low- and high-risk groups.
A microstructural model of tendon failure ; Collagen fibrils are the most important structural component of tendons. Their crimped structure and parallel arrangement within the tendon lead to a distinctive non-linear stress-strain curve when a tendon is stretched. Microstructural models can be used to relate microscale collagen fibril mechanics to macroscale tendon mechanics, allowing us to identify the mechanisms behind each feature present in the stress-strain curve. Most models in the literature focus on the elastic behaviour of the tendon, and there are few which model beyond the elastic limit without introducing phenomenological parameters. We develop a model, built upon a collagen recruitment approach, that only contains microstructural parameters. We split the stress in the fibrils into elastic and plastic parts, and assume that the fibril yield stretch and rupture stretch are each described by a distribution function, rather than being single-valued. By changing the shapes of the distributions and their regions of overlap, we can produce macroscale tendon stress-strain curves that generate the full range of features observed experimentally, including those that could not be explained using existing models. These features include second linear regions occurring after the tendon has yielded, and step-like failure behaviour present after the stress has peaked. When we compare with an existing model, we find that our model reduces the average root mean squared error from 4.15 MPa to 1.61 MPa, and the resulting parameter values are closer to those found experimentally. Since our model contains only parameters that have a direct physical interpretation, it can be used to predict how processes such as ageing, disease, and injury affect the mechanical behaviour of tendons, provided we can quantify the effects of these processes on the microstructure.
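The recruitment construction can be mimicked numerically by sampling fibril populations. The toy sketch below is our simplification (linear elastic fibrils, perfectly plastic after yield, removed after rupture) and shows how distributed yield and rupture stretches produce a smooth macroscale response.

```python
import numpy as np

def tendon_stress(stretch, crimp, yield_s, rupture_s, E=1.0):
    # A fibril carries load once tendon stretch exceeds its crimp stretch,
    # its stress plateaus at yield, and it drops out entirely at rupture.
    elastic_strain = np.clip(np.minimum(stretch, yield_s) - crimp, 0.0, None)
    intact = stretch < rupture_s
    return E * np.mean(elastic_strain * intact)

rng = np.random.default_rng(0)
crimp = rng.uniform(1.00, 1.05, 10_000)
yield_s = crimp + rng.normal(0.08, 0.01, 10_000)
rupture_s = yield_s + rng.normal(0.05, 0.01, 10_000)
for lam in (1.05, 1.10, 1.15):
    print(lam, tendon_stress(lam, crimp, yield_s, rupture_s))
```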
Robust Black-box Watermarking for Deep Neural Networks using Inverse Document Frequency ; Deep learning techniques are one of the most significant elements of any Artificial Intelligence (AI) service. Recently, these Machine Learning (ML) methods, such as Deep Neural Networks (DNNs), have presented exceptional achievements in implementing human-level capabilities for various predicaments, such as Natural Language Processing (NLP), voice recognition, and image processing. Training these models is expensive in terms of computational power and the need for sufficient labelled data. Thus, ML-based models such as DNNs establish genuine business value and intellectual property (IP) for their owners, and the trained models need to be protected from adversary attacks such as illegal redistribution, reproduction, and derivation. Watermarking can be considered an effective technique for securing a DNN model. However, so far, most watermarking algorithms have focused on watermarking the DNN by adding noise to an image. To this end, we propose a framework for watermarking a DNN model designed for a textual domain. The watermark generation scheme provides a secure watermarking method by combining the Term Frequency (TF) and Inverse Document Frequency (IDF) of a particular word. The proposed embedding procedure takes place during the model's training time, making the watermark verification stage straightforward: the watermarked document is simply sent to the trained model. The experimental results show that watermarked models have the same accuracy as the original ones. The proposed framework accurately verifies the ownership of all surrogate models without impairing performance. The proposed algorithm is robust against well-known attacks such as parameter pruning and brute-force attacks.
Frame-independent vector-cloud neural network for nonlocal constitutive modeling on arbitrary grids ; Constitutive models are widely used for modeling complex systems in science and engineering, where first-principle-based, well-resolved simulations are often prohibitively expensive. For example, in fluid dynamics, constitutive models are required to describe nonlocal, unresolved physics such as turbulence and laminar-turbulent transition. However, traditional constitutive models based on partial differential equations (PDEs) often lack robustness and are too rigid to accommodate diverse calibration datasets. We propose a frame-independent, nonlocal constitutive model based on a vector-cloud neural network that can be learned with data. The model predicts the closure variable at a point based on the flow information in its neighborhood. Such nonlocal information is represented by a group of points, each having a feature vector attached to it, and thus the input is referred to as a vector cloud. The cloud is mapped to the closure variable through a frame-independent neural network, invariant both to coordinate translation and rotation and to the ordering of points in the cloud. As such, the network can deal with any number of arbitrarily arranged grid points and is thus suitable for unstructured meshes in fluid simulations. The merits of the proposed network are demonstrated for scalar transport PDEs on a family of parameterized periodic hill geometries. The vector-cloud neural network is a promising tool not only as a nonlocal constitutive model but also as a general surrogate model for PDEs on irregular domains.
Learning Word-Level Confidence for Subword End-to-End ASR ; We study the problem of word-level confidence estimation in subword-based end-to-end (E2E) models for automatic speech recognition (ASR). Although prior works have proposed training auxiliary confidence models for ASR systems, they do not extend naturally to systems that operate on word-pieces (WP) as their vocabulary. In particular, ground-truth WP correctness labels are needed for training confidence models, but the non-unique tokenization from words to WPs causes inaccurate labels to be generated. This paper proposes and studies two confidence models of increasing complexity to solve this problem. The final model uses self-attention to directly learn word-level confidence without needing subword tokenization, and exploits full context features from multiple hypotheses to improve confidence accuracy. Experiments on Voice Search and long-tail test sets show standard metrics (e.g., NCE, AUC, RMSE) improving substantially. The proposed confidence module also enables a model selection approach to combine an on-device E2E model with a hybrid model on the server to address the rare word recognition problem for the E2E model.
Stable Emulation of an Entire Suite of Model Physics in a State-of-the-Art GCM using a Neural Network ; There has been a lot of recent interest in developing hybrid models that couple deterministic numerical model components to statistical model components derived using machine learning techniques. One approach, which we follow in this pilot study, is to replace an existing computationally expensive deterministic model component with its fast machine-learning-based emulator, leading to model speedup and/or improvement. We developed a shallow neural-network-based emulator of a complete suite of atmospheric physics parameterizations in the NCEP Global Forecast System (GFS) general circulation model (GCM). The suite emulated by a single NN includes radiative transfer, cloud macro- and microphysics, shallow and deep convection, boundary layer processes, gravity wave drag, the land model, etc. NCEP GFS with the neural network replacing the original suite of atmospheric parameterizations produces stable and realistic medium-range weather forecasts for 24 initial conditions spanning all months of 2018. It also remains stable in a year-long AMIP-like experiment and in a run with quadrupled horizontal resolution. We present preliminary results of parallel runs, evaluating the accuracy and speedup of the resulting hybrid GCM.
RCT: Resource Constrained Training for Edge AI ; Training neural networks on edge terminals is essential for edge AI computing, which needs to adapt to evolving environments. Quantised models can run efficiently on edge devices, but existing training methods for these compact models are designed to run on powerful servers with abundant memory and energy budgets. For example, the quantisation-aware training (QAT) method involves two copies of the model parameters, which is usually beyond the capacity of on-chip memory in edge devices. Data movement between off-chip and on-chip memory is energy-demanding as well. These resource requirements are trivial for powerful servers but critical for edge devices. To mitigate these issues, we propose Resource Constrained Training (RCT). RCT only keeps a quantised model throughout training, so that the memory requirement for model parameters in training is reduced. It adjusts the per-layer bitwidth dynamically in order to save energy when a model can learn effectively with lower precision. We carry out experiments with representative models and tasks in image applications and natural language processing. Experiments show that RCT saves more than 86% of the energy for General Matrix Multiply (GEMM) and more than 46% of the memory for model parameters, with limited accuracy loss. Compared with the QAT-based method, RCT saves about half of the energy on moving model parameters.
What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult ; Although large neural language models (LMs) like BERT can be fine-tuned to yield state-of-the-art results on many NLP tasks, it is often unclear what these models actually learn. Here we study using such LMs to fill in entities in human-authored comparative questions, like "Which country is older, India or ______?"; i.e., we study the ability of neural LMs to ask (not answer) reasonable questions. We show that accuracy in this fill-in-the-blank task is well-correlated with human judgements of whether a question is reasonable, and that these models can be trained to achieve nearly human-level performance in completing comparative questions in three different subdomains. However, analysis shows that what they learn fails to model any sort of broad notion of which entities are semantically comparable or similar; instead, the trained models are very domain-specific, and performance is highly correlated with co-occurrences between specific entities observed in the training set. This is true both for models that are pretrained on general text corpora and for models trained on a large corpus of comparison questions. Our study thus reinforces recent results on the difficulty of making claims about a deep model's world knowledge or linguistic competence based on performance on specific benchmark problems. We make our evaluation datasets publicly available to foster future research on complex understanding and reasoning in such models at standards of human interaction.
Conductance-based Dynamic Causal Modeling: A mathematical review of its application to cross-power spectral densities ; Dynamic Causal Modeling (DCM) is a Bayesian framework for inferring hidden (latent) neuronal states, based on measurements of brain activity. Since its introduction in 2003 for functional magnetic resonance imaging data, DCM has been extended to electrophysiological data, and several variants have been developed. Their biophysically motivated formulations make these models promising candidates for providing a mechanistic understanding of human brain dynamics, both in health and disease. However, due to their complexity and reliance on concepts from several fields, fully understanding the mathematical and conceptual basis behind certain variants of DCM can be challenging. At the same time, a solid theoretical knowledge of the models is crucial to avoid pitfalls in their application and in the interpretation of their results. In this paper, we focus on one of the most advanced formulations of DCM, i.e., conductance-based DCM for cross-spectral densities, whose components are described across multiple technical papers. The aim of the present article is to provide an accessible exposition of the mathematical background, together with an illustration of the model's behavior. To this end, we include step-by-step derivations of the model equations, point to important aspects in the software implementation of those models, and use simulations to provide an intuitive understanding of the type of responses that can be generated and the role that specific parameters play in the model. Furthermore, all code utilized for our simulations is made publicly available alongside the manuscript to allow readers an easy hands-on experience with conductance-based DCM.
CARRNN: A Continuous Autoregressive Recurrent Neural Network for Deep Representation Learning from Sporadic Temporal Data ; Learning temporal patterns from multivariate longitudinal data is challenging, especially when the data is sporadic, as often seen in, e.g., healthcare applications, where the data can suffer from irregularity and asynchronicity: the time between consecutive data points can vary across features and samples, hindering the application of existing deep learning models that are constructed for complete, evenly spaced data with fixed sequence lengths. In this paper, a novel deep learning-based model is developed for modeling multiple temporal features in sporadic data, using an integrated deep learning architecture based on a recurrent neural network (RNN) unit and a continuous-time autoregressive (CAR) model. The proposed model, called CARRNN, uses a generalized discrete-time autoregressive model that is trainable end-to-end using neural networks modulated by time lags to describe the changes caused by the irregularity and asynchronicity. It is applied to multivariate time-series regression tasks using data provided for Alzheimer's disease progression modeling and intensive care unit (ICU) mortality rate prediction, where the proposed model based on a gated recurrent unit (GRU) achieves the lowest prediction errors among the proposed RNN-based models and state-of-the-art methods using GRUs and long short-term memory (LSTM) networks in their architectures.
Reward Optimization for Neural Machine Translation with Learned Metrics ; Neural machine translation (NMT) models are conventionally trained with token-level negative log-likelihood (NLL), which does not guarantee that the generated translations will be optimized for a selected sequence-level evaluation metric. Multiple approaches have been proposed to train NMT with BLEU as the reward, in order to directly improve the metric. However, it has been reported that the gain in BLEU does not translate to real quality improvement, limiting the application in industry. Recently, it became clear to the community that BLEU has a low correlation with human judgment when dealing with state-of-the-art models. This has led to the emergence of model-based evaluation metrics, which are shown to have much higher correlation with human judgment. In this paper, we investigate whether it is beneficial to optimize NMT models with a state-of-the-art model-based metric, BLEURT. We propose a contrastive-margin loss for fast and stable reward optimization suitable for large NMT models. In experiments, we perform automatic and human evaluations to compare models trained with smoothed BLEU and BLEURT to the baseline models. Results show that reward optimization with BLEURT is able to increase the metric scores by a large margin, in contrast to the limited gain when training with smoothed BLEU. The human evaluation shows that models trained with BLEURT improve adequacy and coverage of translations. Code is available at https://github.com/naver-ai/MetricMT.
Downfolding the Su-Schrieffer-Heeger model ; Charge-density waves are responsible for symmetry-breaking displacements of atoms and concomitant changes in the electronic structure. Linear response theories, in particular density-functional perturbation theory, provide a way to study the effect of displacements on both the total energy and the electronic structure based on a single ab initio calculation. In downfolding approaches, the electronic system is reduced to a smaller number of bands, allowing for the incorporation of additional correlation and environmental effects on these bands. However, the physical content of this downfolded model and its potential limitations are not always obvious. Here, we study the potential-energy landscape and electronic structure of the Su-Schrieffer-Heeger (SSH) model, where all relevant quantities can be evaluated analytically. We compare the exact results at arbitrary displacement with diagrammatic perturbation theory both in the full model and in a downfolded effective single-band model, which gives an instructive insight into the properties of downfolding. An exact reconstruction of the potential-energy landscape is possible in the downfolded model, which requires a dynamical electron-biphonon interaction. The dispersion of the bands upon atomic displacement is also found correctly, where the downfolded model by construction only captures spectral weight in the target space. In the SSH model, the electron-phonon coupling mechanism involves exclusively hybridization between the low- and high-energy bands, and this limits the computational efficiency gain of downfolded models.
Stochastic Recurrent Neural Network for Multi-step Time Series Forecasting ; Time series forecasting based on deep architectures has been gaining popularity in recent years due to their ability to model complex non-linear temporal dynamics. The recurrent neural network is one such model, capable of handling variable-length input and output. In this paper, we leverage recent advances in deep generative models and the concept of state space models to propose a stochastic adaptation of the recurrent neural network for multi-step-ahead time series forecasting, which is trained with stochastic gradient variational Bayes. In our model design, the transition function of the recurrent neural network, which determines the evolution of the hidden states, is stochastic rather than deterministic as in a regular recurrent neural network; this is achieved by incorporating a latent random variable into the transition process, which captures the stochasticity of the temporal dynamics. Our model preserves the architectural workings of a recurrent neural network, for which all relevant information is encapsulated in its hidden states, and this flexibility allows our model to be easily integrated into any deep architecture for sequential modelling. We test our model on a wide range of datasets from finance to healthcare; the results show that the stochastic recurrent neural network consistently outperforms its deterministic counterpart.
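A single step of such a stochastic transition can be sketched as below: a latent variable is drawn with the reparameterization trick (as required for training with stochastic gradient variational Bayes) and concatenated into an otherwise standard GRU update. The prior network and all dimensions are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StochasticTransition(nn.Module):
    """One step of a stochastic RNN: sample z_t from a state-dependent
    Gaussian and inject it into the GRU transition. Illustrative only."""

    def __init__(self, input_size, hidden_size, latent_size):
        super().__init__()
        self.prior = nn.Linear(hidden_size, 2 * latent_size)  # mu, log-variance of p(z_t | h_{t-1})
        self.gru = nn.GRUCell(input_size + latent_size, hidden_size)

    def forward(self, x_t, h_prev):
        mu, logvar = self.prior(h_prev).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h_t = self.gru(torch.cat([x_t, z_t], dim=-1), h_prev)
        return h_t, (mu, logvar)
```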
Fitting f(Q,T) gravity models with a ΛCDM limit using H(z) and Pantheon data ; We propose five f(Q,T) models, which are an extension of symmetric teleparallel gravity, where Q is the non-metricity and T is the trace of the stress-energy tensor. For specific values of their parameters, these models have a ΛCDM limit. Using cosmic chronometers and supernovae Ia data, we found that our models are consistent with ΛCDM at a 95% confidence level. To see whether one of these models can challenge ΛCDM from a background perspective, we computed the Bayesian evidence for them and for ΛCDM. According to it, the concordance model is preferred over four of them, showing a weak preference against f(Q,T) = Q/G_N + bT and f(Q,T) = (Q + 2Λ)/G_N + bT, a substantial preference against f(Q,T) = (Q + 2H_0^2 c (Q/(6H_0^2))^(n+1))/G_N + bT, and a strong preference against f(Q,T) = (Q + 2H_0^2 c (Q/(6H_0^2))^(n+1) + 2Λ)/G_N + bT. Interestingly, a model including a T^2 dependence, f(Q,T) = (Q + 2Λ)/G_N + (16π^2 G_N b/(120 H_0^2)) T^2, showed a substantial preference against ΛCDM. Therefore, we encourage further analyses of this model to test its viability beyond the background perspective.
A 1D-CNN Based Deep Learning Technique for Sleep Apnea Detection in IoT Sensors ; Internet of Things (IoT) enabled wearable sensors for health monitoring are widely used to reduce the cost of personal healthcare and improve quality of life. The sleep apnea-hypopnea syndrome, characterized by the abnormal reduction or pause in breathing, greatly affects the quality of sleep of an individual. This paper introduces a novel method for apnea detection (pause in breathing) from electrocardiogram (ECG) signals obtained from wearable devices. The novelty stems from the high resolution of apnea detection on a second-by-second basis, achieved using a 1-dimensional convolutional neural network for feature extraction and detection of sleep apnea events. The proposed method exhibits an accuracy of 99.56% and a sensitivity of 96.05%, outperforming several lower-resolution state-of-the-art apnea detection methods. The complexity of the proposed model is analyzed. We also analyze the feasibility of model pruning and binarization to reduce the resource requirements on a wearable IoT device. The pruned model with 80% sparsity exhibits an accuracy of 97.34% and a sensitivity of 86.48%, while the binarized model exhibits an accuracy of 75.59% and a sensitivity of 63.23%. The performance of low-complexity patient-specific models derived from the generic model is also studied, to analyze the feasibility of retraining existing models to fit patient-specific requirements; the patient-specific models on average exhibit an accuracy of 97.79% and a sensitivity of 92.23%. The source code for this work is made publicly available.
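A per-second classifier of this kind can be as simple as the sketch below: a few 1D convolutions over a short ECG window followed by a linear head. The layer sizes, window length, and sampling rate are placeholders, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Minimal 1D-CNN for classifying one second of ECG as apnea vs. normal.
# All hyperparameters are illustrative assumptions.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.LazyLinear(2),              # two classes: apnea / normal
)
ecg = torch.randn(8, 1, 1000)      # batch of 10 s windows at a hypothetical 100 Hz
logits = model(ecg)
```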
Graph Theory for Metro Traffic Modelling ; A unifying graph-theoretic framework for the modelling of metro transportation networks is proposed. This is achieved by first introducing a basic graph framework for the modelling of the London underground system from a diffusion-law point of view. This forms a basis for the analysis of both station importance and station vulnerability, whereby the concept of graph vertex centrality plays a key role. We next explore k-edge augmentation of a graph topology, and illustrate its usefulness both for improving network robustness and as a planning tool. Upon establishing the graph-theoretic attributes of the underlying graph topology, we proceed to introduce models for processing data on such a metro graph. Commuter movement is shown to obey Fick's law of diffusion, whereby the graph Laplacian provides an analytical model for the diffusion process of commuter population dynamics. Finally, we explore the application of modern deep learning models, such as graph neural networks and hypergraph neural networks, as general-purpose models for the modelling and forecasting of underground data, especially in the context of the morning and evening rush hours. Comprehensive simulations, including of the passenger in- and out-flows during the morning rush hour in London, demonstrate the advantages of the graph models in metro planning and traffic management, offering a formal mathematical approach with wide economic implications.
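In its simplest form, the diffusion model referred to here amounts to integrating d(phi)/dt = -L phi, with L the graph Laplacian. The toy network and forward-Euler integration below are illustrative; the station graph and initial loading are made up.

```python
import numpy as np

# Fick's-law diffusion of a commuter population on a toy 4-station graph:
# d(phi)/dt = -L phi, with Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
phi = np.array([100.0, 0.0, 0.0, 0.0])  # all commuters start at station 0
dt = 0.01
for _ in range(1000):                   # forward-Euler time stepping
    phi -= dt * (L @ phi)
print(phi)                              # relaxes toward the uniform steady state
```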
Bonded discrete element simulations of sea ice with non-local failure: Applications to Nares Strait ; The discrete element method (DEM) can provide detailed descriptions of sea ice dynamics that explicitly model floes and discontinuities in the ice, which can be challenging to represent accurately with current models. However, floe-scale stresses that inform lead formation in sea ice are difficult to calculate in current DEM implementations. In this paper, we use the ParticLS software library to develop a DEM that models the sea ice as a collection of discrete rigid particles that are initially bonded together using a cohesive beam model that approximates the response of an Euler-Bernoulli beam located between particle centroids. Ice fracture and lead formation are determined based on the value of a non-local Cauchy stress state around each particle and a Mohr-Coulomb fracture model. Therefore, large ice floes are modeled as continuous objects made up of many bonded particles that can interact with each other, deform, and fracture. We generate particle configurations by discretizing the ice in MODIS satellite imagery into polygonal floes that fill the observed ice shape and extent. The model is tested on ice advecting through an idealized channel and through Nares Strait. The results indicate that the bonded DEM model is capable of qualitatively capturing the dynamic sea ice patterns through constrictions, such as ice bridges, arch kinematic features, and lead formation. In addition, we apply spatial and temporal scaling analyses to illustrate the model's ability to capture heterogeneity and intermittency in the simulated ice deformation.
3D Restoration of sedimentary terrains: The GeoChron Approach ; Three-dimensional restoration of complex structural models has become a recognized validation method. Bringing a sedimentary structural model back in time to various deposition stages may also help understand the geological history of a study area and follow the evolution of potential hydrocarbon source rocks, reservoirs, and closures. Most current restoration methods rely on finite-element codes, which require a mesh that conforms to both horizons and faults, a difficult object to generate in complex structural settings. Some innovative approaches use implicit horizon representations to circumvent meshing requirements. In all cases, finite-element restoration codes depend on elasticity theory, which relies on mechanical parameters to characterize rock behavior during the physical unfolding process. In this paper, we present a geometric restoration method based on the mathematical theory provided by the GeoChron framework. No assumption is made on the extent of deformation, nor on the nature of the terrains being restored. Equations derived from the theory developed for the GeoChron model ensure model consistency at each restored stage. As the only essential input is a GeoChron model, this restoration technique does not require any specialist knowledge and can be included in any existing structural model-building workflow as a standard validation tool. A model can quickly be restored to any desired stage without providing input mechanical parameters for each layer or defining boundary conditions, enabling geologists to iterate on the structural model and refine their interpretations until they are satisfied with both the input and restored models.
Multi-Objective CFD-Driven Development of Coupled Turbulence Closure Models ; This paper introduces two novel concepts in data-driven turbulence modeling that enable the simultaneous development of multiple closure models and training towards multiple objectives. The concepts extend the evolutionary framework of Weatheritt and Sandberg (2016), which derives interpretable and implementation-ready expressions from high-fidelity simulation data. By assigning a shared fitness value to the evolved closure models and utilizing the CFD-driven training approach of Zhao et al. (2020), the multi-expression training concept introduced here is able to account for the coupling between the trained models, i.e., the Reynolds stress anisotropy, turbulent heat flux, and turbulence production correction models. As a second concept, a multi-objective optimization algorithm is applied to the framework. The extension yields a diverse set of candidate models and allows a trade-off between the training objectives after analyzing the training results. In this study, the novel concepts are applied to a benchmark periodic hills case and a vertical natural convection flow. The predictions of mean flow quantities are improved compared to decoupled training strategies, with distinct and robust improvements for strongly coupled momentum and thermal fields. The coupled training of closure models and the balancing of multiple training objectives are considered important capabilities on the path towards generalized data-driven turbulence models.
TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning ; Dynamic graph modeling has recently attracted much attention due to its extensive applications in many real-world scenarios, such as recommendation systems, financial transactions, and social networks. Although many works have been proposed for dynamic graph modeling in recent years, effective and scalable models are yet to be developed. In this paper, we propose a novel graph neural network approach, called TCL, which deals with the dynamically evolving graph in a continuous-time fashion and enables effective dynamic node representation learning that captures both temporal and topological information. Technically, our model contains three novel aspects. First, we generalize the vanilla Transformer to temporal graph learning scenarios and design a graph-topology-aware transformer. Second, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from the temporal neighborhoods associated with the two interaction nodes and then utilizes a co-attentional transformer to model inter-dependencies at a semantic level. Last, inspired by recently developed contrastive learning methods, we propose to optimize our model by maximizing the mutual information (MI) between the predictive representations of the two future interaction nodes. Benefiting from this, our dynamic representations preserve high-level, global semantics about interactions and are thus robust to noisy interactions. To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs. We evaluate our model on four benchmark datasets for interaction prediction, and the experimental results demonstrate the superiority of our model.
Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs ; The collection and availability of big data, combined with advances in pretrained models (e.g., BERT, XLNet), have revolutionized the predictive performance of modern natural language processing tasks, ranging from text classification to text generation. This allows corporations to provide machine learning as a service (MLaaS) by encapsulating fine-tuned BERT-based models as APIs. However, BERT-based APIs have exhibited a series of security and privacy vulnerabilities. For example, prior work has exploited the security issues of BERT-based APIs through adversarial examples crafted with an extracted model. However, the privacy leakage problems of BERT-based APIs through the extracted model have not been well studied. On the other hand, due to the high capacity of BERT-based APIs, the fine-tuned model is prone to overlearning, but what kind of information can be leaked from the extracted model remains unknown. In this work, we bridge this gap by first presenting an effective model extraction attack, in which the adversary can practically steal a BERT-based API (the target/victim model) by issuing only a limited number of queries. We further develop an effective attribute inference attack which can infer sensitive attributes of the training data used by the BERT-based APIs. Our extensive experiments on benchmark datasets under various realistic settings validate the potential vulnerabilities of BERT-based APIs. Moreover, we demonstrate that two promising defense methods become ineffective against our attacks, which calls for more effective defense methods.
Modelling of active contraction pulses in epithelial cells using the vertex model ; Several models have been proposed to describe the dynamics of epithelial tissues undergoing morphogenetic changes driven by apical constriction pulses, which differ in where the constriction is applied, either at the perimeter or in the medial region. To help discriminate between these models, using the vertex model for epithelial dynamics we analysed the impact of where the constriction is applied on the final geometry of the active cell that is reducing its apical size. We find that medial activity, characterised by a reduction of the reference area in the vertex model, induces symmetry breaking and generates anisotropic cell shapes, while isotropic cell shapes and larger contractions occur when the reference perimeter in the model is reduced. When plasticity is included, sufficiently slow processes of medial contractile activity, compared with typical apical constriction pulses, can also achieve significant cell contraction. Finally, we apply the model to describe the active apical contractile pulses observed during cellular mitotic events within the epithelial enveloping cell layer of the developing annual killifish Austrolebias nigripinnis, and are able to quantitatively describe the temporal evolution of the cell shape changes when perimeter activity and area plasticity are included. A global fit of all parameters of the vertex model is provided.
Energy-based time-derivative damage accumulation model under uniaxial and multiaxial random loadings ; A new fatigue life prediction method using the energy-based approach under uniaxial and multiaxial random loadings is proposed in this paper. One unique characteristic of the proposed method is that it uses a time-derivative damage accumulation model, in contrast to the classical cycle-based damage accumulation models. Thus, damage under arbitrary random loading can be obtained directly by time-domain integration, without the cycle counting (e.g., the rainflow cycle counting method) of classical fatigue analysis. First, a brief review of existing models is given, focusing on their applicability to uniaxial/multiaxial, constant/random, and high-cycle/low-cycle fatigue loading regimes. It is observed that most existing models are only applicable to certain loading conditions, and many of them are not applicable to, or validated under, random loadings. Next, a time-derivative damage accumulation model under uniaxial random loading is proposed. The proposed damage function is inspired by a time-domain fatigue crack growth model. The fatigue life is obtained by integrating the damage function over random energy loading histories. Following this, an equivalent energy concept for general multiaxial loading conditions is used to convert the random multiaxial loading to an equivalent random uniaxial loading, to which the time-derivative damage model can be applied. Finally, the proposed model is validated with extensive experimental data from the open literature and in-house testing data under various constant and random spectrum loadings. Conclusions and future work are suggested based on the findings from this study.
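In spirit, the method replaces cycle counting with a direct time integral of a damage rate. The sketch below integrates a damage rate given by a power of the instantaneous energy rate; the functional form, constants, and failure criterion (D reaching 1) are placeholders, not the paper's calibrated model.

```python
import numpy as np

def time_domain_damage(t, energy, C=1e-6, m=1.5):
    """Toy time-derivative damage accumulation: D = integral of
    C * |dE/dt|^m dt over the loading record (hypothetical form)."""
    dEdt = np.gradient(energy, t)
    return np.trapz(C * np.abs(dEdt) ** m, t)

t = np.linspace(0.0, 10.0, 2001)
energy = np.abs(np.sin(2 * np.pi * t)) + 0.1 * np.random.rand(t.size)  # random-ish loading history
print(time_domain_damage(t, energy))   # failure would be declared when D reaches 1
```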
Structure of Gibbs measures for planar FK-percolation and Potts models ; We prove that all Gibbs measures of the q-state Potts model on Z^2 are linear combinations of the extremal measures obtained as thermodynamic limits under free or monochromatic boundary conditions. In particular, all Gibbs measures are invariant under translations. This statement is new at points of first-order phase transition, that is, at T = T_c(q) when q > 4. In this case the structure of Gibbs measures is the most complex, in the sense that there exist q + 1 distinct extremal measures. Most of the work is devoted to the FK-percolation model on Z^2 with q >= 1, where we prove that every Gibbs measure is a linear combination of the free and wired ones. The arguments are non-quantitative and follow the spirit of the seminal works of Aizenman and Higuchi, which established the Gibbs structure for the two-dimensional Ising model. Infinite-range dependencies in FK-percolation (i.e., a weaker spatial Markov property) pose serious additional difficulties compared to the case of the Ising model. For example, it is not automatic, albeit true, that thermodynamic limits are Gibbs. The result for the Potts model is then derived using the Edwards-Sokal coupling and autoduality. The latter ingredient is necessary, since applying the Edwards-Sokal procedure to a Gibbs measure for the Potts model does not automatically produce a Gibbs measure for FK-percolation. Finally, the proof is generic enough to adapt to the FK-percolation and Potts models on the triangular and hexagonal lattices, and to the loop O(n) model in the range of parameters for which its spin representation is positively associated.
Capabilities of Deep Learning Models on Learning Physical Relationships: Case of Rainfall-Runoff Modeling with LSTM ; This study investigates the relationships which deep learning methods can identify between the input and output data. As a case study, rainfall-runoff modeling in a snow-dominated watershed by means of a long short-term memory (LSTM) network is selected. Daily precipitation and mean air temperature were used as model inputs to estimate daily flow discharge. After model training and verification, two experimental simulations were conducted with hypothetical inputs instead of observed meteorological data to clarify the response of the trained model to the inputs. The first numerical experiment showed that, even without input precipitation, the trained model generated flow discharge, particularly winter low flow and high flow during the snow-melting period. The effects of warmer and colder conditions on the flow discharge were also replicated by the trained model without precipitation. Additionally, the model reflected only 17-39% of the total precipitation mass during the snow accumulation period in the total annual flow discharge, revealing a strong lack of water mass conservation. The results of this study indicate that a deep learning method may not properly learn the explicit physical relationships between input and target variables, although it is still capable of maintaining strong goodness-of-fit results.
LoRA: Low-Rank Adaptation of Large Language Models ; An important paradigm of natural language processing consists of large-scale pretraining on general domain data and adaptation to particular tasks or domains. As we pretrain larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example, deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pretrained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on par with or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
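The core of LoRA is small enough to sketch directly: freeze a pretrained linear map W and learn a rank-r update scaled by alpha/r, so the layer computes W x + (alpha/r) B A x; initializing A randomly and B at zero leaves the model unchanged at the start of fine-tuning. This is a minimal re-implementation of the published idea, not the released microsoft/LoRA code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))
```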
R-Drop: Regularized Dropout for Neural Networks ; Dropout is a powerful and widely used technique to regularize the training of deep neural networks. In this paper, we introduce a simple regularization strategy built upon dropout in model training, namely R-Drop, which forces the output distributions of different sub-models generated by dropout to be consistent with each other. Specifically, for each training sample, R-Drop minimizes the bidirectional KL-divergence between the output distributions of two sub-models sampled by dropout. Theoretical analysis reveals that R-Drop reduces the freedom of the model parameters and complements dropout. Experiments on 5 widely used deep learning tasks (18 datasets in total), including neural machine translation, abstractive summarization, language understanding, language modeling, and image classification, show that R-Drop is universally effective. In particular, it yields substantial improvements when applied to fine-tune large-scale pretrained models, e.g., ViT, RoBERTa-large, and BART, and achieves state-of-the-art (SOTA) performance with the vanilla Transformer model on the WMT14 English-to-German (30.91 BLEU) and WMT14 English-to-French (43.95 BLEU) translation tasks, even surpassing models trained with extra large-scale data and expert-designed advanced variants of Transformer models. Our code is available at https://github.com/dropreg/R-Drop.
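For a classification task, the R-Drop objective can be written in a few lines: run two dropout-sampled forward passes on the same batch and add the symmetric KL divergence between their outputs to the usual cross-entropy. The weighting below is a generic choice; the paper's per-task coefficients may differ.

```python
import torch
import torch.nn.functional as F

def r_drop_loss(logits1, logits2, labels, alpha=1.0):
    """Cross-entropy on two dropout passes plus their bidirectional KL."""
    ce = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    kl = F.kl_div(F.log_softmax(logits1, -1), F.softmax(logits2, -1), reduction="batchmean") \
       + F.kl_div(F.log_softmax(logits2, -1), F.softmax(logits1, -1), reduction="batchmean")
    return ce + alpha * kl / 2
```

In practice, logits1 and logits2 come from calling the same model twice on the same inputs with dropout active, so the two sub-models differ only in their sampled dropout masks.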
A comparison of Galactic electron density models using PyGEDM ; Galactic electron density distribution models are crucial tools for estimating the impact of the ionised interstellar medium on the impulsive signals from radio pulsars and fast radio bursts. The two prevailing Galactic electron density models are YMW16 (Yao et al., 2017) and NE2001 (Cordes & Lazio, 2002). Here, we introduce a software package, PyGEDM, which provides a unified application programming interface (API) for these models and the YT20 (Yamasaki & Totani, 2020) model of the Galactic halo. We use PyGEDM to compute all-sky maps of the Galactic dispersion measure (DM) for YMW16 and NE2001, and compare the large-scale differences between the two. In general, YMW16 predicts higher DM values toward the Galactic anticentre. YMW16 predicts higher DMs at low Galactic latitudes, but NE2001 predicts higher DMs in most other directions. Using pulsars with independent distance measurements, we identify lines of sight for which the models are most discrepant. YMW16 performs better on average than NE2001, but both models show significant outliers. We suggest that future campaigns to determine pulsar distances should focus on targets where the models show large discrepancies, so that future models can use those measurements to better estimate distances along those lines of sight. We also suggest that the Galactic halo should be considered as a component in future GEDMs, to avoid overestimating the Galactic DM contribution for extragalactic sources such as FRBs.
Super-Resolution of Near-Surface Temperature Utilizing Physical Quantities for Real-Time Prediction of Urban Micrometeorology ; The present paper proposes a super-resolution (SR) model based on a convolutional neural network and applies it to the near-surface temperature in urban areas. The SR model incorporates a skip connection, a channel attention mechanism, and separated feature extractors for the inputs of temperature, building height, downward shortwave radiation, and horizontal velocity. We train the SR model with pairs of low-resolution (LR) and high-resolution (HR) images from building-resolving large-eddy simulations (LESs) in a city, where the horizontal resolutions of LR and HR are 20 and 5 m, respectively. The generalization capability of the SR model is confirmed with LESs in another city. The estimated HR temperature fields are more accurate than those of bicubic interpolation and of an image SR model that takes only the temperature as its input. Apart from the temperature itself, the building height is the most important input for reconstructing the HR temperature, and it enables the SR model to reduce errors in temperature near building boundaries: the SR model infers the appropriate boundary for each building from its height information. The analysis of attention weights indicates that the importance of the building height increases as the downward shortwave radiation becomes larger; the contrast between sun and shade is strengthened with increasing solar radiation, which may affect the temperature distribution. The short inference time suggests the potential of the proposed SR model to facilitate real-time HR prediction in metropolitan areas by combining it with an LR building-resolving LES model.
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better ; Adversarial training is one effective approach for training robust deep neural networks against adversarial attacks. While able to bring reliable robustness, adversarial training (AT) methods in general favor high-capacity models, i.e., the larger the model, the better the robustness. This tends to limit their effectiveness on small models, which are preferable in scenarios where storage or computing resources are very limited (e.g., mobile devices). In this paper, we leverage the concept of knowledge distillation to improve the robustness of small models by distilling from adversarially trained large models. We first revisit several state-of-the-art AT methods from a distillation perspective and identify one common technique that can lead to improved robustness: the use of robust soft labels, i.e., the predictions of a robust model. Following this observation, we propose a novel adversarial robustness distillation method called Robust Soft Label Adversarial Distillation (RSLAD) to train robust small student models. RSLAD fully exploits the robust soft labels produced by a robust, adversarially trained large teacher model to guide the student's learning on both natural and adversarial examples in all loss terms. We empirically demonstrate the effectiveness of our RSLAD approach over existing adversarial training and distillation methods in improving the robustness of small models against state-of-the-art attacks, including AutoAttack. We also provide a set of insights into RSLAD and the importance of robust soft labels for adversarial robustness distillation.
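The role of robust soft labels can be illustrated with a stripped-down distillation objective: the student's outputs on both natural and adversarial inputs are pulled toward the robust teacher's soft predictions, with no hard labels anywhere. The weighting and the absence of a temperature are simplifying assumptions relative to the full RSLAD method.

```python
import torch
import torch.nn.functional as F

def robust_soft_label_loss(student_nat, student_adv, teacher_nat, alpha=0.5):
    """Distill a robust teacher's soft labels into a student on both
    natural and adversarial examples (simplified RSLAD-style sketch)."""
    soft = F.softmax(teacher_nat.detach(), dim=-1)       # robust soft labels
    kl_nat = F.kl_div(F.log_softmax(student_nat, -1), soft, reduction="batchmean")
    kl_adv = F.kl_div(F.log_softmax(student_adv, -1), soft, reduction="batchmean")
    return (1 - alpha) * kl_nat + alpha * kl_adv
```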
Interpreting Face Inference Models using Hierarchical Network Dissection ; This paper presents Hierarchical Network Dissection, a general pipeline to interpret the internal representation of face-centric inference models. Using a probabilistic formulation, our pipeline pairs units of the model with concepts in our Face Dictionary, a collection of facial concepts with corresponding sample images. Our pipeline is inspired by Network Dissection, a popular interpretability method for object-centric and scene-centric models. However, our formulation can deal with two important challenges of face-centric models that Network Dissection cannot address: (1) spatial overlap of concepts: different facial concepts can simultaneously occur in the same region of the image, like nose (facial part) and pointy nose (facial attribute); and (2) global concepts: some units have affinity to concepts that do not refer to specific locations of the face (e.g., apparent age). We use Hierarchical Network Dissection to dissect different face-centric inference models trained on widely used facial datasets. The results show that models trained for different tasks learn different internal representations. Furthermore, the interpretability results can reveal biases in the training data and interesting characteristics of the face-centric inference tasks. Finally, we conduct controlled experiments on biased data to showcase the potential of Hierarchical Network Dissection for bias discovery. The results illustrate how Hierarchical Network Dissection can be used to discover and quantify bias in the training data that is also encoded in the model.
Selection of inverse gamma and half-t priors for hierarchical models: sensitivity and recommendations ; While the importance of prior selection is well understood, establishing guidelines for selecting priors in hierarchical models has remained an active, and sometimes contentious, area of Bayesian methodology research. Choices of hyperparameters for individual families of priors are often discussed in the literature, but rarely are different families of priors compared under similar models and hyperparameters. Using simulated data, we evaluate the performance of inverse gamma and half-t priors for estimating the standard deviation of random effects in three hierarchical models: the 8-schools model, a random-intercepts longitudinal model, and a simple multiple-outcomes model. We compare the performance of the two prior families using a range of prior hyperparameters, some of which have been suggested in the literature, and others that allow for a direct comparison of pairs of half-t and inverse-gamma priors. Estimation of very small values of the random effect standard deviation led to convergence issues, especially for the half-t priors. For most settings, we found that the posterior distribution of the standard deviation had smaller bias under half-t priors than under their inverse-gamma counterparts. Inverse gamma priors generally gave similar coverage but smaller interval lengths than their half-t counterparts. Our results for these two prior families will inform prior specification for hierarchical models, allowing practitioners to better align their priors with their respective models and goals.
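The two prior families being compared are easy to visualize. Below, an inverse-gamma prior placed on the variance is transformed to a density on the standard deviation (Jacobian 2*sigma), and a half-t is a Student-t folded at zero; the hyperparameters are arbitrary examples, not the grid used in the study.

```python
import numpy as np
from scipy import stats

sigma = np.linspace(0.01, 5.0, 500)

# Inverse-gamma prior on sigma^2, expressed as a density on sigma
# via the change of variables |d(sigma^2)/d(sigma)| = 2*sigma.
inv_gamma_on_sigma = stats.invgamma.pdf(sigma**2, a=1.0, scale=1.0) * 2 * sigma

# Half-t prior on sigma: a Student-t with 3 degrees of freedom folded at zero.
half_t_on_sigma = 2 * stats.t.pdf(sigma, df=3, scale=1.0)

print(inv_gamma_on_sigma[:3], half_t_on_sigma[:3])
```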
Statistical methods for Mendelian models with multiple genes and cancers ; Risk evaluation to identify individuals who are at greater risk of cancer as a result of heritable pathogenic variants is a valuable component of individualized clinical management. Using principles of Mendelian genetics, Bayesian probability theory, and variant-specific knowledge, Mendelian models derive the probability of carrying a pathogenic variant and developing cancer in the future, based on family history. Existing Mendelian models are widely employed, but are generally limited to specific genes and syndromes. However, the upsurge of multi-gene panel germline testing has spurred the discovery of many new gene-cancer associations that are not presently accounted for in these models. We have developed PanelPRO, a flexible, efficient Mendelian risk prediction framework that can incorporate an arbitrary number of genes and cancers, overcoming the computational challenges that arise from the increased model complexity. Based on this framework, we implement an eleven-gene, eleven-cancer model, the largest Mendelian model created thus far. Using simulations and a clinical cohort with germline panel testing data, we evaluate model performance, validate the reverse-compatibility of our approach with existing Mendelian models, and illustrate its usage. Our implementation is freely available for research use in the PanelPRO R package.
How much pretraining data do language models need to learn syntax? ; Transformer-based pretrained language models achieve outstanding results in many well-known NLU benchmarks. However, while pretraining methods are very convenient, they are expensive in terms of time and resources. This calls for a study of the impact of pretraining data size on the knowledge of the models. We explore this impact on the syntactic capabilities of RoBERTa, using models trained on incremental sizes of raw text data. First, we use syntactic structural probes to determine whether models pretrained on more data encode a higher amount of syntactic information. Second, we perform a targeted syntactic evaluation to analyze the impact of pretraining data size on the syntactic generalization performance of the models. Third, we compare the performance of the different models on three downstream applications: part-of-speech tagging, dependency parsing, and paraphrase identification. We complement our study with an analysis of the cost-benefit trade-off of training such models. Our experiments show that while models pretrained on more data encode more syntactic knowledge and perform better on downstream applications, they do not always offer better performance across the different syntactic phenomena, and they come at a higher financial and environmental cost.
Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario ; This work presents biomedical and clinical language models for Spanish, developed by experimenting with different pretraining choices, such as masking at the word and subword level, varying the vocabulary size, and testing with domain data, looking for better language representations. Interestingly, in the absence of enough clinical data to train a model from scratch, we applied mixed-domain pretraining and cross-domain transfer approaches to generate a performant bio-clinical model suitable for real-world clinical data. We evaluated our models on Named Entity Recognition (NER) tasks for biomedical documents and challenging hospital discharge reports. When compared against the competitive mBERT and BETO models, we outperform them in all NER tasks by a significant margin. Finally, we studied the impact of the model's vocabulary on the NER performance by offering an interesting vocabulary-centric analysis. The results confirm that domain-specific pretraining is fundamental to achieving higher performance in downstream NER tasks, even within a mid-resource scenario. To the best of our knowledge, we provide the first biomedical and clinical transformer-based pretrained language models for Spanish, intending to boost native Spanish NLP applications in biomedicine. Our best models are freely available in the HuggingFace hub: https://huggingface.co/BSC-TeMU.
Variational Latent-State GPT for Semi-Supervised Task-Oriented Dialog Systems ; Recently, two approaches, fine-tuning large pretrained language models and variational training, have attracted significant interest, separately, for semi-supervised end-to-end task-oriented dialog (TOD) systems. In this paper, we propose the Variational Latent-State GPT model (VLS-GPT), which is the first to combine the strengths of the two approaches. Among many options of models, we propose the generative model and the inference model for variational learning of the end-to-end TOD system, both as auto-regressive language models based on GPT-2, which can be further trained over a mix of labeled and unlabeled dialog data in a semi-supervised manner. Variational training of VLS-GPT is both statistically and computationally more challenging than in previous variational learning works for sequential latent variable models, which use turn-level first-order Markovian assumptions. The inference model in VLS-GPT is non-Markovian due to the use of the Transformer architecture. In this work, we establish Recursive Monte Carlo Approximation (RMCA) to the variational objective with a non-Markovian inference model and prove its unbiasedness. Further, we develop the computational strategy of sampling-then-forward-computation to realize RMCA, which successfully overcomes the memory explosion issue of using GPT in variational learning and speeds up training. Semi-supervised TOD experiments are conducted on two benchmark multi-domain datasets in different languages: MultiWOZ2.1 and CrossWOZ. VLS-GPT is shown to significantly outperform both supervised-only and semi-supervised self-training baselines.
Data-Driven Modeling of Coarse-Mesh Turbulence for Reactor Transient Analysis Using Convolutional Recurrent Neural Networks ; Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3D) modeling capability is desired for modern nuclear-system codes. In coarse-mesh 3D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is a challenging task that requires an accurate and computationally efficient model to capture the unresolved fine-scale turbulence. In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model has a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural network architecture, combining a densely connected convolutional network and a long short-term memory (LSTM) network, is developed that can efficiently learn from the spatio-temporal CFD transient simulation results. The neural network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent viscosity field during the whole transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of applying the proposed data-driven approach to support coarse-mesh multi-dimensional modeling of advanced reactors.
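A toy version of such a convolutional-recurrent surrogate is sketched below: a small CNN encodes each snapshot of local flow features and an LSTM carries the transient in time, emitting a coarse-mesh turbulent-viscosity field per step. The plain CNN encoder and all sizes are placeholders for the paper's densely connected convolutional network.

```python
import torch
import torch.nn as nn

class ConvRecurrentSurrogate(nn.Module):
    """Toy CNN+LSTM surrogate mapping sequences of flow-feature
    snapshots to a coarse-mesh nu_t field (illustrative sizes only)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())   # -> (batch, 16*8*8)
        self.lstm = nn.LSTM(16 * 8 * 8, 128, batch_first=True)
        self.head = nn.Linear(128, 32 * 32)          # coarse-mesh turbulent-viscosity field

    def forward(self, frames):                        # frames: (batch, time, 4, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                     # carry the transient in time
        return self.head(out).view(b, t, 32, 32)

model = ConvRecurrentSurrogate()
nu_t = model(torch.randn(2, 5, 4, 32, 32))
```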
KroneckerBERT: Learning Kronecker Decomposition for Pretrained Language Models via Knowledge Distillation ; The development of over-parameterized pretrained language models has made a significant contribution toward the success of natural language processing. While over-parameterization of these models is the key to their generalization power, it makes them unsuitable for deployment on low-capacity devices. We push the limits of state-of-the-art Transformer-based pretrained language model compression using Kronecker decomposition. We use this decomposition for compression of the embedding layer, all linear mappings in the multi-head attention, and the feed-forward network modules in the Transformer layers. We perform intermediate-layer knowledge distillation using the uncompressed model as the teacher to improve the performance of the compressed model. We present KroneckerBERT, a compressed version of the BERT-base model obtained using this framework. We evaluate the performance of KroneckerBERT on well-known NLP benchmarks and show that, for a high compression factor of 19x (5% of the size of the BERT-base model), our KroneckerBERT outperforms state-of-the-art compression methods on the GLUE benchmark. Our experiments indicate that the proposed model has promising out-of-distribution robustness and is superior to the state-of-the-art compression methods on SQuAD.
Modeling the dynamic variability of sub-relativistic outer radiation belt electron fluxes using machine learning ; We present a set of neural network models that reproduce the dynamics of electron fluxes in the range of 50 keV to 1 MeV in the outer radiation belt. The Outer RadIation belt Electron Neural net model for Medium energy electrons (ORIENT-M) uses only solar wind conditions and geomagnetic indices as input. The models are trained on electron flux data from the Magnetic Electron Ion Spectrometer (MagEIS) instrument onboard the Van Allen Probes, and they can reproduce the dynamic variations of electron fluxes in different energy channels. The model results show a high coefficient of determination (R^2 ~ 0.78-0.92) on the test dataset, an out-of-sample 30-day period from February 25 to March 25, 2017, during which a geomagnetic storm took place, as well as on an out-of-sample one-year period after March 2018. In addition, the models are able to capture electron dynamics such as intensifications, decays, dropouts, and the Magnetic Local Time (MLT) dependence of the lower-energy (~100 keV) electron fluxes during storms. The models have reliable prediction capability and can be used for a wide range of space weather applications. The general framework of building our model is not limited to radiation belt fluxes, and could be used to build machine learning models for a variety of other plasma parameters in the Earth's magnetosphere.
Efficient Estimation in Tensor Ising Models ; The tensor Ising model is a discrete exponential family used for modeling binary data on networks with not just pairwise, but higher-order dependencies. A particularly important class of tensor Ising models are the tensor Curie-Weiss models, where all tuples of nodes of a particular order interact with the same intensity. The maximum likelihood estimator (MLE) is not explicit in this model, due to the presence of an intractable normalizing constant in the likelihood, and a computationally efficient alternative is the maximum pseudolikelihood estimator (MPLE). In this paper, we show that the MPLE is in fact as efficient as the MLE, in the Bahadur sense, in the 2-spin model, and for all values of the null parameter above log 2 in higher-order tensor models. Even if the null parameter happens to lie within the very small window between the threshold and log 2, the two estimators are equally efficient unless the alternative parameter is large. Therefore, not only is the MPLE computationally preferable to the MLE, it is also theoretically as efficient as the MLE over most of the parameter space. Our results extend to the more general class of Erdős-Rényi hypergraph Ising models, under slight sparsity, too.
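For the simplest (2-spin) Curie-Weiss case, the MPLE is a one-dimensional optimization that avoids the intractable normalizing constant: each conditional is P(x_i | rest) proportional to exp(beta * x_i * m_i), with m_i the mean of the other spins. The sketch below uses this textbook convention; normalization details vary across papers, so treat it as illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mple_curie_weiss(x):
    """Maximum pseudolikelihood estimate of beta in the 2-spin
    Curie-Weiss model from one sample x in {-1,+1}^n (sketch)."""
    n = x.size
    m = (x.sum() - x) / (n - 1)   # leave-one-out magnetizations m_i
    def neg_log_pl(beta):
        # log P(x_i | rest) = beta * x_i * m_i - log(2 cosh(beta * m_i))
        return -np.sum(beta * x * m - np.log(2 * np.cosh(beta * m)))
    return minimize_scalar(neg_log_pl, bounds=(0.0, 5.0), method="bounded").x

x = np.random.choice([-1.0, 1.0], size=500)   # toy sample (true beta = 0)
print(mple_curie_weiss(x))
```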
Higher order mimetic gravity after GW170817 ; On the 17th of August 2017, the discovery of the gravitational-wave event GW170817 and its optical counterpart GRB170817A, produced by the coalescence of two neutron stars, revealed a very small difference, of order O(10^-16), between the speed of light and the velocity of gravitational waves, C_T. This small deviation can be used as a strong constraint on modified gravity models. We concentrate on the Higher-Order expansion of Mimetic Gravity (HOMimG) model to specify the parametric space of three parameters of our model (a, b, and c), utilizing the observational constraint from GW170817/GRB170817A on C_T, besides two theoretical constraints on C_T^2 and C_s^2 that ensure the stability of the model and subluminal propagation of the scalar and tensor perturbations. Thereafter, we increase the accuracy of the parametric space with the aid of a further limit on the gamma parameter related to the age of the Universe. To determine the remaining parameter of the model, lambda, the potential of the model is specified, and another observational bound, related to the equation-of-state (EoS) parameter of dark energy, is taken into account. In consequence, we attain a viable HOMimG model confined by a number of observational and theoretical constraints. Finally, considering the concluded numerical ranges for the model parameters, and two different potentials (quadratic and quartic) to specify the lambda parameter, we show that the values of the model parameters are independent of the form of the potential.
An Optimized Dynamic Mode Decomposition Model Robust to Multiplicative Noise ; Dynamic mode decomposition (DMD) is an efficient tool for decomposing spatio-temporal data into a set of low-dimensional modes, yielding the oscillation frequencies and growth rates of physically significant modes. In this paper, we propose a novel DMD model that can be used for dynamical systems affected by multiplicative noise. We first derive a maximum a posteriori (MAP) estimator for the data-based model decomposition of a linear dynamical system corrupted by certain multiplicative noise. Applying penalty relaxation to the MAP estimator, we obtain the proposed DMD model, whose epigraphical limits are the MAP estimator and the conventional optimized DMD model. We also propose an efficient alternating gradient descent method for solving the proposed DMD model, and analyze its convergence behavior. The proposed model is demonstrated on both synthetic data and numerically generated one-dimensional combustor data, and is shown to have superior reconstruction properties compared to state-of-the-art DMD models. Considering that multiplicative noise is ubiquitous in numerous dynamical systems, the proposed DMD model opens up new possibilities for accurate data-based modal decomposition.
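For context, the conventional (exact) DMD that such models generalize can be written in a few lines of linear algebra; the noise-robust variant itself involves the MAP-derived penalty and is not reproduced here.

```python
import numpy as np

def exact_dmd(X, Y, r=10):
    """Standard exact DMD: fit Y ~ A X with a rank-r model and return
    the DMD eigenvalues and modes. Baseline algorithm, not the
    proposed multiplicative-noise-robust variant."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s          # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T @ np.diag(1.0 / s) @ W) / eigvals
    return eigvals, modes

data = np.random.rand(64, 51)                             # snapshots as columns
eigvals, modes = exact_dmd(data[:, :-1], data[:, 1:], r=5)
```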
Self-Consistent Determination of Long-Range Electrostatics in Neural Network Potentials ; Machine learning has the potential to revolutionize the field of molecular simulation through the development of efficient and accurate models of interatomic interactions. In particular, neural network models can describe interactions at the level of accuracy of quantum-mechanics-based calculations, but at a fraction of the cost, enabling the simulation of large systems over long timescales with ab initio accuracy. However, implicit in the construction of neural network potentials is an assumption of locality, wherein atomic arrangements on the scale of about a nanometer are used to learn interatomic interactions. Because of this assumption, the resulting neural network models cannot describe long-range interactions that play critical roles in dielectric screening and chemical reactivity. To address this issue, we introduce the self-consistent field neural network (SCFNN) model: a general approach for learning the long-range response of molecular systems in neural network potentials. The SCFNN model relies on a physically meaningful separation of the interatomic interactions into short- and long-range components, with a separate network to handle each component. We demonstrate the success of the SCFNN approach in modeling the dielectric properties of bulk liquid water, and show that the SCFNN model accurately predicts long-range polarization correlations and the response of water to applied electrostatic fields. Importantly, because of the separation of interactions inherent in our approach, the SCFNN model can be combined with many existing approaches for building neural network potentials. Therefore, we expect the SCFNN model to facilitate the proper description of long-range interactions in a wide variety of machine-learning-based force fields.
Personalized Retrogress-Resilient Framework for Real-World Medical Federated Learning ; Nowadays, deep learning methods with large-scale datasets can produce clinically useful models for computer-aided diagnosis. However, privacy and ethical concerns are increasingly critical, which makes it difficult to collect large quantities of data from multiple institutions. Federated Learning (FL) provides a promising decentralized solution for training models collaboratively by exchanging client models instead of private data. However, the server aggregation of existing FL methods is observed to degrade the model performance in real-world medical FL settings, a phenomenon termed retrogress. To address this problem, we propose a personalized retrogress-resilient framework that produces a superior personalized model for each client. Specifically, we devise a Progressive Fourier Aggregation (PFA) at the server to achieve more stable and effective global knowledge gathering by integrating client models gradually, from low frequency to high frequency. Moreover, with an introduced deputy model to receive the aggregated server model, we design a Deputy-Enhanced Transfer (DET) strategy at the client, which conducts the three steps of Recover-Exchange-Sublimate to improve the personalized local model by transferring the global knowledge smoothly. Extensive experiments on a real-world dermoscopic FL dataset show that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods, as does its generalization on an out-of-distribution cohort. The code and dataset are available at https://github.com/CityU-AIM-Group/PRR-FL.
FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes ; Context: Machine learning software can generate models that inappropriately discriminate against specific protected social groups (e.g., groups based on gender, ethnicity, etc.). Motivated by such results, software engineering researchers have proposed many methods for mitigating those discriminatory effects. While those methods are effective in mitigating bias, few of them can provide explanations of the root cause of the bias. Objective: We aim at better detection and mitigation of algorithmic discrimination in machine learning software. Method: Here we propose xFAIR, a model-based extrapolation method capable of both mitigating bias and explaining its cause. In our xFAIR approach, protected attributes are represented by models learned from the other independent variables, and these models offer extrapolations over the space between existing examples. We then use the extrapolation models to relabel protected attributes later seen in testing data or at deployment time. Our approach aims to offset the biased predictions of the classification model by rebalancing the distribution of protected attributes. Results: The experiments of this paper show that, without compromising original model performance, xFAIR can achieve significantly better group and individual fairness (as measured by different metrics) than benchmark methods. Moreover, when compared to another instance-based rebalancing method, our model-based approach shows faster runtime and thus better scalability. Conclusion: Algorithmic decision bias can be removed via extrapolation that smooths away outlier points. As evidence for this, our proposed xFAIR is performance-wise better (measured by fairness and performance metrics) than two state-of-the-art fairness algorithms.
A Mathematical Model of Thyroid Disease Response to Radiotherapy ; We present a mechanistic biomathematical model of molecular radiotherapy of thyroid disease. The general model consists of a set of differential equations describing the dynamics of different populations of thyroid cells with varying degrees of damage caused by radiotherapy (undamaged cells, sublethally damaged cells, doomed cells, and dead cells), as well as the dynamics of thyroglobulin and antithyroglobulin autoantibodies, which are important surrogates of treatment response. The model is presented in two flavours: on the one hand, as a deterministic continuous model, which is useful for fitting population data, and on the other hand, as a stochastic Markov model, which is particularly useful for investigating tumor control probabilities and treatment individualization. The model was used to fit the response dynamics (tumor/thyroid volumes, thyroglobulin and antithyroglobulin autoantibodies) observed in experimental studies of thyroid cancer and Graves' disease treated with I-131 radiotherapy. A qualitatively adequate fit of the model to the experimental data was achieved. We also used the model to investigate treatment individualization strategies for differentiated thyroid cancer, aiming to improve the tumor control probability. We found that simple individualization strategies based on the absorbed dose in the tumor and tumor radiosensitivity (both magnitudes that can potentially be determined individually for every patient) can lead to an important increase in tumor control probabilities.
To Recommend or Not: A Model-Based Comparison of Item-Matching Processes ; Recommender systems are central to modern online platforms, but a popular concern is that they may be pulling society in dangerous directions (e.g., towards filter bubbles). However, a challenge with measuring the effects of recommender systems is how to compare user outcomes under these systems to outcomes under a credible counterfactual world without such systems. We take a model-based approach to this challenge, introducing a dichotomy of process models that we can compare: (1) a recommender model, describing a generic item-matching process under a personalized recommender system, and (2) an organic model, describing a baseline counterfactual where users search for items without the mediation of any system. Our key finding is that the recommender and organic models result in dramatically different outcomes at both the individual and societal level, as supported by theorems and simulation experiments with real data. The two process models also induce different trade-offs during inference, where standard performance-improving techniques such as regularization/shrinkage have divergent effects. Shrinkage improves the mean squared error of matches in both settings, as expected, but at the cost of less diverse (less 'radical') items chosen in the recommender model, yet more diverse (more 'radical') items chosen in the organic model. These findings provide a formal language for how recommender systems may be fundamentally altering how we search for and interact with content, in a world increasingly mediated by such systems.
Using Steered Molecular Dynamics Tension for Assessing Quality of Computational Protein Structure Models ; The native structures of proteins, with the notable exception of intrinsically disordered proteins, in general take their most stable conformation under physiological conditions to maintain their structural framework, so that their biological function can be properly carried out. Experimentally, the stability of a protein can be measured by several means, among which the pulling experiment using the atomic force microscope (AFM) stands out as a unique method. AFM directly measures the resistance to unfolding, which can be quantified from the observed force-extension profile. It has been shown that key features observed in AFM pulling experiments can be well reproduced by computational molecular dynamics simulations. Here, we applied computational pulling to estimate the accuracy of computational protein structure models, under the hypothesis that structural stability is positively correlated with the accuracy, i.e., the closeness to the native structure, of a model. We used in total 4,929 structure models for 24 target proteins from the Critical Assessment of Techniques for Protein Structure Prediction (CASP) and investigated whether the magnitude of the break force, i.e., the force required to rearrange the model structure, extracted from the force profile was sufficient information for selecting near-native models. We found that near-native models can be successfully selected by examining their break forces, suggesting that a high break force indeed indicates high stability of a model. On the other hand, there were also near-native models that had relatively low peak forces. The mechanisms of the stability exhibited by the break forces are explored and discussed.
Two-loop Prediction of the Anomalous Magnetic Moment of the Muon in the Two-Higgs-Doublet Model with GM2Calc 2 ; We present an extension of the GM2Calc software to calculate the muon anomalous magnetic moment, a_mu^BSM, in the Two-Higgs-Doublet Model. The Two-Higgs-Doublet Model is one of the simplest and most popular extensions of the Standard Model, and one of the few single-field extensions that can give large contributions to a_mu^BSM. It is essential to include two-loop corrections to explain the long-standing discrepancy between the Standard Model prediction and the experimental measurement within the Two-Higgs-Doublet Model. The new version, GM2Calc 2, implements the state-of-the-art two-loop calculation for the general, flavour-violating Two-Higgs-Doublet Model as well as for the flavour-aligned Two-Higgs-Doublet Model and the type I, II, X and Y flavour-conserving variants. Input parameters can be provided in either the gauge basis or the mass basis, and we provide an easy-to-use SLHA-like command-line interface to specify them. Using this interface, users may also select between Two-Higgs-Doublet Model types and choose which contributions to apply. In addition, GM2Calc 2 provides interfaces in C, C++, Python and Mathematica, making it easy to interface with other codes.
Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results ; Given Spatial Variability Aware Neural Networks (SVANNs), the goal is to investigate mathematical or computational models for comparative physical interpretation towards their transparency (e.g., simulatability, decomposability and algorithmic transparency). This problem is important due to use-cases such as reusability, debugging, and explainability to a jury in a court of law. Challenges include a large number of model parameters, vacuous bounds on the generalization performance of neural networks, risk of overfitting, sensitivity to noise, etc., which all detract from the ability to interpret the models. Related work on either model-specific or model-agnostic post-hoc interpretation is limited due to a lack of consideration of physical constraints (e.g., mass balance) and properties (e.g., the second law of geography). This work investigates physical interpretation of SVANNs using novel comparative approaches based on geographically heterogeneous features. The proposed approach to feature-based physical interpretation is evaluated using a case study on wetland mapping. The proposed physical interpretation improves the transparency of SVANN models, and the analytical results highlight the trade-off between model transparency and model performance (e.g., F1-score). We also describe an interpretation based on geographically heterogeneous processes modeled as partial differential equations (PDEs).
Monotonic Safety for Scalable and Data-Efficient Probabilistic Safety Analysis ; Autonomous systems with machine learning-based perception can exhibit unpredictable behaviors that are difficult to quantify, let alone verify. Such behaviors are convenient to capture in probabilistic models, but probabilistic model checking of such models is difficult to scale, largely due to the nondeterminism added to models as a prerequisite for provable conservatism. Statistical model checking (SMC) has been proposed to address the scalability issue. However, it requires large amounts of data to account for the aforementioned nondeterminism, which in turn limits its scalability. This work introduces a general technique for reduction of nondeterminism based on assumptions of 'monotonic safety', which define a partial order between system states in terms of their probabilities of being safe. We exploit these assumptions to remove nondeterminism from controller-plant models to drastically speed up probabilistic model checking and statistical model checking, while providing provably conservative estimates as long as the safety is indeed monotonic. Our experiments demonstrate model-checking speedups of an order of magnitude while maintaining acceptable accuracy, and require much less data for accurate estimates when running SMC, even when monotonic safety does not perfectly hold and provable conservatism is not achieved.
ARFED: Attack-Resistant Federated averaging based on outlier elimination ; In federated learning, each participant trains its local model with its own data, and a global model is formed at a trusted server by aggregating model updates coming from these participants. Since the server has no effect on or visibility into the training procedure of the participants, in order to ensure privacy, the global model becomes vulnerable to attacks such as data poisoning and model poisoning. Although many defense algorithms have recently been proposed to address these attacks, they often make strong assumptions that do not agree with the nature of federated learning, such as assuming non-IID datasets. Moreover, they mostly lack comprehensive experimental analyses. In this work, we propose a defense algorithm called ARFED that does not make any assumptions about data distribution, update similarity of participants, or the ratio of malicious participants. ARFED mainly considers the outlier status of participant updates for each layer of the model architecture based on the distance to the global model. Hence, only the participants that do not have any outlier layer are involved in model aggregation. We have performed extensive experiments on diverse scenarios and shown that the proposed approach provides a robust defense against different attacks. To test the defense capability of ARFED in different conditions, we considered label flipping, Byzantine, and partial knowledge attacks for both IID and non-IID settings in our experimental evaluations. Moreover, we proposed a new attack, called the organized partial knowledge attack, where malicious participants use their training statistics collaboratively to define a common poisoned model. We have shown that organized partial knowledge attacks are more effective than independent attacks.
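As a rough illustration of the layer-wise rule described above, the following hedged Python sketch keeps only clients whose per-layer distances to the global model fall inside Tukey-style IQR fences; the exact outlier criterion and the fence factor `k` are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def arfed_style_aggregate(global_model, client_updates, k=1.5):
    """Sketch: a client participates in averaging only if none of its
    layers is an outlier w.r.t. its distance to the global model."""
    layers = list(global_model.keys())
    # Per-layer distance of every client update to the global model.
    dists = {l: np.array([np.linalg.norm(u[l] - global_model[l])
                          for u in client_updates]) for l in layers}
    keep = np.ones(len(client_updates), dtype=bool)
    for l in layers:
        q1, q3 = np.percentile(dists[l], [25, 75])
        iqr = q3 - q1
        # Tukey-style fences (assumed rule; the paper's criterion may differ).
        keep &= (dists[l] >= q1 - k * iqr) & (dists[l] <= q3 + k * iqr)
    # Average only the clients with no outlier layer at all.
    kept = [u for u, ok in zip(client_updates, keep) if ok]
    return {l: np.mean([u[l] for u in kept], axis=0) for l in layers}
```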
A Unified and Fast Interpretable Model for Predictive Analytics ; Predictive analytics aims to build machine learning models to predict behavior patterns and use predictions to guide decision-making. Predictive analytics is human-involved, thus the machine learning model is preferred to be interpretable. In the literature, the Generalized Additive Model (GAM) is a standard for interpretability. However, due to the one-to-many and many-to-one phenomena which appear commonly in real-world scenarios, existing GAMs have limitations in serving predictive analytics in terms of both accuracy and training efficiency. In this paper, we propose FXAM (Fast and eXplainable Additive Model), a unified and fast interpretable model for predictive analytics. FXAM extends GAM's modeling capability with a unified additive model for numerical, categorical, and temporal features. FXAM conducts a novel training procedure called Three-Stage Iteration (TSI). The three stages correspond to learning over numerical, categorical, and temporal features respectively. Each stage learns a local optimum by fixing the parameters of the other stages. We design joint learning over categorical features and partial learning over temporal features to achieve high accuracy and training efficiency. We prove that TSI is guaranteed to converge to the global optimum. We further propose a set of optimization techniques to speed up FXAM's training algorithm to meet the needs of interactive analysis. Thorough evaluations conducted on diverse data sets verify that FXAM significantly outperforms existing GAMs in terms of training speed and modeling categorical and temporal features. In terms of interpretability, we compare FXAM with the typical post-hoc approach XGBoost+SHAP on two real-world scenarios, which shows the superiority of FXAM's inherent interpretability for predictive analytics.
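To make the stage-wise idea concrete, here is a minimal runnable backfitting sketch in the spirit of TSI, reduced to two stages (a binned smoother for one numerical feature, per-level means for one categorical feature); the binning, iteration count, and two-stage simplification are illustrative assumptions rather than FXAM's actual algorithm.

```python
import numpy as np

def tsi_style_backfit(x_num, x_cat, y, n_iters=20, n_bins=16):
    """Each stage refits its additive component on the residual left by
    the other component, holding that other component fixed."""
    f_num = np.zeros_like(y, dtype=float)
    f_cat = np.zeros_like(y, dtype=float)
    # Quantile bins act as a crude per-feature smoother (assumption).
    edges = np.quantile(x_num, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x_num, edges)
    for _ in range(n_iters):
        r = y - f_cat                                        # numerical stage
        f_num = np.array([r[bins == b].mean() for b in bins])
        r = y - f_num                                        # categorical stage
        f_cat = np.array([r[x_cat == c].mean() for c in x_cat])
    return f_num, f_cat  # fitted additive components, one value per sample
```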
Improving the robustness and accuracy of biomedical language models through adversarial training ; Deep transformer neural network models have improved the predictive accuracy of intelligent text processing systems in the biomedical domain. They have obtained state-of-the-art performance scores on a wide variety of biomedical and clinical Natural Language Processing (NLP) benchmarks. However, the robustness and reliability of these models has been less explored so far. Neural NLP models can be easily fooled by adversarial samples, i.e., minor changes to input that preserve the meaning and understandability of the text but force the NLP system to make erroneous decisions. This raises serious concerns about the security and trustworthiness of biomedical NLP systems, especially when they are intended to be deployed in real-world use cases. We investigated the robustness of several transformer neural language models, i.e., BioBERT, SciBERT, BioMed-RoBERTa, and Bio-ClinicalBERT, on a wide range of biomedical and clinical text processing tasks. We implemented various adversarial attack methods to test the NLP systems in different attack scenarios. Experimental results showed that the biomedical NLP models are sensitive to adversarial samples; their performance dropped on average by 21 and 18.9 absolute percent on character-level and word-level adversarial noise, respectively. Conducting extensive adversarial training experiments, we fine-tuned the NLP models on a mixture of clean samples and adversarial inputs. Results showed that adversarial training is an effective defense mechanism against adversarial noise; the models' robustness improved on average by 11.3 absolute percent. In addition, the models' performance on clean data increased on average by 2.4 absolute percent, demonstrating that adversarial training can boost the generalization abilities of biomedical NLP systems.
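The adversarial fine-tuning recipe described above (training on a mixture of clean and adversarial inputs) can be sketched generically; the `attack` callable and the 50/50 mixing weight below are placeholders, not the paper's specific attack methods or ratio.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, batch, attack, mix=0.5):
    """One hedged training step on a clean/adversarial mixture. `attack`
    is any function producing perturbed copies of the batch, e.g. a
    character- or word-level substitution attack (assumed interface)."""
    inputs, labels = batch
    adv_inputs = attack(model, inputs, labels)   # adversarial versions
    optimizer.zero_grad()
    loss = (1 - mix) * loss_fn(model(inputs), labels) \
         + mix * loss_fn(model(adv_inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```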
A transformer-based model for default prediction in mid-cap corporate markets ; In this paper, we study mid-cap companies, i.e., publicly traded companies with less than US$10 billion in market capitalisation. Using a large dataset of US mid-cap companies observed over 30 years, we look to predict the default probability term structure over the medium term and understand which data sources (i.e., fundamental, market or pricing data) contribute most to the default risk. Whereas existing methods typically require that data from different time periods are first aggregated and turned into cross-sectional features, we frame the problem as a multi-label time-series classification problem. We adapt transformer models, a state-of-the-art deep learning model emanating from the natural language processing domain, to the credit risk modelling setting. We also interpret the predictions of these models using attention heat maps. To optimise the model further, we present a custom loss function for multi-label classification and a novel multi-channel architecture with differential training that gives the model the ability to use all input data efficiently. Our results show the proposed deep learning architecture's superior performance, resulting in a 13% improvement in AUC (Area Under the receiver operating characteristic Curve) over traditional models. We also demonstrate how to produce an importance ranking for the different data sources and the temporal relationships using a Shapley approach specific to these models.
On the Conservation of Turbulence Energy in Turbulence Transport Models ; Zank et al. developed models describing the transport of low-frequency incompressible and nearly incompressible turbulence in inhomogeneous flows. The formalism was based on expressing the fluctuating variables in terms of the Elsässer variables and then taking moments subject to various closure hypotheses. The turbulence transport models are different according to whether the plasma beta regime is large, or of order 1 or smaller. Here, we show explicitly that the two sets of turbulence transport models admit a conservation representation that resembles the well-known WKB transport equation for Alfvén wave energy density after introducing appropriate definitions of the pressure associated with the turbulent fluctuations. This includes introducing a distinct turbulent pressure tensor for 3D incompressible turbulence (the large plasma beta limit) and pressure tensors for quasi-2D and slab turbulence (the plasma beta order 1 or small regimes) that generalize the form of the WKB pressure tensor. Various limits of the different turbulent pressure tensors are discussed. However, the analogy between the conservation form of the turbulence transport models and the WKB model is not close, for multiple reasons, including that the turbulence models express fully nonlinear physical processes, unlike the strictly linear WKB description. The analysis presented here serves both as a check on the validity and correctness of the turbulence transport models and provides greater transparency of the energy dissipation term and the turbulent pressure in our models, which is important for many practical applications.
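For reference, the well-known WKB transport equation alluded to above is commonly written (for outward-propagating Alfvén waves in a background flow $\mathbf{U}$ with Alfvén velocity $\mathbf{V}_A$; quoted here from the general literature, not from this paper's derivation) as

```latex
\begin{equation}
  \frac{\partial E_w}{\partial t}
  + \nabla \cdot \big[ (\mathbf{U} + \mathbf{V}_A)\, E_w \big]
  + \frac{E_w}{2}\, \nabla \cdot \mathbf{U} = 0 ,
\end{equation}
```

where $E_w$ is the wave energy density; the last term can be read as work done by the WKB wave pressure $p_w = E_w/2$, the scalar quantity that the turbulent pressure tensors discussed above generalize.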
Ensembling Off-the-shelf Models for GAN Training ; The advent of large-scale training has produced a cornucopia of powerful visual recognition models. However, generative models, such as GANs, have traditionally been trained from scratch in an unsupervised manner. Can the collective knowledge from a large bank of pretrained vision models be leveraged to improve GAN training? If so, with so many models to choose from, which ones should be selected, and in what manner are they most effective? We find that pretrained computer vision models can significantly improve performance when used in an ensemble of discriminators. Notably, the particular subset of selected models greatly affects performance. We propose an effective selection mechanism, by probing the linear separability between real and fake samples in pretrained model embeddings, choosing the most accurate model, and progressively adding it to the discriminator ensemble. Interestingly, our method can improve GAN training in both limited-data and large-scale settings. Given only 10k training samples, our FID on LSUN Cat matches the StyleGAN2 trained on 1.6M images. On the full dataset, our method improves FID by 1.5x to 2x on the cat, church, and horse categories of LSUN.
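A hedged sketch of the linear-probe selection step described above: for each candidate pretrained encoder, fit a linear classifier separating real from generated samples in its embedding space and rank encoders by that accuracy. The `embed_fns` mapping and probe settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_models_by_linear_probe(embed_fns, real_images, fake_images):
    """embed_fns: dict of {name: function returning an (n, d) feature array}.
    Returns encoder names sorted by real-vs-fake linear separability."""
    y = np.concatenate([np.ones(len(real_images)), np.zeros(len(fake_images))])
    scores = {}
    for name, embed in embed_fns.items():
        X = np.concatenate([embed(real_images), embed(fake_images)])
        clf = LogisticRegression(max_iter=1000)
        scores[name] = cross_val_score(clf, X, y, cv=3).mean()
    # Most linearly separable embedding = most informative discriminator feature.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```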
Classical vertex model dualities in a family of 2D frustrated quantum antiferromagnets ; We study a general class of easy-axis spin models on a lattice of corner-sharing even-sided polygons with all-to-all interactions within a plaquette. The low-energy description corresponds to a quantum dimer model on a dual lattice of even coordination number with a multi-dimer constraint. At an appropriately constructed frustration-free Rokhsar-Kivelson (RK) point, the ground state wavefunction can be exactly mapped onto a classical vertex model on the dual lattice. When the dual lattice is bipartite, the vertex models are bonded and are self-dual under Wegner's duality, with the self-dual point corresponding to the RK point of the original multi-dimer model. We argue that the self-dual point is a critical point based on known exact solutions to some of the vertex models. When the dual lattice is non-bipartite, the vertex model is arrowed, and we use numerical methods to argue that there is no phase transition as a function of the vertex weights. Motivated by these wavefunction dualities, we construct two other distinct families of frustration-free Hamiltonians whose ground states can be mapped onto these vertex models. Many of these RK Hamiltonians provably host $\mathbb{Z}_2$ topologically ordered phases.
Improving Robustness and Uncertainty Modelling in Neural Ordinary Differential Equations ; Neural ordinary differential equations (NODE) have been proposed as a continuous-depth generalization of popular deep learning models such as Residual networks (ResNets). They provide parameter efficiency and automate the model selection process in deep learning models to some extent. However, they lack the much-required uncertainty modelling and robustness capabilities which are crucial for their use in several real-world applications such as autonomous driving and healthcare. We propose a novel and unique approach to model uncertainty in NODE by considering a distribution over the end-time T of the ODE solver. The proposed approach, latent time NODE (LT-NODE), treats T as a latent variable and applies Bayesian learning to obtain a posterior distribution over T from the data. In particular, we use variational inference to learn an approximate posterior and the model parameters. Prediction is done by considering the NODE representations from different samples of the posterior and can be done efficiently using a single forward pass. As T implicitly defines the depth of a NODE, a posterior distribution over T would also help in model selection in NODE. We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times. ALT-NODE uses amortized variational inference to learn an approximate posterior using inference networks. We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
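A minimal sketch of the prediction rule described above, assuming a `torchdiffeq`-style `odeint(f, h0, t)` solver: sample end-times from the approximate posterior, read the ODE state at every sampled depth in a single solver call, and average the resulting class probabilities. The helper names (`q_T_sample`, `classifier`) are hypothetical.

```python
import torch
from torchdiffeq import odeint  # assumed dependency; signature odeint(f, y0, t)

def lt_node_predict(f, h0, q_T_sample, classifier, n_samples=8):
    """Posterior-averaged prediction over sampled end-times T.
    Assumes q_T_sample returns distinct positive samples."""
    Ts, _ = torch.sort(q_T_sample(n_samples))     # sampled end-times, ascending
    t = torch.cat([torch.zeros(1), Ts])           # one forward pass over all T
    states = odeint(f, h0, t)[1:]                 # ODE state at each sampled depth
    probs = torch.softmax(classifier(states), dim=-1)
    return probs.mean(dim=0)                      # average over posterior samples
```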
Streaming Multi-Talker ASR with Token-Level Serialized Output Training ; This paper proposes token-level serialized output training (tSOT), a novel framework for streaming multi-talker automatic speech recognition (ASR). Unlike existing streaming multi-talker ASR models using multiple output branches, the tSOT model has only a single output branch that generates recognition tokens (e.g., words, subwords) of multiple speakers in chronological order based on their emission times. A special token that indicates the change of "virtual" output channels is introduced to keep track of the overlapping utterances. Compared to the prior streaming multi-talker ASR models, the tSOT model has the advantages of less inference cost and a simpler model architecture. Moreover, in our experiments with the LibriSpeechMix and LibriCSS datasets, the tSOT-based transformer transducer model achieves state-of-the-art word error rates, surpassing the prior results by a significant margin. For non-overlapping speech, the tSOT model is on par with a single-talker ASR model in terms of both accuracy and computational cost, opening the door for deploying one model for both single- and multi-talker scenarios.
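The serialization scheme is easy to state in code. Below is a hedged sketch of how a tSOT-style reference transcription could be built from time-stamped tokens: sort all tokens by emission time and emit a channel-change token at each speaker switch. The token string and data structures are illustrative.

```python
CC = "<cc>"  # illustrative channel-change token

def serialize_tsot(utterances):
    """utterances: list of (speaker, [(time, token), ...]).
    Returns one token stream with channel-change markers."""
    stream = sorted(
        (t, spk, tok) for spk, toks in utterances for t, tok in toks)
    out, current = [], None
    for _, spk, tok in stream:
        if current is not None and spk != current:
            out.append(CC)           # overlapping speech: switch virtual channel
        out.append(tok)
        current = spk
    return out

# E.g., A saying "hello world" overlapped by B saying "hi" serializes to
# ["hello", "<cc>", "hi", "<cc>", "world"]:
print(serialize_tsot([("A", [(0.0, "hello"), (1.0, "world")]),
                      ("B", [(0.5, "hi")])]))
```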
Targeted-BEHRT: Deep learning for observational causal inference on longitudinal electronic health records ; Observational causal inference is useful for decision making in medicine when randomized clinical trials (RCTs) are infeasible or non-generalizable. However, traditional approaches fail to deliver unconfounded causal conclusions in practice. The rise of doubly robust nonparametric tools, coupled with the growth of deep learning for capturing rich representations of multimodal data, offers a unique opportunity to develop and test such models for causal inference on comprehensive electronic health records (EHR). In this paper, we investigate causal modelling of an RCT-established null causal association: the effect of antihypertensive use on incident cancer risk. We develop a dataset for our observational study and a Transformer-based model, Targeted BEHRT, coupled with doubly robust estimation, to estimate the average risk ratio (RR). We compare our model to benchmark statistical and deep learning models for causal inference in multiple experiments on semi-synthetic derivations of our dataset with various types and intensities of confounding. In order to further test the reliability of our approach, we test our model in situations of limited data. We find that our model provides more accurate estimates of RR (least sum absolute error from ground truth) compared to benchmarks for risk ratio estimation on high-dimensional EHR across experiments. Finally, we apply our model to investigate the original case study (antihypertensives' effect on cancer) and demonstrate that our model generally captures the validated null association.
A Maxwell-Ampère Nernst-Planck Framework for Modeling Charge Dynamics ; Understanding the properties of charge dynamics is crucial to many practical applications, such as electrochemical energy devices and transmembrane ion channels. This work proposes a Maxwell-Ampère Nernst-Planck (MANP) framework for the description of charge dynamics. The MANP model with a curl-free condition on the electric displacement is shown to be energy dissipative with respect to a convex free-energy functional, and demonstrated to be equivalent to the Poisson-Nernst-Planck model. By the energy dissipation law, the steady state of the MANP model reproduces the charge-conserving Poisson-Boltzmann (PB) theory, providing an alternative energy-stable approach to study the PB theory. In order to achieve the curl-free condition, a companion local curl-free relaxation algorithm, which is shown to naturally preserve the discrete Gauss's law and converge robustly with linear computational complexity, is developed for the MANP model. One of the main advantages of our development is that it can efficiently deal with space-dependent permittivity instead of solving the variable-coefficient Poisson's equation. Many-body effects such as ionic steric effects and Coulomb correlations can be incorporated within the MANP framework to derive modified MANP models for problems in which the mean-field approximation fails. Numerical results on the charge dynamics with such beyond-mean-field effects in inhomogeneous dielectric environments are presented to demonstrate the performance of the MANP models in the description of charge dynamics, illustrating that the proposed MANP model provides a general framework for modeling charge dynamics.
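For orientation, the Nernst-Planck building block shared by the PNP and MANP descriptions is the standard mass-conservation law with drift-diffusion flux (textbook form; the paper's Maxwell-Ampère update of the electric displacement, which replaces the Poisson solve subject to the curl-free constraint, is not reproduced here):

```latex
\begin{equation}
  \frac{\partial c_i}{\partial t}
  = \nabla \cdot \left[ D_i \left( \nabla c_i
  + \frac{z_i e}{k_B T}\, c_i \nabla \phi \right) \right],
\end{equation}
```

where $c_i$, $D_i$ and $z_i$ are the concentration, diffusivity and valence of ionic species $i$, and $\phi$ is the electrostatic potential.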
Learning physics-informed simulation models for soft robotic manipulation: A case study with dielectric elastomer actuators ; Soft actuators offer a safe, adaptable approach to tasks like gentle grasping and dexterous manipulation. Creating accurate models to control such systems, however, is challenging due to the complex physics of deformable materials. Accurate Finite Element Method (FEM) models incur prohibitive computational complexity for closed-loop use. Using a differentiable simulator is an attractive alternative, but their applicability to soft actuators and deformable materials remains under-explored. This paper presents a framework that combines the advantages of both. We learn a differentiable model consisting of a material properties neural network and an analytical dynamics model of the remainder of the manipulation task. This physics-informed model is trained using data generated from FEM, and can be used for closed-loop control and inference. We evaluate our framework on a dielectric elastomer actuator (DEA) coin-pulling task. We simulate the task of using a DEA to pull a coin along a surface with frictional contact using FEM, and evaluate the physics-informed model for simulation, control, and inference. Our model attains 5% simulation error compared to FEM, and we use it as the basis for an MPC controller that requires fewer iterations to converge than model-free actor-critic, PD, and heuristic policies.
Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning ; Federated Learning (FL) enables collaborative training of Deep Learning (DL) models where the data is retained locally. Like DL, FL has severe security weaknesses that attackers can exploit, e.g., model inversion and backdoor attacks. Model inversion attacks reconstruct the data from the training datasets, whereas backdoors misclassify only classes containing specific properties, e.g., a pixel pattern. Backdoors are prominent in FL and aim to poison every client model, while model inversion attacks can target even a single client. This paper introduces a novel technique to allow backdoor attacks to be client-targeted, compromising a single client while the rest remain unchanged. The attack takes advantage of state-of-the-art model inversion and backdoor attacks. Precisely, we leverage a Generative Adversarial Network to perform the model inversion. Afterward, we shadow-train the FL network, in which, using a Siamese Neural Network, we can identify, target, and backdoor the victim's model. Our attack has been validated using the MNIST, F-MNIST, EMNIST, and CIFAR-100 datasets under different settings, achieving up to 99% accuracy on both source (clean) and target (backdoor) classes and against state-of-the-art defenses, e.g., Neural Cleanse, opening a novel threat model to be considered in the future.
Efficient and certified solution of parametrized one-way coupled problems through DEIM-based data projection across non-conforming interfaces ; One of the major challenges of coupled problems is to manage non-conforming meshes at the interface between two models and/or domains, due to different numerical schemes or domain discretizations employed. Moreover, very often complex submodels depend on (e.g., physical or geometrical) parameters. Understanding how outputs of interest are affected by parameter variations thus plays a key role in gaining useful insights on the problem's physics; however, expensive repeated solutions of the problem using high-fidelity, full-order models are often unaffordable. In this paper, we propose a parametric reduced order modeling (ROM) technique for parametrized one-way coupled problems made of a first independent model, the master model, and a second model, the slave model, that depends on the master model through Dirichlet interface conditions. We combine a reduced basis (RB) method, applied to each subproblem, with the discretized empirical interpolation method (DEIM) to efficiently interpolate or project Dirichlet data across conforming and non-conforming meshes at the domains' interface, building a low-dimensional representation of the overall coupled problem. The proposed technique is then numerically verified by considering a series of test cases involving both steady and unsteady problems, and deriving a posteriori error estimates on the solution of the coupled problem in both cases. This work arises from the need to solve staggered cardiac electrophysiological models and represents the first step towards the setting of ROM techniques for the more general two-way Dirichlet-Neumann coupled problems solved with domain decomposition sub-structuring methods, when interface non-conformity is involved.
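The interface-data transfer rests on the standard (D)EIM interpolation, which can be stated for orientation (standard form from the DEIM literature; the paper's precise master-to-slave mapping may differ in details): given a reduced basis $U \in \mathbb{R}^{n \times m}$ for the interface data and a selection matrix $P$ picking $m$ interpolation indices, any trace $g$ is approximated from just those $m$ sampled values,

```latex
\begin{equation}
  g \;\approx\; U \left( P^{\mathsf{T}} U \right)^{-1} P^{\mathsf{T}} g ,
\end{equation}
```

so only a handful of master-side values need to be evaluated and carried across the (possibly non-conforming) interface.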
Hybrid classifiers of pairwise Markov models ; The article studies the segmentation problem (also known as the classification problem) with pairwise Markov models (PMMs). A PMM is a process where the observation process and the underlying state sequence form a two-dimensional Markov chain; it is a natural generalization of a hidden Markov model. To demonstrate the richness of the class of PMMs, we examine closer a few examples of rather different types of PMMs: a model for two related Markov chains, a model that allows one to model an inhomogeneous Markov chain as a homogeneous one, and a semi-Markov model. The segmentation problem assumes that one of the marginal processes is observed and the other one is not; the problem is to estimate the unobserved state path given the observations. The standard state path estimators often used are the so-called Viterbi path (a sequence with maximum state path probability given the observations) or the pointwise maximum a posteriori (PMAP) path (a sequence that maximizes the conditional state probability for given observations pointwise). Both these estimators have their limitations; therefore, we derive formulas for calculating the so-called hybrid path estimators, which interpolate between the PMAP and Viterbi paths. We apply the introduced algorithms to the studied models in order to demonstrate the properties of different segmentation methods, and to illustrate the large variation in behaviour of different segmentation methods in different PMMs. The studied examples show that a segmentation method should always be chosen with care by taking into account the particular model of interest.
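One common way to write such an interpolating family (a hedged sketch of the generalized risk-based decoding idea; the paper's exact parametrization may differ) is as a convex combination of the Viterbi and PMAP criteria:

```latex
\begin{equation}
  \hat{y} = \arg\max_{y_{1:n}}\;
  (1-\alpha)\,\log p\!\left(y_{1:n} \mid x_{1:n}\right)
  + \alpha \sum_{t=1}^{n} \log p\!\left(y_t \mid x_{1:n}\right),
  \qquad \alpha \in [0,1],
\end{equation}
```

recovering the Viterbi path at $\alpha = 0$ and the PMAP path at $\alpha = 1$, with the whole family computable by dynamic programming.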
SolidGen: An Autoregressive Model for Direct B-rep Synthesis ; The Boundary representation (B-rep) format is the de facto shape representation in computer-aided design (CAD) to model solid and sheet objects. Recent approaches to generating CAD models have focused on learning sketch-and-extrude modeling sequences that are executed by a solid modeling kernel in post-process to recover a B-rep. In this paper we present a new approach that enables learning from and synthesizing B-reps without the need for supervision through CAD modeling sequence data. Our method, SolidGen, is an autoregressive neural network that models the B-rep directly by predicting the vertices, edges, and faces using Transformer-based and pointer neural networks. Key to achieving this is our Indexed Boundary Representation, which references B-rep vertices, edges and faces in a well-defined hierarchy to capture the geometric and topological relations suitable for use with machine learning. SolidGen can be easily conditioned on contexts (e.g., class labels, images, and voxels) thanks to its probabilistic modeling of the B-rep distribution. We demonstrate qualitatively, quantitatively, and through perceptual evaluation by human subjects that SolidGen can produce high quality, realistic CAD models.
Local-Adaptive Face Recognition via Graph-based Meta-Clustering and Regularized Adaptation ; Due to the rising concern over data privacy, it is reasonable to assume that local client data cannot be transferred to a centralized server, nor is the associated identity label provided. To support continuous learning and fill the last-mile quality gap, we introduce a new problem setup called Local-Adaptive Face Recognition (LaFR). Leveraging the environment-specific local data after the deployment of the initial global model, LaFR aims at getting optimal performance by training local-adapted models automatically and without supervision, as opposed to fixing their initial global model. We achieve this by a newly proposed embedding cluster model based on Graph Convolution Networks (GCN), which is trained via a meta-optimization procedure. Compared with previous works, our meta-clustering model can generalize well in unseen local environments. With the pseudo identity labels from the clustering results, we further introduce novel regularization techniques to improve the model adaptation performance. Extensive experiments on racial and internal sensor adaptation demonstrate that our proposed solution is more effective for adapting face recognition models in each specific environment. Meanwhile, we show that LaFR can further improve the global model by a simple federated aggregation over the updated local models.
Deep Interactive Learning-based ovarian cancer segmentation of H&E-stained whole slide images to study morphological patterns of BRCA mutation ; Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns to predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation from pathologists is generally a bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time. Instead of annotating all pixels from cancer and non-cancer regions on giga-pixel whole slide images, an iterative process of annotating mislabeled regions from a segmentation model and training/fine-tuning the model with the additional annotation can reduce the time. Especially, employing a pretrained segmentation model can further reduce the time compared to starting annotation from scratch. We trained an accurate ovarian cancer segmentation model with a pretrained breast segmentation model by 3.5 hours of manual annotation, which achieved an intersection-over-union of 0.74, recall of 0.86, and precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we attempted to train another deep learning model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
Dynamic simulation of aortic valve stenosis using a lumped parameter cardiovascular system model with flow regime dependent valve pressure loss characteristics ; Valvular heart diseases are a growing concern in impoverished parts of the world, such as Southern Africa, claiming more than 31% of total deaths related to cardiovascular diseases. The ability to model the effects of regurgitant and obstructive lesions on the valve body can assist clinicians in preparing personalised treatments. In the present work, a multi-compartment lumped parameter model of the human cardiovascular system is developed, with a newly proposed valve modelling approach which accounts for geometry- and flow-regime-dependent pressure drops along with the valve cusp motion. The model is applied to study various degrees of aortic stenosis using typical human cardiovascular parameters. The results generated with the proposed model are compared to predictions using previously published valve modelling approaches, and both sets of results are compared to typical local and global physiological parameters found in the literature, such as left-ventricular systolic pressures, peak and mean aortic valve pressure drops, and vena contracta velocities. The results show that the previously published valve models under-predict the expected severely stenosed peak and mean transvalvular pressure drops by approximately 47% and 30% respectively, whereas the newly proposed model under-predicts the peak pressure drop by 20% and over-predicts the mean pressure drop by 7%.
Training Entire-Space Models for Target-oriented Opinion Words Extraction ; Target-oriented opinion words extraction (TOWE) is a subtask of aspect-based sentiment analysis (ABSA). Given a sentence and an aspect term occurring in the sentence, TOWE extracts the corresponding opinion words for the aspect term. TOWE has two types of instances. In the first type, aspect terms are associated with at least one opinion word, while in the second type, aspect terms do not have corresponding opinion words. However, previous studies trained and evaluated their models with only the first type of instance, resulting in a sample selection bias problem. Specifically, TOWE models were trained with only the first type of instance, while these models would be utilized to make inferences on the entire space with both types of instances. Thus, the generalization performance will be hurt. Moreover, the performance of these models on the first type of instance cannot reflect their performance on the entire space. To validate the sample selection bias problem, four popular TOWE datasets containing only aspect terms associated with at least one opinion word are extended to additionally include aspect terms without corresponding opinion words. Experimental results on these datasets show that training TOWE models on the entire space will significantly improve model performance, and evaluating TOWE models only on the first type of instance will overestimate model performance.
Towards Data-Free Model Stealing in a Hard Label Setting ; Machine learning models deployed as a service (MLaaS) are susceptible to model stealing attacks, where an adversary attempts to steal the model within a restricted access framework. While existing attacks demonstrate near-perfect clone-model performance using softmax predictions of the classification network, most of the APIs allow access to only the top-1 labels. In this work, we show that it is indeed possible to steal machine learning models by accessing only top-1 predictions (Hard Label setting), without access to model gradients (Black-Box setting) or even the training dataset (Data-Free setting), within a low query budget. We propose a novel GAN-based framework that trains the student and generator in tandem to steal the model effectively, while overcoming the challenge of the hard label setting by utilizing gradients of the clone network as a proxy to the victim's gradients. We propose to overcome the large query costs associated with a typical Data-Free setting by utilizing publicly available (potentially unrelated) datasets as a weak image prior. We additionally show that even in the absence of such data, it is possible to achieve state-of-the-art results within a low query budget using synthetically crafted samples. We are the first to demonstrate the scalability of Model Stealing in a restricted access setting on a 100-class dataset as well.
Open- vs Closed-ended questions in attitudinal surveys: comparing, combining, and interpreting using natural language processing ; To improve the traveling experience, researchers have been analyzing the role of attitudes in travel behavior modeling. Although most researchers use closed-ended surveys, the appropriate method to measure attitudes is debatable. Topic Modeling could significantly reduce the time to extract information from open-ended responses and eliminate subjective bias, thereby alleviating analyst concerns. Our research uses Topic Modeling to extract information from open-ended questions and compares its performance with closed-ended responses. Furthermore, some respondents might prefer answering questions using their preferred questionnaire type. So, we propose a modeling framework that allows respondents to use their preferred questionnaire type to answer the survey and enables analysts to use the modeling frameworks of their choice to predict behavior. We demonstrate this using a dataset collected in the USA that measures the intention to use Autonomous Vehicles for commute trips. Respondents were presented with alternative questionnaire versions (open- and closed-ended). Since our objective was also to compare the performance of alternative questionnaire versions, the survey was designed to eliminate influences resulting from statements, behavioral framework, and the choice experiment. Results indicate the suitability of using Topic Modeling to extract information from open-ended responses; however, the models estimated using the closed-ended questions perform better. Besides, the proposed model performs better than the models used currently. Furthermore, our proposed framework will allow respondents to choose the questionnaire type to answer, which could be particularly beneficial to them when using voice-based surveys.
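A hedged sketch of the kind of pipeline described above: turn free-text survey answers into a document-term matrix and fit LDA, so the per-response topic shares can enter a behavior model as attitudinal indicators. The hyperparameters and preprocessing choices here are illustrative, not the paper's settings.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def extract_topics(open_ended_responses, n_topics=8, top_k=10):
    """Fit LDA on open-ended survey responses; return per-response topic
    shares and the top words characterizing each topic."""
    vec = CountVectorizer(stop_words="english", min_df=2)
    X = vec.fit_transform(open_ended_responses)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    topics = [[vocab[i] for i in comp.argsort()[-top_k:][::-1]]
              for comp in lda.components_]
    return lda.transform(X), topics  # (doc-topic shares, top words per topic)
```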
Deep Sequence Modeling for Anomalous ISP Traffic Prediction ; Internet traffic in the real world is susceptible to various external and internal factors which may abruptly change the normal traffic flow. Those unexpected changes are considered outliers in traffic. Deep sequence models have been used to predict complex IP traffic, but their comparative performance for anomalous traffic has not been studied extensively. In this paper, we investigated and evaluated the performance of different deep sequence models for anomalous traffic prediction. Several deep sequence models were implemented to predict real traffic without and with outliers, showing the significance of outlier detection in real-world traffic prediction. First, two different outlier detection techniques, the Three-Sigma rule and Isolation Forest, were applied to identify the anomalies. Second, we adjusted those abnormal data points using the Backward Filling technique before training the model. Finally, the performance of different models was compared for abnormal and adjusted traffic. The LSTM Encoder-Decoder (LSTM-En-De) is the best prediction model in our experiment, reducing the deviation between actual and predicted traffic by more than 11% after adjusting the outliers. All other models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), LSTM-En-De with Attention layer (LSTM-En-De-Atn), and Gated Recurrent Unit (GRU), show better prediction after replacing the outliers, decreasing prediction error by more than 29%, 24%, 19%, and 10% respectively. Our experimental results indicate that the outliers in the data can significantly impact the quality of the prediction. Thus, outlier detection and mitigation assist the deep sequence model in learning the general trend and making better predictions.
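The preprocessing step described above is simple to sketch. The following hedged pandas snippet flags points outside the mean plus-or-minus three standard deviations band (the Three-Sigma rule) and backward-fills them; the threshold and function names are illustrative.

```python
import pandas as pd

def adjust_outliers_three_sigma(traffic: pd.Series) -> pd.Series:
    """Flag Three-Sigma outliers, blank them out, and backward-fill so the
    sequence model trains on the adjusted series."""
    mu, sigma = traffic.mean(), traffic.std()
    outliers = (traffic - mu).abs() > 3 * sigma
    adjusted = traffic.mask(outliers)  # outliers become NaN
    return adjusted.bfill()            # Backward Filling of the gaps
```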
Modeling Human Behavior Part I: Learning and Belief Approaches ; There is a clear desire to model and comprehend human behavior. Trends in research covering this topic show a clear assumption that many view human reasoning as the presupposed standard in artificial reasoning. As such, topics such as game theory, theory of mind, and machine learning all integrate concepts which are assumed components of human reasoning. These serve as techniques to attempt to both replicate and understand the behaviors of humans. In addition, next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams. To make this possible, autonomous agents will require the ability to embed practical models of human behavior, which allow them not only to replicate human models as a technique to learn, but also to understand the actions of users and anticipate their behavior, so as to truly operate in symbiosis with them. The main objective of this paper is to provide a succinct yet systematic review of the most important approaches in two areas dealing with quantitative models of human behaviors. Specifically, we focus on (i) techniques which learn a model or policy of behavior through exploration and feedback, such as Reinforcement Learning, and (ii) approaches that directly model mechanisms of human reasoning, such as beliefs and bias, without necessarily learning via trial-and-error.