Recurrent Neural Network Attention Mechanisms for Interpretable System Log Anomaly Detection ; Deep learning has recently demonstrated state-of-the-art performance on key tasks related to the maintenance of computer systems, such as intrusion detection, denial-of-service attack detection, hardware and software system failures, and malware detection. In these contexts, model interpretability is vital for administrators and analysts to trust and act on the automated analysis of machine learning models. Deep learning methods have been criticized as black-box oracles which allow limited insight into decision factors. In this work we seek to bridge the gap between the impressive performance of deep learning models and the need for interpretable model introspection. To this end we present recurrent neural network (RNN) language models augmented with attention for anomaly detection in system logs. Our methods are generally applicable to any computer system and logging source. By incorporating attention variants into our RNN language models we create opportunities for model introspection and analysis without sacrificing state-of-the-art performance. We demonstrate model performance and illustrate model interpretability on an intrusion detection task using the Los Alamos National Laboratory (LANL) cyber security dataset, reporting upward of 0.99 area under the receiver operating characteristic curve despite being trained only on a single day's worth of data.
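The abstract above describes an attention-augmented RNN language model whose per-token likelihoods drive anomaly scoring. Below is a minimal sketch of that idea, not the paper's exact architecture: an LSTM language model over tokenized log lines with causal dot-product attention over its own hidden states; a line's anomaly score is its mean next-token negative log-likelihood, and the attention weights give a per-token interpretability signal. All layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnLogLM(nn.Module):
    """LSTM language model over log tokens with causal self-attention for introspection."""
    def __init__(self, vocab_size: int, emb: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))          # (B, T, H)
        B, T, H = h.shape
        scores = torch.einsum("bth,bsh->bts", h, h) / H ** 0.5
        causal = torch.tril(torch.ones(T, T, dtype=torch.bool, device=h.device))
        scores = scores.masked_fill(~causal, float("-inf"))
        weights = torch.softmax(scores, dim=-1)       # attention over current and past steps
        context = torch.bmm(weights, h)               # (B, T, H)
        logits = self.out(torch.cat([h, context], dim=-1))
        return logits, weights                        # weights expose what each step attended to

def anomaly_score(model, tokens):
    """Mean next-token negative log-likelihood of one tokenized log line (higher = more anomalous)."""
    logits, weights = model(tokens[:, :-1])
    nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                          tokens[:, 1:].reshape(-1), reduction="mean")
    return nll.item(), weights
```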
Deep Choice Model Using Pointer Networks for Airline Itinerary Prediction ; Travel providers such as airlines and online travel agents are becoming more and more interested in understanding how passengers choose among alternative itineraries when searching for flights. This knowledge helps them better display and adapt their offer, taking into account market conditions and customer needs. Common applications include not only filtering and sorting alternatives, but also changing certain attributes in real time (e.g., the price). In this paper, we concentrate on the problem of modeling air passenger choices of flight itineraries. This problem has historically been tackled using classical discrete choice modelling techniques. Traditional statistical approaches, in particular the Multinomial Logit model (MNL), are widely used in industrial applications due to their simplicity and generally good performance. However, MNL models present several shortcomings and rely on assumptions that might not hold in real applications. To overcome these difficulties, we present a new choice model based on Pointer Networks. Given an input sequence, this type of deep neural architecture combines recurrent neural networks with the attention mechanism to learn the conditional probability of an output whose values correspond to positions in the input sequence. Therefore, given a sequence of different alternatives presented to a customer, the model can learn to point to the one most likely to be chosen by the customer. The proposed method was evaluated on a real dataset that combines online user search logs and airline flight bookings. Experimental results show that the proposed model outperforms the traditional MNL model on several metrics.
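As a concrete illustration of the pointer-network idea summarized above, the sketch below encodes the sequence of presented itineraries with a GRU and uses additive attention to "point" at the alternative most likely to be booked; the attention distribution itself is the choice probability. Feature and layer dimensions are invented for the example and do not reflect the paper's configuration.

```python
import torch
import torch.nn as nn

class PointerChoiceModel(nn.Module):
    """Single-step pointer network: softmax over input positions = choice probabilities."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.w_enc = nn.Linear(hidden, hidden, bias=False)
        self.w_dec = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, alternatives):                  # (batch, n_alternatives, n_features)
        enc, last = self.encoder(alternatives)        # enc: (B, N, H), last: (1, B, H)
        query = last.squeeze(0).unsqueeze(1)          # (B, 1, H) summary of the search session
        scores = self.v(torch.tanh(self.w_enc(enc) + self.w_dec(query))).squeeze(-1)
        return scores                                 # softmax over N gives P(choice = position i)

# Toy usage: 32 searches, 5 itineraries each, 10 features per itinerary.
model = PointerChoiceModel(n_features=10)
batch = torch.randn(32, 5, 10)
chosen = torch.randint(0, 5, (32,))                   # index of the booked itinerary
loss = nn.CrossEntropyLoss()(model(batch), chosen)
```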
Finitary codings for spatial mixing Markov random fields ; It has been shown by van den Berg and Steif that the subcritical and critical Ising model on $\mathbb{Z}^d$ is a finitary factor of an i.i.d. process (ffiid), whereas the supercritical model is not. In fact, they showed that the latter is a general phenomenon in that a phase transition presents an obstruction to being ffiid. The question remained whether this is the only such obstruction. We make progress on this, showing that certain spatial mixing conditions (notions of weak dependence on boundary conditions, not to be confused with other notions of mixing in ergodic theory) imply ffiid. Our main result is that weak spatial mixing implies ffiid with power-law tails for the coding radius, and that strong spatial mixing implies ffiid with exponential tails for the coding radius. The weak spatial mixing condition can be relaxed to a condition which is satisfied by some critical two-dimensional models. Using a result of the author, we deduce that strong spatial mixing also implies ffiid with stretched-exponential tails from a finite-valued i.i.d. process. We give several applications to models such as the Potts model, proper colorings, the hard-core model, the Widom-Rowlinson model and the beach model. For instance, for the ferromagnetic $q$-state Potts model on $\mathbb{Z}^d$ at inverse temperature $\beta$, we show that it is ffiid with exponential tails if $\beta$ is sufficiently small, it is ffiid if $\beta < \beta_c(q,d)$, it is not ffiid if $\beta > \beta_c(q,d)$ and, when $d=2$ and $\beta = \beta_c(q,d)$, it is ffiid if and only if $q \le 4$.
Phenomenological models of two-particle correlation distributions on transverse momentum in relativistic heavy-ion collisions ; Two-particle, pair-number correlation distributions on two-dimensional transverse momentum $(p_{t1}, p_{t2})$ constructed from the particle production in relativistic heavy-ion collisions allow access to dynamical processes in these systems beyond what can be studied with angular correlations alone. Only a few measurements of this type have been reported in the literature, and phenomenological models, which facilitate physical interpretation of the correlation structures, are nonexistent. Ongoing effort at the Relativistic Heavy-Ion Collider (RHIC) will provide a significant volume of these correlation measurements in the future. In anticipation of these new data, two phenomenological models are developed which describe two-dimensional (2D) correlation distributions on transverse momentum. One model is based on a collision event-by-event fluctuating blast wave. The other is based on event-by-event fluctuations in fragmenting color-flux tubes and in jets. Both models are shown to be capable of accurately describing the measured single-particle $p_t$ distributions for minimum-bias Au+Au collisions at $\sqrt{s_{\rm NN}} = 200$ GeV. Both models are then applied to preliminary, charged-particle correlation measurements on 2D transverse momentum. The capabilities of the two models for describing the overall structure of these correlations, the stability of the fitting results with respect to collision centrality, and the resulting trends of the dynamical fluctuations are evaluated. In general, both phenomenological models are capable of qualitatively describing the major correlation structures on transverse momentum and can be used to establish the required magnitudes and centrality trends of the fluctuations. Both models will be useful for interpreting the forthcoming correlation data from RHIC.
Diversity in Machine Learning ; Machine learning methods have achieved good performance and been widely applied in various real-world applications. They can learn the model adaptively and be better fit for the special requirements of different tasks. Generally, a good machine learning system is composed of plentiful training data, a good model training process, and an accurate inference. Many factors can affect the performance of the machine learning process, among which the diversity of the machine learning process is an important one. Diversity can help each procedure to guarantee a good overall machine learning result: diversity of the training data ensures that the training data can provide more discriminative information for the model; diversity of the learned model (diversity in the parameters of each model, or diversity among different base models) makes each parameter/model capture unique or complementary information; and diversity in inference can provide multiple choices, each of which corresponds to a specific plausible local optimal result. Even though diversity plays an important role in the machine learning process, there is no systematic analysis of diversification in machine learning systems. In this paper, we systematically summarize the methods for data diversification, model diversification, and inference diversification in the machine learning process. In addition, typical applications where diversity technology has improved machine learning performance are surveyed, including remote sensing imaging tasks, machine translation, camera relocalization, image segmentation, object detection, topic modeling, and others. Finally, we discuss some challenges of diversity technology in machine learning and point out some directions for future work.
Dynamic Sampling from Graphical Models ; In this paper, we study the problem of sampling from a graphical model when the model itself is changing dynamically with time. This problem derives its interest from a variety of inference, learning, and sampling settings in machine learning, computer vision, statistical physics, and theoretical computer science. While the problem of sampling from a static graphical model has received considerable attention, theoretical works for its dynamic variants have been largely lacking. The main contribution of this paper is an algorithm that can sample dynamically from a broad class of graphical models over discrete random variables. Our algorithm is parallel and Las Vegas it knows when to stop and it outputs samples from the exact distribution. We also provide sufficient conditions under which this algorithm runs in time proportional to the size of the update, on general graphical models as well as wellstudied specific spin systems. In particular we obtain, for the Ising model ferromagnetic or antiferromagnetic and for the hardcore model the first dynamic sampling algorithms that can handle both edge and vertex updates addition, deletion, change of functions, both efficient within regimes that are close to the respective uniqueness regimes, beyond which, even for the static and approximate sampling, no local algorithms were known or the problem itself is intractable. Our dynamic sampling algorithm relies on a local resampling algorithm and a new equilibrium property that is shown to be satisfied by our algorithm at each step, and enables us to prove its correctness. This equilibrium property is robust enough to guarantee the correctness of our algorithm, helps us improve bounds on fast convergence on specific models, and should be of independent interest.
Unsupervised Image Noise Modeling with Self-Consistent GAN ; Noise modeling lies at the heart of many image processing tasks. However, existing deep learning methods for noise modeling generally require clean and noisy image pairs for model training; these image pairs are difficult to obtain in many realistic scenarios. To ameliorate this problem, we propose a self-consistent GAN (SCGAN) that can directly extract noise maps from noisy images, thus enabling unsupervised noise modeling. In particular, the SCGAN introduces three novel self-consistent constraints that are complementary to one another, viz.: the noise model should produce a zero response over a clean input; the noise model should return the same output when fed with a specific pure noise input; and the noise model should also re-extract a pure noise map if the map is added to a clean image. These three constraints are simple yet effective. They jointly facilitate unsupervised learning of a noise model for various noise types. To demonstrate its wide applicability, we deploy the SCGAN on three image processing tasks, including blind image denoising, rain streak removal, and noisy image super-resolution. The results demonstrate the effectiveness and superiority of our method over the state-of-the-art methods on a variety of benchmark datasets, even though the noise types vary significantly and paired clean images are not available.
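For concreteness, the snippet below spells out the three self-consistency constraints as training losses for a hypothetical noise-extractor network; the adversarial (GAN) terms and the actual SCGAN architecture are omitted, so this is only a sketch of the constraint logic, with L1 penalties chosen for illustration.

```python
import torch
import torch.nn.functional as F

def self_consistency_losses(extractor, clean_img, noisy_img, pure_noise):
    """Three complementary constraints on a noise-extractor network (illustrative weights of 1)."""
    extracted = extractor(noisy_img)                          # noise map from a noisy image
    # (1) a clean input should produce a (near-)zero noise response
    l_clean = extractor(clean_img).abs().mean()
    # (2) a pure-noise input should be returned (approximately) unchanged
    l_noise = F.l1_loss(extractor(pure_noise), pure_noise)
    # (3) adding an extracted noise map to a clean image should let it be re-extracted
    l_cycle = F.l1_loss(extractor(clean_img + extracted.detach()), extracted.detach())
    return l_clean + l_noise + l_cycle
```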
A Review on Neural Network Models of Schizophrenia and Autism Spectrum Disorder ; This survey presents the most relevant neural network models of autism spectrum disorder and schizophrenia, from the first connectionist models to recent deep network architectures. We analyzed and compared the most representative symptoms with their neural model counterparts, detailing the alteration introduced in the network that generates each of the symptoms, and identifying their strengths and weaknesses. We additionally cross-compared Bayesian and free-energy approaches, as they are widely applied to modeling psychiatric disorders and share basic mechanisms with neural networks. Models of schizophrenia mainly focused on hallucinations and delusional thoughts, using neural dysconnections or inhibitory imbalance as the predominant alteration. Models of autism rather focused on perceptual difficulties, mainly excessive attention to environment details, implemented as excessive inhibitory connections or increased sensory precision. We found an excessively tight view of the psychopathologies around one specific and simplified effect, usually constrained to the technical idiosyncrasy of the network architecture used. Recent theories and evidence on sensorimotor integration and body perception, combined with modern neural network architectures, could offer a broader and novel spectrum of approaches to these psychopathologies. This review emphasizes the power of artificial neural networks for modeling some symptoms of neurological disorders but also calls for further developing these techniques in the field of computational psychiatry.
fbSAT: Automatic Inference of Minimal Finite-State Models of Function Blocks Using SAT Solver ; Finite-state models are widely used in software engineering, especially in control systems development. Commonly, in control applications such models are developed manually; hence, keeping them up to date requires extra effort. To simplify the maintenance process, an automatic approach may be used, allowing models to be inferred from behavior examples and temporal properties. As an example of a specific control systems development application, we focus on inferring finite-state models of function blocks (FBs) defined by the IEC 61499 international standard for distributed automation systems. In this paper we propose a method for FB model inference from behavior examples based on reduction to the Boolean satisfiability problem (SAT). Additionally, we take into account linear temporal properties using counterexample-guided synthesis. We also present the developed tool fbSAT, which implements the proposed method, and evaluate it in two case studies: inference of a finite-state model of a Pick-and-Place manipulator, and reconstruction of randomly generated automata. In contrast to existing approaches, the suggested method is more efficient and produces finite-state models that are minimal both in terms of the number of states and guard condition complexity.
Adaptive Susceptibility and Heterogeneity in Contagion Models on Networks ; Contagious processes, such as the spread of infectious diseases, social behaviors, or computer viruses, affect biological, social, and technological systems. Epidemic models for large populations and finite populations on networks have been used to understand and control both transient and steady-state behaviors. Typically it is assumed that, after recovery from an infection, every agent will either return to its original susceptible state or acquire full immunity to reinfection. We study the network SIRI (Susceptible-Infected-Recovered-Infected) model, an epidemic model for the spread of contagious processes on a network of heterogeneous agents that can adapt their susceptibility to reinfection. The model generalizes existing models to accommodate realistic conditions in which agents acquire partial or compromised immunity after first exposure to an infection. We prove necessary and sufficient conditions on model parameters and network structure that distinguish four dynamic regimes: infection-free, epidemic, endemic, and bistable. For the bistable regime, which is not accounted for in traditional models, we show how there can be a rapid resurgent epidemic after what looks like convergence to an infection-free population. We use the model and its predictive capability to show how control strategies can be designed to mitigate problematic contagious behaviors.
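A toy discrete-time simulation of a SIRI-type process on a network is sketched below; the synchronous update rule and parameter names (beta for first infection, beta_hat for reinfection of previously infected agents, delta for recovery) are a simplification of the dynamics described above rather than the paper's exact continuous-time formulation.

```python
import numpy as np

def simulate_siri(adj, beta=0.3, delta=0.2, beta_hat=0.45, steps=200, seed=0):
    """Synchronous stochastic SIRI-style dynamics on an adjacency matrix; returns infected fraction over time."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    state = np.zeros(n, dtype=int)           # 0 = S, 1 = I, 2 = R (previously infected, reduced/compromised immunity)
    state[rng.integers(n)] = 1                # single initial infection
    history = []
    for _ in range(steps):
        infected_neighbors = adj @ (state == 1)
        p_inf_S = 1 - (1 - beta) ** infected_neighbors        # infection of susceptibles
        p_inf_R = 1 - (1 - beta_hat) ** infected_neighbors    # reinfection of previously infected agents
        u = rng.random(n)
        new_state = state.copy()
        new_state[(state == 0) & (u < p_inf_S)] = 1
        new_state[(state == 2) & (u < p_inf_R)] = 1
        new_state[(state == 1) & (rng.random(n) < delta)] = 2  # recovery to the "R" compartment
        state = new_state
        history.append((state == 1).mean())
    return np.array(history)

# Example: sparse random graph with 100 agents.
rng = np.random.default_rng(1)
A = np.triu((rng.random((100, 100)) < 0.05).astype(int), 1)
curve = simulate_siri(A + A.T)
```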
Sustainable Business Models: A Review ; The concept of the sustainable business model describes the rationale of how an organization creates, delivers, and captures value, in economic, social, cultural, or other contexts, in a sustainable way. The process of sustainable business model construction forms an innovative part of a business strategy. Different industries and businesses have utilized the sustainable business model concept to satisfy their economic, environmental, and social goals simultaneously. However, the success, popularity, and progress of sustainable business models in different application domains are not clear. To explore this issue, this research provides a comprehensive review of the sustainable business model literature in various application areas. Notable sustainable business models are identified and further classified into fourteen unique categories, and in every category the progress (either failure or success) has been reviewed and the research gaps are discussed. The taxonomy of the applications includes innovation, management and marketing, entrepreneurship, energy, fashion, healthcare, agri-food, supply chain management, circular economy, developing countries, engineering, construction and real estate, mobility and transportation, and hospitality. The key contribution of this study is that it provides an insight into the state of the art of sustainable business models in the various application areas and future research directions. This paper concludes that the popularity and success rate of sustainable business models in all application domains have increased along with the increasing use of advanced technologies.
BARK Open Behavior Benchmarking in MultiAgent Environments ; Predicting and planning interactive behaviors in complex traffic situations presents a challenging task. Especially in scenarios involving multiple traffic participants that interact densely, autonomous vehicles still struggle to interpret situations and to eventually achieve their own mission goal. As driving tests are costly and challenging scenarios are hard to find and reproduce, simulation is widely used to develop, test, and benchmark behavior models. However, most simulations rely on datasets and simplistic behavior models for traffic participants and do not cover the full variety of realworld, interactive human behaviors. In this work, we introduce BARK, an opensource behavior benchmarking environment designed to mitigate the shortcomings stated above. In BARK, behavior models are reused for planning, prediction, and simulation. A range of models is currently available, such as MonteCarlo Tree Search and Reinforcement Learningbased behavior models. We use a public dataset and samplingbased scenario generation to show the interexchangeability of behavior models in BARK. We evaluate how well the models used cope with interactions and how robust they are towards exchanging behavior models. Our evaluation shows that BARK provides a suitable framework for a systematic development of behavior models.
A Modular SmallSignal Analysis Framework for Inverter Penetrated Power Grids Measurement, Assembling, Aggregation, and Stability Assessment ; Unprecedented dynamic phenomena may appear in power grids due to higher and higher penetration of inverterbased resources IBR, e.g., wind and solar photovoltaic PV. A major challenge in dynamic modeling and analysis is that unlike synchronous generators, whose analytical models are well studied and known to system planners, inverter models are proprietary information with black box models provided to utilities. Thus, measurement based characterization of IBR is a popular approach to find frequencydomain response of an IBR. The resulting admittances are essentially smallsignal currentvoltage relationship in frequency domain. Integrating admittances for grid dynamic modeling and analysis requires a new framework, namely modular smallsignal analysis framework. In this visionary paper, we examine the current stateoftheart of dynamic modeling and analysis of power grids with IBR, including inverter admittance characterization, the procedure of component assembling and aggregation, and stability assessment. We push forward a computing efficient modular modeling and analysis framework via four visions i efficient and accurate admittance model characterization via model building and timedomain responses, ii accurate assembling of components, iii efficient aggregation, and iv stability assessment relying on network admittance matrices. Challenges of admittancebased modular analysis are demonstrated using examples and techniques to tackle those challenges are pointed out in this visionary paper.
Identification and Estimation of Weakly Separable Models Without Monotonicity ; We study the identification and estimation of treatment effect parameters in weakly separable models. In their seminal work, Vytlacil and Yildiz 2007 showed how to identify and estimate the average treatment effect of a dummy endogenous variable when the outcome is weakly separable in a single index. Their identification result builds on a monotonicity condition with respect to this single index. In comparison, we consider similar weakly separable models with multiple indices, and relax the monotonicity condition for identification. Unlike Vytlacil and Yildiz 2007, we exploit the full information in the distribution of the outcome variable, instead of just its mean. Indeed, when the outcome distribution function is more informative than the mean, our method is applicable to more general settings than theirs; in particular we do not rely on their monotonicity assumption and at the same time we also allow for multiple indices. To illustrate the advantage of our approach, we provide examples of models where our approach can identify parameters of interest whereas existing methods would fail. These examples include models with multiple unobserved disturbance terms such as the Roy model and multinomial choice models with dummy endogenous variables, as well as potential outcome models with endogenous random coefficients. Our method is easy to implement and can be applied to a wide class of models. We establish standard asymptotic properties such as consistency and asymptotic normality.
Robustness to Incorrect Models and DataDriven Learning in AverageCost Optimal Stochastic Control ; We study continuity and robustness properties of infinitehorizon average expected cost problems with respect to controlled transition kernels, and applications of these results to the problem of robustness of control policies designed for approximate models applied to actual systems. We show that sufficient conditions presented in the literature for discountedcost problems are in general not sufficient to ensure robustness for averagecost problems. However, we show that the average optimal cost is continuous in the convergences of controlled transition kernel models where convergence of models entails i continuous weak convergence in state and actions, and ii continuous setwise convergence in the actions for every fixed state variable, in addition to either uniform ergodicity or some regularity conditions. We establish that the mismatch error due to the application of a control policy designed for an incorrectly estimated model to the true model decreases to zero as the incorrect model approaches the true model under the stated convergence criteria. Our findings significantly relax related studies in the literature which have primarily considered the more restrictive total variation convergence criteria. Applications to robustness to models estimated through empirical data where almost sure weak convergence criterion typically holds, but stronger criteria do not are studied and conditions for asymptotic robustness to datadriven learning are established.
Semantic Complexity in End-to-End Spoken Language Understanding ; End-to-end spoken language understanding (SLU) models are a class of model architectures that predict semantics directly from speech. Because of their input and output types, we refer to them as speech-to-interpretation (STI) models. Previous works have successfully applied STI models to targeted use cases, such as recognizing home automation commands; however, no study has yet addressed how these models generalize to broader use cases. In this work, we analyze the relationship between the performance of STI models and the difficulty of the use case to which they are applied. We introduce empirical measures of dataset semantic complexity to quantify the difficulty of the SLU tasks. We show that near-perfect performance metrics for STI models reported in the literature were obtained with datasets that have low semantic complexity values. We perform experiments where we vary the semantic complexity of a large, proprietary dataset and show that STI model performance correlates with our semantic complexity measures, such that performance increases as complexity values decrease. Our results show that it is important to contextualize an STI model's performance with the complexity values of its training dataset to reveal the scope of its applicability.
Navigating Human Language Models with Synthetic Agents ; Modern natural language models such as GPT-2 and GPT-3 contain tremendous amounts of information about human belief in a consistently testable form. If these models could be shown to accurately reflect the underlying beliefs of the human beings that produced the data used to train them, then such models become a powerful sociological tool in ways that are distinct from traditional methods, such as interviews and surveys. In this study, we train a version of GPT-2 on a corpus of historical chess games, and then launch clusters of synthetic agents into the model, using text strings to create context and orientation. We compare the trajectories contained in the text generated by the agents/model to the known ground truth of the chess board, move legality, and historical patterns of play. We find that the percentages of moves by piece produced by the model are substantially similar to human patterns. We further find that the model creates an accurate latent representation of the chessboard, and that it is possible to plot trajectories of legal moves across the board using this knowledge.
Modelling uncertainty in coupled electricity and gas systems: is it worth the effort? ; The interdependence of electricity and natural gas markets is becoming a major topic in energy research. Integrated energy models are used to assist decision-making for businesses and policymakers addressing the challenges of energy transition and climate change. The analysis of complex energy systems requires large-scale models, which are based on extensive databases, intertemporal dynamics and a multitude of decision variables. Integrating such energy system models results in increased system complexity. This complexity poses a challenge for energy modellers seeking to address the multiple uncertainties that affect both markets. Stochastic optimisation approaches enable an adequate consideration of uncertainties in investment and operation planning; however, stochastic modelling of integrated large-scale energy systems further scales the level of complexity. In this paper, we combine integrated and stochastic optimisation problems and parametrise our model for the European electricity and gas markets. We analyse and compare the impact of uncertain input parameters, such as gas and electricity demand, renewable energy capacities and fuel and CO2 prices, on the quality of the solution obtained in the integrated optimisation problem. Our results quantify the value of encoding uncertainty as a part of a model. While the methodological contribution should be of interest for energy modellers, our findings are relevant for industry experts and stakeholders with an empirical interest in the European energy system.
Sequence-to-Sequence Predictive Model From Prosody To Communicative Gestures ; Communicative gestures and speech acoustics are tightly linked. Our objective is to predict the timing of gestures according to the acoustics; that is, we want to predict when a certain gesture occurs. We develop a model based on a recurrent neural network with an attention mechanism. The model is trained on a corpus of natural dyadic interaction where the speech acoustics and the gesture phases and types have been annotated. The input of the model is a sequence of speech acoustic features and the output is a sequence of gesture classes. The classes we use for the model output are based on a combination of gesture phases and gesture types. We use a sequence comparison technique to evaluate the model performance. We find that the model can predict certain gesture classes better than others. We also perform ablation studies, which reveal that the fundamental frequency is a relevant feature for the gesture prediction task. In another sub-experiment, we find that including eyebrow movements, treated as beat gestures, improves the performance. Besides, we also find that a model trained on the data of one given speaker also works for the other speaker of the same conversation. We also perform a subjective experiment to measure how respondents judge the naturalness, the time consistency, and the semantic consistency of the gesture timing generated by our model for a virtual agent. Our respondents rate the output of our model favorably.
Negative cosmological constant in the dark sector ; We consider the possibility that the dark sector of our Universe contains a negative cosmological constant, dubbed $\lambda$. For such models to be viable, the dark sector should contain an additional component responsible for the late-time accelerated expansion rate, X. We explore the departure of the expansion history of these models from the concordance $\Lambda$ Cold Dark Matter model. For a large class of our models the accelerated expansion is transient, with a nontrivial dependence on the model parameters. All models with $w_X > -1$ will eventually contract, and we derive an analytical expression for the scale factor $a(t)$ in the neighborhood of its maximal value. We also find the scale factor for models ending in a Big Rip in the regime where dust-like matter density is negligible compared to $\lambda$. We further address the viability of such models, in particular when a high $H_0$ is taken into account. While we find no decisive evidence for a nonzero $\lambda$, the best models are obtained with a phantom behavior at redshifts $z \gtrsim 1$, with a higher evidence for nonzero $\lambda$. An observed value for $h$ substantially higher than 0.70 would be a decisive test of their viability.
Uncertainty quantification for Markov Random Fields ; We present an informationbased uncertainty quantification method for general Markov Random Fields. Markov Random Fields MRF are structured, probabilistic graphical models over undirected graphs, and provide a fundamental unifying modeling tool for statistical mechanics, probabilistic machine learning, and artificial intelligence. Typically MRFs are complex and highdimensional with nodes and edges connections built in a modular fashion from simpler, lowdimensional probabilistic models and their local connections; in turn, this modularity allows to incorporate available data to MRFs and efficiently simulate them by leveraging their graphtheoretic structure. Learning graphical models from data andor constructing them from physical modeling and constraints necessarily involves uncertainties inherited from data, modeling choices, or numerical approximations. These uncertainties in the MRF can be manifested either in the graph structure or the probability distribution functions, and necessarily will propagate in predictions for quantities of interest. Here we quantify such uncertainties using tight, information based bounds on the predictions of quantities of interest; these bounds take advantage of the graphical structure of MRFs and are capable of handling the inherent highdimensionality of such graphical models. We demonstrate our methods in MRFs for medical diagnostics and statistical mechanics models. In the latter, we develop uncertainty quantification bounds for finite size effects and phase diagrams, which constitute two of the typical predictions goals of statistical mechanics modeling.
Training Deep Neural Networks with Constrained Learning Parameters ; Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and utilize large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks will run on edge computing systems, which will form an indispensable part of the entire computation fabric. Subsequently, training deep learning models for such systems will have to be tailored and adapted to generate models that have the following desirable characteristics: low error, low memory, and low power. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a set of finite discrete values, running on neuromorphic computing systems, would be instrumental for intelligent edge computing systems having these desirable characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate-gradient-descent-based approach for training deep learning models with finite discrete learning parameters. Next, we elaborate on the theoretical underpinnings and evaluate the computational complexity of CoNNTrA. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets and compare their performance to the same models trained using backpropagation. We use the following performance metrics for the comparison: (i) training error, (ii) validation error, (iii) memory usage, and (iv) training time. Our results indicate that CoNNTrA models use 32x less memory and have errors on par with the backpropagation models.
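The following toy sketch illustrates the constrained-parameter idea: a coordinate-wise sweep that keeps every weight in {-1, 0, +1}, setting each coordinate in turn to whichever allowed value yields the lowest batch loss. This is only an illustration of training with finite discrete parameters; the actual CoNNTrA coordinate-gradient-descent update may differ, and all names here are invented for the example.

```python
import numpy as np

def ternary_coordinate_descent(loss_fn, params, sweeps=5, values=(-1, 0, 1)):
    """Sweep over coordinates, keeping each parameter in a finite discrete set."""
    params = params.copy()
    for _ in range(sweeps):
        for i in range(params.size):
            best_v, best_loss = params.flat[i], np.inf
            for v in values:
                params.flat[i] = v
                l = loss_fn(params)
                if l < best_loss:
                    best_v, best_loss = v, l
            params.flat[i] = best_v
    return params

# Example: recover ternary weights of a tiny linear model from random data.
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(200, 8)), rng.choice([-1, 0, 1], size=8)
y = X @ w_true
w = ternary_coordinate_descent(lambda w: np.mean((X @ w - y) ** 2), np.zeros(8))
```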
Adaptive Reinforcement Learning Model for Simulation of Urban Mobility during Crises ; The objective of this study is to propose and test an adaptive reinforcement learning model that can learn the patterns of human mobility in a normal context and simulate mobility during perturbations caused by crises, such as flooding, wildfire, and hurricanes. Understanding and predicting human mobility patterns, such as destination and trajectory selection, can inform emerging congestion and road closures raised by disruptions in emergencies. Data related to human movement trajectories are scarce, especially in the context of emergencies, which places a limitation on applications of existing urban mobility models learned from empirical data. Models with the capability of learning mobility patterns from data generated in normal situations, and which can adapt to emergency situations, are needed to inform emergency response and urban resilience assessments. To address this gap, this study creates and tests an adaptive reinforcement learning model that can predict the destinations of movements, estimate the trajectory for each origin-destination pair, and examine the impact of perturbations on humans' decisions related to destinations and movement trajectories. The application of the proposed model is shown in the context of Houston and the flooding scenario caused by Hurricane Harvey in August 2017. The results show that the model can achieve more than 76% precision and recall. The results also show that the model could predict traffic patterns and congestion resulting from urban flooding. The outcomes of the analysis demonstrate the capabilities of the model for analyzing urban mobility during crises, which can inform the public and decision-makers about response strategies and resilience planning to reduce the impacts of crises on urban mobility.
Geometry-aware neural solver for fast Bayesian calibration of brain tumor models ; Modeling of brain tumor dynamics has the potential to advance therapeutic planning. Current modeling approaches resort to numerical solvers that simulate the tumor progression according to a given differential equation. Using highly efficient numerical solvers, a single forward simulation takes up to a few minutes of compute. At the same time, clinical applications of tumor modeling often imply solving an inverse problem, requiring up to tens of thousands of forward model evaluations when used for a Bayesian model personalization via sampling. This results in a total inference time prohibitively expensive for clinical translation. While recent data-driven approaches have become capable of emulating physics simulations, they tend to fail in generalizing over the variability of the boundary conditions imposed by the patient-specific anatomy. In this paper, we propose a learnable surrogate for simulating tumor growth which maps the biophysical model parameters directly to simulation outputs, i.e. the local tumor cell densities, whilst respecting patient geometry. We test the neural solver on Bayesian tumor model personalization for a cohort of glioma patients. Bayesian inference using the proposed surrogate yields estimates analogous to those obtained by solving the forward model with a regular numerical solver. The near-real-time computation cost renders the proposed method suitable for clinical settings. The code is available at https://github.com/IvanEz/tumor-surrogate.
Dialogueadaptive Language Model Pretraining From Quality Estimation ; Pretrained language models PrLMs have achieved great success on a wide range of natural language processing tasks by virtue of the universal language representation ability obtained by selfsupervised learning on a large corpus. These models are pretrained on standard plain texts with general language model LM training objectives, which would be insufficient to model dialogueexclusive attributes like specificity and informativeness reflected in these tasks that are not explicitly captured by the pretrained universal language representations. In this work, we propose dialogueadaptive pretraining objectives DAPO derived from quality estimation to simulate dialoguespecific features, namely coherence, specificity, and informativeness. As the foundation for model pretraining, we synthesize a new dialogue corpus and build our training set with two unsupervised methods 1 coherenceoriented context corruption, including utterance ordering, insertion, and replacement, to help the model capture the coherence inside the dialogue contexts; and 2 specificityoriented automatic rescoring, which encourages the model to measure the quality of the synthesized data for dialogueadaptive pretraining by considering specificity and informativeness. Experimental results on widely used opendomain response selection and quality estimation benchmarks show that DAPO significantly improves the baseline models and achieves stateoftheart performance on the MuTual leaderboard, verifying the effectiveness of estimating quality evaluation factors into pretraining.
Neural Model-based Optimization with Right-Censored Observations ; In many fields of study, we only observe lower bounds on the true response value of some experiments. When fitting a regression model to predict the distribution of the outcomes, we cannot simply drop these right-censored observations, but need to properly model them. In this work, we focus on the concept of censored data in the light of model-based optimization, where prematurely terminating evaluations (and thus generating right-censored data) is a key factor for efficiency, e.g., when searching for an algorithm configuration that minimizes the runtime of the algorithm at hand. Neural networks (NNs) have been demonstrated to work well at the core of model-based optimization procedures, and here we extend them to handle these censored observations. We propose (i) a loss function based on the Tobit model to incorporate censored samples into training and (ii) the use of an ensemble of networks to model the posterior distribution. To nevertheless be efficient in terms of optimization overhead, we propose to use Thompson sampling such that we only need to train a single NN in each iteration. Our experiments show that our trained regression models achieve a better predictive quality than several baselines and that our approach achieves new state-of-the-art performance for model-based optimization on two optimization problems: minimizing the solution time of a SAT solver and the time-to-accuracy of neural networks.
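As a sketch of the first ingredient, a Tobit-style loss for right-censored targets can be written as follows: uncensored points contribute a Gaussian log-density and censored points contribute log P(Y > observed lower bound). The parameterization (a network predicting a mean and a log standard deviation per input) is an assumption for illustration, not necessarily the paper's exact setup.

```python
import torch

def tobit_nll(mu, log_sigma, y, censored):
    """Negative log-likelihood under a Tobit-style model.
    censored[i] = True means y[i] is only a lower bound on the true outcome."""
    sigma = log_sigma.exp()
    dist = torch.distributions.Normal(mu, sigma)
    ll_obs = dist.log_prob(y)                          # exact (uncensored) observations
    ll_cens = torch.log1p(-dist.cdf(y) + 1e-12)        # log P(Y > y) for censored ones
    return -torch.where(censored, ll_cens, ll_obs).mean()
```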
Decentralized Federated Learning Preserves Model and Data Privacy ; The increasing complexity of IT systems requires solutions that support operations in case of failure. Therefore, Artificial Intelligence for System Operations (AIOps) is a field of research that is receiving increasing attention, both in academia and industry. One of the major issues of this area is the lack of access to adequately labeled data, which is mainly due to legal protection regulations or industrial confidentiality. Methods to mitigate this stem from the area of federated learning, whereby no direct access to training data is required. Original approaches utilize a central instance to perform the model synchronization by periodic aggregation of all model parameters. However, there are many scenarios where trained models cannot be published since they are either confidential knowledge or training data could be reconstructed from them. Furthermore, the central instance needs to be trusted and is a single point of failure. As a solution, we propose a fully decentralized approach, which allows knowledge to be shared between trained models. Neither original training data nor model parameters need to be transmitted. The concept relies on teacher and student roles that are assigned to the models, whereby students are trained on the output of their teachers via synthetically generated input data. We conduct a case study on log anomaly detection. The results show that an untrained student model, trained on the teacher's output, reaches comparable F1 scores to the teacher. In addition, we demonstrate that our method allows the synchronization of several models trained on different, distinct training data subsets.
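A minimal sketch of the teacher/student transfer described above: only the teacher's predictions on synthetically generated inputs are used, so neither raw training data nor model parameters leave a node. The synthetic-input generator (standard Gaussian noise), the KL loss, and all hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

def distill(teacher: nn.Module, student: nn.Module, in_dim: int,
            rounds: int = 1000, batch: int = 64, lr: float = 1e-3):
    """Train a student on a teacher's soft predictions over synthetic inputs only."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(rounds):
        x = torch.randn(batch, in_dim)                    # synthetic inputs, no real data shared
        with torch.no_grad():
            target = torch.softmax(teacher(x), dim=-1)    # teacher's soft predictions
        loss = nn.functional.kl_div(torch.log_softmax(student(x), dim=-1),
                                    target, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```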
Ownership Verification of DNN Architectures via Hardware Cache Side Channels ; Deep Neural Networks DNN are gaining higher commercial values in computer vision applications, e.g., image classification, video analytics, etc. This calls for urgent demands of the intellectual property IP protection of DNN models. In this paper, we present a novel watermarking scheme to achieve the ownership verification of DNN architectures. Existing works all embedded watermarks into the model parameters while treating the architecture as public property. These solutions were proven to be vulnerable by an adversary to detect or remove the watermarks. In contrast, we claim the model architectures as an important IP for model owners, and propose to implant watermarks into the architectures. We design new algorithms based on Neural Architecture Search NAS to generate watermarked architectures, which are unique enough to represent the ownership, while maintaining high model usability. Such watermarks can be extracted via sidechannelbased model extraction techniques with high fidelity. We conduct comprehensive experiments on watermarked CNN models for image classification tasks and the experimental results show our scheme has negligible impact on the model performance, and exhibits strong robustness against various model transformations and adaptive attacks.
Quantifying and Mitigating Privacy Risks of Contrastive Learning ; Data is the key factor to drive the development of machine learning ML during the past decade. However, highquality data, in particular labeled data, is often hard and expensive to collect. To leverage largescale unlabeled data, selfsupervised learning, represented by contrastive learning, is introduced. The objective of contrastive learning is to map different views derived from a training sample e.g., through data augmentation closer in their representation space, while different views derived from different samples more distant. In this way, a contrastive model learns to generate informative representations for data samples, which are then used to perform downstream ML tasks. Recent research has shown that machine learning models are vulnerable to various privacy attacks. However, most of the current efforts concentrate on models trained with supervised learning. Meanwhile, data samples' informative representations learned with contrastive learning may cause severe privacy risks as well. In this paper, we perform the first privacy analysis of contrastive learning through the lens of membership inference and attribute inference. Our experimental results show that contrastive models trained on image datasets are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models. The former is due to the fact that contrastive models are less prone to overfitting, while the latter is caused by contrastive models' capability of representing data samples expressively. To remedy this situation, we propose the first privacypreserving contrastive learning mechanism, Talos, relying on adversarial training. Empirical results show that Talos can successfully mitigate attribute inference risks for contrastive models while maintaining their membership privacy and model utility.
Adversarial Poisoning Attacks and Defense for General MultiClass Models Based On Synthetic Reduced Nearest Neighbors ; Stateoftheart machine learning models are vulnerable to data poisoning attacks whose purpose is to undermine the integrity of the model. However, the current literature on data poisoning attacks is mainly focused on ad hoc techniques that are only applicable to specific machine learning models. Additionally, the existing data poisoning attacks in the literature are limited to either binary classifiers or to gradientbased algorithms. To address these limitations, this paper first proposes a novel modelfree labelflipping attack based on the multimodality of the data, in which the adversary targets the clusters of classes while constrained by a labelflipping budget. The complexity of our proposed attack algorithm is linear in time over the size of the dataset. Also, the proposed attack can increase the error up to two times for the same attack budget. Second, a novel defense technique based on the Synthetic Reduced Nearest Neighbor SRNN model is proposed. The defense technique can detect and exclude flipped samples on the fly during the training procedure. Through extensive experimental analysis, we demonstrate that i the proposed attack technique can deteriorate the accuracy of several models drastically, and ii under the proposed attack, the proposed defense technique significantly outperforms other conventional machine learning models in recovering the accuracy of the targeted model.
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them ; Open-domain Question Answering models which directly leverage question-answer (QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show promise in terms of speed and memory compared to conventional models which retrieve and read from text corpora. QA-pair retrievers also offer interpretable answers, a high degree of control, and are trivial to update at test time with new knowledge. However, these models lack the accuracy of retrieve-and-read systems, as substantially less knowledge is covered by the available QA pairs relative to text corpora like Wikipedia. To facilitate improved QA-pair models, we introduce Probably Asked Questions (PAQ), a very large resource of 65M automatically generated QA pairs. We introduce a new QA-pair retriever, RePAQ, to complement PAQ. We find that PAQ preempts and caches test questions, enabling RePAQ to match the accuracy of recent retrieve-and-read models, whilst being significantly faster. Using PAQ, we train CBQA models which outperform comparable baselines by 5%, but trail RePAQ by over 15%, indicating the effectiveness of explicit retrieval. RePAQ can be configured for size (under 500MB) or speed (over 1K questions per second) whilst retaining high accuracy. Lastly, we demonstrate RePAQ's strength at selective QA, abstaining from answering when it is likely to be incorrect. This enables RePAQ to back off to a more expensive state-of-the-art model, leading to a combined system which is both more accurate and 2x faster than the state-of-the-art model alone.
Cross-Modal Transformer-Based Neural Correction Models for Automatic Speech Recognition ; We propose cross-modal transformer-based neural correction models that refine the output of an automatic speech recognition (ASR) system so as to exclude ASR errors. Generally, neural correction models are composed of encoder-decoder networks, which can directly model sequence-to-sequence mapping problems. The most successful method is to use both the input speech and its ASR output text as the input contexts for the encoder-decoder networks. However, the conventional method cannot take into account the relationships between these two different modal inputs because the input contexts are encoded separately for each modality. To effectively leverage the correlated information between the two different modal inputs, our proposed models encode the two different contexts jointly on the basis of cross-modal self-attention using a transformer. We expect that cross-modal self-attention can effectively capture the relationships between the two different modalities for refining ASR hypotheses. We also introduce a shallow fusion technique to efficiently integrate the first-pass ASR model and our proposed neural correction model. Experiments on Japanese natural language ASR tasks demonstrated that our proposed models achieve better ASR performance than conventional neural correction models.
Benchmarking multiwaveletbased dynamic and static nonuniform grid solvers for flood inundation modelling ; This paper explores static nonuniform grid solvers that adapt three rasterbased flood models on an optimised nonuniform grid the secondorder discontinuous Galerkin DG2 model representing the modelled data as piecewiseplanar fields, the firstorder finite volume FV1 model using piecewiseconstant fields, and the local inertial ACC model only evolving piecewiseconstant water depth fields. The optimised grid is generated by applying the multiresolution analysis MRA of multiwavelets MWs to piecewiseplanar representation of rasterformatted topography data, for more sensible grid coarsening based on one userspecified parameter. Two adaptive solvers are also explored that apply the MRA of MWs and of Haar wavelets HWs to, respectively, scale and adapt the DG2 MWDG2 and FV1 HWFV1 modelled data dynamically in time. The performance of the nonuniform grid and adaptive solvers is assessed in terms of flood depth and extent, velocities, and CPU runtimes, with reference to the rasterbased DG2 model predictions on their finest resolution grid. The assessments considered three largescale flooding scenarios, involving rapid and slowtogradual flows. MWDG2 is found to be the most favourable choice when modelling rapid flows, where it excels in capturing small velocity variations. For slowtogradual flows, the adaptive solvers deliver less accurate outcomes, and their efficiency can be hampered by overhead costs of the dynamic MRA. Instead, nonuniform DG2 is recommended to capture urban flow interactions more accurately. Nonuniform ACC is 5 times faster to run than nonuniform DG2 but delivers close flooding depth and extent predictions, thus is more attractive for fluvialpluvial flood simulation over large areas.
Learning-To-Ensemble by Contextual Rank Aggregation in E-Commerce ; Ensemble models in e-commerce combine predictions from multiple sub-models for ranking and revenue improvement. Industrial ensemble models are typically deep neural networks, following the supervised learning paradigm to infer conversion rate given inputs from sub-models. However, this process has the following two problems. Firstly, the pointwise scoring approach disregards the relationships between items and leads to homogeneous displayed results, while diversified display benefits user experience and revenue. Secondly, the learning paradigm focuses on the ranking metrics and does not directly optimize the revenue. In our work, we propose a new Learning-To-Ensemble (LTE) framework RAEGO, which replaces the ensemble model with a contextual Rank Aggregator (RA) and explores the best weights of sub-models by the Evaluator-Generator Optimization (EGO). To achieve the best online performance, we propose a new rank aggregation algorithm, TournamentGreedy, as a refinement of classic rank aggregators, which also produces the best average weighted Kendall Tau Distance (KTD) amongst all the considered algorithms with quadratic time complexity. Under the assumption that the best output list should be Pareto Optimal on the KTD metric for sub-models, we show that our RA algorithm has higher efficiency and coverage in exploring the optimal weights. Combined with the idea of Bayesian Optimization and gradient descent, we solve the online contextual Black-Box Optimization task that finds the optimal weights for sub-models given a chosen RA model. RAEGO has been deployed in our online system and has improved the revenue significantly.
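For reference, the (unweighted) Kendall tau distance underlying the KTD metric mentioned above simply counts item pairs that two rankings order differently; a plain O(n^2) version is sketched below. The paper's weighted KTD variant and the TournamentGreedy aggregator are not reproduced here.

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Number of item pairs ordered differently by the two rankings of the same items."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    discordant = 0
    for x, y in combinations(rank_a, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
            discordant += 1
    return discordant

print(kendall_tau_distance(["a", "b", "c", "d"], ["a", "c", "d", "b"]))  # -> 2
```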
GLIME: A new graphical methodology for interpretable model-agnostic explanations ; Explainable artificial intelligence (XAI) is an emerging new domain in which a set of processes and tools allow humans to better comprehend the decisions generated by black-box models. However, most of the available XAI tools are often limited to simple explanations, mainly quantifying the impact of individual features on the models' output. Therefore, human users are not able to understand how the features are related to each other to make predictions, whereas the inner workings of the trained models remain hidden. This paper contributes to the development of a novel graphical explainability tool that not only indicates the significant features of the model but also reveals the conditional relationships between features and the inference, capturing both the direct and indirect impact of features on the models' decision. The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations either at the global (for the entire dataset) or the local scale (for specific data points). It relies on a combination of local interpretable model-agnostic explanations (LIME) with the graphical least absolute shrinkage and selection operator (GLASSO), producing undirected Gaussian graphical models. Regularization is adopted to shrink small partial correlation coefficients to zero, providing sparser and more interpretable graphical explanations. Two well-known classification datasets (BIOPSY and OAI) were selected to confirm the superiority of gLIME over LIME in terms of both robustness and consistency over multiple permutations. Specifically, gLIME accomplished increased stability over the two datasets with respect to feature importance (76-96% compared to 52-77% using LIME). gLIME demonstrates a unique potential to extend the functionality of the current state-of-the-art in XAI by providing informative, graphically presented explanations that could unlock black boxes.
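A rough sketch of the recipe described above: perturb the neighborhood of an instance, then fit a sparse Gaussian graphical model over the features plus the black-box output with the graphical lasso, so that nonzero partial correlations define the explanation graph. The perturbation scheme, regularization strength, and function names are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def glime_local_graph(predict_fn, x, n_samples=500, scale=0.1, alpha=0.05, seed=0):
    """Estimate a sparse partial-correlation graph over (features + model output) near instance x."""
    rng = np.random.default_rng(seed)
    Z = x + scale * rng.normal(size=(n_samples, x.size))    # local perturbations around x
    y = predict_fn(Z).reshape(-1, 1)                        # black-box model output
    data = np.hstack([Z, y])
    gl = GraphicalLasso(alpha=alpha).fit(data)
    precision = gl.precision_
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)               # off-diagonal partial correlations
    np.fill_diagonal(partial_corr, 1.0)
    return partial_corr                                      # nonzero entries define graph edges
```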
Lack of confidence in ABC model choice ; Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models. Earlier, Grelaud et al. (2009) advocated the use of ABC for Bayesian model choice in the specific case of Gibbs random fields, relying on an inter-model sufficiency property to show that the approximation was legitimate. Having implemented ABC-based model choice in a wide range of phylogenetic models in the DIYABC software (Cornuet et al., 2008), we now present theoretical background as to why a generic use of ABC for model choice is ungrounded, since it depends on an unknown amount of information loss induced by the use of insufficient summary statistics. The approximation error of the posterior probabilities of the models under comparison may thus be unrelated to the computational effort spent in running an ABC algorithm. We then conclude that additional empirical verifications of the performance of the ABC procedure, such as those available in DIYABC, are necessary to conduct model choice.
Adaptive tworegime method application to front propagation ; The Adaptive TwoRegime Method ATRM is developed for hybrid multiscale stochastic simulation of reactiondiffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser latticebased models. The ATRM is a generalization of the previously developed TwoRegime Method Flegg et al, Journal of the Royal Society Interface, 2012 to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include a front propagation or spatiotemporal oscillations. In this paper, the ATRM is used for an indepth study of front propagation in a stochastic reactiondiffusion system which has its meanfield model given in terms of the Fisher equation Fisher, Annals of Eugenics, 1937. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on latticebased models, but there has been limited progress using offlattice Brownian dynamics models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher meanfield model. By modelling only the wavefront itself with the offlattice model, it is shown that the ATRM leads to the same Fisher wave results as purely offlattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.
Time series modeling by a regression approach based on a latent process ; Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization EM algorithm. The M step of the EM algorithm uses a multiclass Iterative Reweighted LeastSquares IRLS algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the BaumWelch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
The Nonlinear Analytical Envelope Equation in quadratic nonlinear crystals ; We here derive the so-called Nonlinear Analytical Envelope Equation (NAEE) inspired by the work of Conforti et al. [M. Conforti, A. Marini, T. X. Tran, D. Faccio, and F. Biancalana, "Interaction between optical fields and their conjugates in nonlinear media," Opt. Express 21, 31239-31252 (2013)], whose notation we follow. We present a complete model that includes $\chi^{(2)}$ terms [M. Conforti, F. Baronio, and C. De Angelis, "Nonlinear envelope equation for broadband optical pulses in quadratic media," Phys. Rev. A 81, 053841 (2010)], $\chi^{(3)}$ terms, and then extend the model to delayed Raman effects in the $\chi^{(3)}$ term. We therefore get a complete model for ultrafast pulse propagation in quadratic nonlinear crystals similar to the Nonlinear Wave Equation in Frequency domain [H. Guo, X. Zeng, B. Zhou, and M. Bache, "Nonlinear wave equation in frequency domain: accurate modeling of ultrafast interaction in anisotropic nonlinear media," J. Opt. Soc. Am. B 30, 494-504 (2013)], but where the envelope is modelled rather than the electrical field, while still keeping a sub-carrier level resolution. The advantage of the envelope formulation is that the physical origin of the additional terms that are included to model the physics at the carrier level becomes clearer, in contrast to the electric field equations that are more black-box expansions of the electrical field. We also point out that, by comparing our results to a very similar and widely used model [G. Genty, P. Kinsler, B. Kibler, and J. M. Dudley, "Nonlinear envelope equation modeling of sub-cycle dynamics and harmonic generation in nonlinear waveguides," Opt. Express 15, 5382-5387 (2007)], the Raman terms presented there will most likely lead to an artificially lower Raman effect.
Composing graphical models with neural networks for structured representations and fast inference ; We propose a general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths. Our model family augments graphical structure in latent variables with neural network observation models. For inference, we extend variational autoencoders to use graphical model approximating distributions with recognition networks that output conjugate potentials. All components of these models are learned simultaneously with a single objective, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick. We illustrate this framework with several example models and an application to mouse behavioral phenotyping.
Impacts of dark energy on weighing neutrinos: mass hierarchies considered ; Taking into account the mass splittings between three active neutrinos, we investigate the impacts of dark energy on constraining the total neutrino mass $\sum m_\nu$ by using recent cosmological observations. We consider two typical dark energy models, namely, the wCDM model and the holographic dark energy (HDE) model, which both have an additional free parameter compared with the $\Lambda$CDM model. We employ the Planck 2015 data of CMB temperature and polarization anisotropies, combined with low-redshift measurements on BAO distance scales, type Ia supernovae, the Hubble constant, and Planck lensing. Compared to the $\Lambda$CDM model, our study shows that the upper limit on $\sum m_\nu$ becomes much looser in the wCDM model while much tighter in the HDE model. In the HDE model, we obtain the 95% CL upper limit $\sum m_\nu < 0.105\,\mathrm{eV}$ for three degenerate neutrinos. This might be the most stringent constraint on $\sum m_\nu$ by far and is almost on the verge of diagnosing the neutrino mass hierarchies in the HDE model. However, the difference of $\chi^2$ is still not significant enough to distinguish the neutrino mass hierarchies, even though the minimal $\chi^2$ of the normal hierarchy is slightly smaller than that of the inverted hierarchy.
Evolutionary model discovery of causal factors behind the socio-agricultural behavior of the ancestral Pueblo ; Agent-based modeling of artificial societies offers a platform to test human-interpretable, causal explanations of human behavior that generate society-scale phenomena. However, parameter calibration is insufficient to conduct an adequate data-driven exploration of the importance of causal factors that constitute agent rules, resulting in models with limited causal accuracy and robustness. We introduce evolutionary model discovery, a framework that combines genetic programming and random forest regression to evaluate the importance of a set of causal factors hypothesized to affect the individual's decision-making process. We investigated the farm plot seeking behavior of the ancestral Pueblo of the Long House Valley, simulated in the Artificial Anasazi model, with our proposed framework. We evaluated the importance of causal factors not considered in the original model that we hypothesized to have affected the decision-making process. Contrary to the original model, where closeness was the sole factor driving farm plot selection, selection of higher quality land and desire for social presence are shown to be more important. In fact, model performance is improved when agents select farm plots further away from their failed farm plot. Farm selection strategies designed using these insights into the socio-agricultural behavior of the ancestral Pueblo significantly improved the model's accuracy and robustness.
Planck 2015 constraints on spatially-flat dynamical dark energy models ; We determine constraints on spatially-flat tilted dynamical dark energy XCDM and $\phi$CDM inflation models by analyzing Planck 2015 cosmic microwave background (CMB) anisotropy data and baryon acoustic oscillation (BAO) distance measurements. XCDM is a simple and widely used but physically inconsistent parameterization of dynamical dark energy, while the $\phi$CDM model is a physically consistent one in which a scalar field $\phi$ with an inverse power-law potential energy density powers the currently accelerating cosmological expansion. Both these models have one additional parameter compared to standard $\Lambda$CDM and both better fit the TT + lowP + lensing + BAO data than does the standard tilted flat-$\Lambda$CDM model, with $\Delta\chi^2 = -1.26$ ($-1.60$) for the XCDM ($\phi$CDM) model relative to the $\Lambda$CDM model. While this is a 1.1$\sigma$ (1.3$\sigma$) improvement over standard $\Lambda$CDM and so not significant, dynamical dark energy models cannot be ruled out. In addition, both dynamical dark energy models reduce the tension between the Planck 2015 CMB anisotropy and the weak lensing $\sigma_8$ constraints.
A quasi-physical dynamic reduced order model for thermospheric mass density via Hermitian Space Dynamic Mode Decomposition ; Thermospheric mass density is a major driver of satellite drag, the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO), pertinent to space situational awareness. Most existing models for the thermosphere are either physics-based or empirical. Physics-based models offer the potential for good predictive/forecast capabilities but require dedicated parallel resources for real-time evaluation and data assimilative capabilities that have yet to be developed. Empirical models are fast to evaluate, but offer very limited forecasting abilities. This paper presents a methodology for developing a reduced-order dynamic model from high-dimensional physics-based models by capturing the underlying dynamical behavior. This work develops a quasi-physical reduced order model (ROM) for thermospheric mass density using simulated output from NCAR's Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM). The ROM is derived using a dynamic system formulation from a large dataset of TIE-GCM simulations spanning 12 years and covering a complete solar cycle. Towards this end, a new reduced order modeling approach, based on Dynamic Mode Decomposition with control (DMDc), that uses the Hermitian space of the problem to derive the dynamics and input matrices in a tractable manner is developed. Results show that the ROM performs well in serving as a reduced order surrogate for TIE-GCM while almost always maintaining the forecast error to within 5% of the simulated densities after 24 hours.
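A minimal sketch of the standard DMDc regression step on a POD-reduced state, in the spirit of the reduced order model described above (the Hermitian-space variant of the paper is not reproduced here). The snapshot data, input drivers, and truncation rank are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_input, n_snap = 400, 2, 300
X_full = rng.standard_normal((n_state, n_snap))   # placeholder snapshot matrix
U = rng.standard_normal((n_input, n_snap - 1))    # exogenous drivers (e.g. solar indices)

# 1) POD compression of the state.
r = 10
Ur, _, _ = np.linalg.svd(X_full, full_matrices=False)
Ur = Ur[:, :r]
Z = Ur.T @ X_full                                 # reduced coordinates, r x m

# 2) DMDc regression: z_{k+1} ~ A z_k + B u_k, solved in one least-squares step.
Z0, Z1 = Z[:, :-1], Z[:, 1:]
Omega = np.vstack([Z0, U])                        # stacked state/input data
G = Z1 @ np.linalg.pinv(Omega)                    # concatenated [A | B]
A, B = G[:, :r], G[:, r:]

# 3) Forecast forward in reduced space and lift back to the full state.
z = Z[:, 0]
for k in range(n_snap - 1):
    z = A @ z + B @ U[:, k]
x_forecast = Ur @ z
print("forecast state shape:", x_forecast.shape)
```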
Reference Model of MultiEntity Bayesian Networks for Predictive Situation Awareness ; During the past quartercentury, situation awareness SAW has become a critical research theme, because of its importance. Since the concept of SAW was first introduced during World War I, various versions of SAW have been researched and introduced. Predictive Situation Awareness PSAW focuses on the ability to predict aspects of a temporally evolving situation over time. PSAW requires a formal representation and a reasoning method using such a representation. A MultiEntity Bayesian Network MEBN is a knowledge representation formalism combining Bayesian Networks BN with FirstOrder Logic FOL. MEBN can be used to represent uncertain situations supported by BN as well as complex situations supported by FOL. Also, efficient reasoning algorithms for MEBN have been developed. MEBN can be a formal representation to support PSAW and has been used for several PSAW systems. Although several MEBN applications for PSAW exist, very little work can be found in the literature that attempts to generalize a MEBN model to support PSAW. In this research, we define a reference model for MEBN in PSAW, called a PSAWMEBN reference model. The PSAWMEBN reference model enables us to easily develop a MEBN model for PSAW by supporting the design of a MEBN model for PSAW. In this research, we introduce two example use cases using the PSAWMEBN reference model to develop MEBN models to support PSAW a Smart Manufacturing System and a Maritime Domain Awareness System.
Stochastic and deterministic modelling of cell migration ; Mathematical models are vital interpretive and predictive tools used to assist in the understanding of cell migration. There are typically two approaches to modelling cell migration either microscale, discrete or macroscale, continuum. The discrete approach, using agentbased models ABMs, is typically stochastic and accounts for properties at the cellscale. Conversely, the continuum approach, in which cell density is often modelled as a system of deterministic partial differential equations PDEs, provides a global description of the migration at the population level. Deterministic models have the advantage that they are generally more amenable to mathematical analysis. They can lead to significant insights for situations in which the system comprises a large number of cells, at which point simulating a stochastic ABM becomes computationally expensive. However, finding an appropriate continuum model to describe the collective behaviour of a system of individual cells can be a difficult task. Deterministic models are often specified on a phenomenological basis, which reduces their predictive power. Stochastic ABMs have advantages over their deterministic continuum counterparts. In particular, ABMs can represent individuallevel behaviours such as cell proliferation and cellcell interaction appropriately and are amenable to direct parameterisation using experimental data. It is essential, therefore, to establish direct connections between stochastic microscale behaviours and deterministic macroscale dynamics. In this Chapter we describe how, in some situations, these two distinct modelling approaches can be unified into a discretecontinuum equivalence framework. We provide an overview of some of the more recent advances in this field and we point out some of the relevant questions that remain unanswered.
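A minimal sketch of the discrete/continuum comparison discussed above: an unbiased random-walk agent-based model on a one-dimensional lattice next to the matching deterministic diffusion (finite-difference) solution. Lattice size, hop probability, and agent number are illustrative, and crowding and proliferation are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_agents, steps, hop_p = 200, 5000, 500, 0.5

# --- Agent-based model: each agent hops left/right with probability hop_p/2.
pos = np.full(n_agents, L // 2)
for _ in range(steps):
    r = rng.random(n_agents)
    pos += (r < hop_p / 2).astype(int) - (r > 1 - hop_p / 2).astype(int)
    pos = np.clip(pos, 0, L - 1)                     # reflecting boundaries
abm_density = np.bincount(pos, minlength=L) / n_agents

# --- Continuum model: explicit finite differences for du/dt = D d2u/dx2,
#     with D = hop_p / 2 (lattice spacing and time step set to 1).
D = hop_p / 2
u = np.zeros(L)
u[L // 2] = 1.0
for _ in range(steps):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    lap[0] = u[1] - u[0]                             # no-flux boundaries
    lap[-1] = u[-2] - u[-1]
    u = u + D * lap

print("max |ABM - PDE| density difference:", np.abs(abm_density - u).max())
```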
Manifold A ModelAgnostic Framework for Interpretation and Diagnosis of Machine Learning Models ; Interpretation and diagnosis of machine learning models have gained renewed interest in recent years with breakthroughs in new approaches. We present Manifold, a framework that utilizes visual analysis techniques to support interpretation, debugging, and comparison of machine learning models in a more transparent and interactive manner. Conventional techniques usually focus on visualizing the internal logic of a specific model type i.e., deep neural networks, lacking the ability to extend to a more complex scenario where different model types are integrated. To this end, Manifold is designed as a generic framework that does not rely on or access the internal logic of the model and solely observes the input i.e., instances or features and the output i.e., the predicted result and probability distribution. We describe the workflow of Manifold as an iterative process consisting of three major phases that are commonly involved in the model development and diagnosis process inspection hypothesis, explanation reasoning, and refinement verification. The visual components supporting these tasks include a scatterplotbased visual summary that overviews the models' outcome and a customizable tabular view that reveals feature discrimination. We demonstrate current applications of the framework on the classification and regression tasks and discuss other potential machine learning use scenarios where Manifold can be applied.
TASI Lectures on Large N Tensor Models ; The first part of these lecture notes is mostly devoted to a comparative discussion of the three basic large $N$ limits, which apply to fields which are vectors, matrices, or tensors of rank three and higher. After a brief review of some physical applications of large $N$ limits, we present a few solvable examples in zero spacetime dimension. Using models with fields in the fundamental representation of $O(N)$, $O(N)^2$, or $O(N)^3$ symmetry, we compare their combinatorial properties and highlight a competition between the snail and melon diagrams. We exhibit the different methods used for solving the vector, matrix, and tensor large $N$ limits. In the latter example we review how the dominance of melonic diagrams follows when a special tetrahedral interaction is introduced. The second part of the lectures is mostly about the fermionic quantum mechanical tensor models, whose large $N$ limits are similar to that in the Sachdev-Ye-Kitaev (SYK) model. The minimal Majorana model with $O(N)^3$ symmetry and the tetrahedral Hamiltonian is reviewed in some detail; it is the closest tensor counterpart of the SYK model. Also reviewed are generalizations to complex fermionic tensors, including a model with $SU(N)^2\times O(N)\times U(1)$ symmetry, which is a tensor counterpart of the complex SYK model. The bosonic large $N$ tensor models, which are formally tractable in continuous spacetime dimension, are reviewed briefly at the end.
LIT Blockwise Intermediate Representation Training for Model Compression ; Knowledge distillation KD is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model. Hint training i.e., FitNets extends KD by regressing a student model's intermediate representation to a teacher model's intermediate representation. In this work, we introduce bLockwise Intermediate representation Training LIT, a novel model compression technique that extends the use of intermediate representations in deep network compression, outperforming KD and hint training. LIT has two key ideas 1 LIT trains a student of the same width but shallower depth as the teacher by directly comparing the intermediate representations, and 2 LIT uses the intermediate representation from the previous block in the teacher model as an input to the current student block during training, avoiding unstable intermediate representations in the student network. We show that LIT provides substantial reductions in network depth without loss in accuracy for example, LIT can compress a ResNeXt110 to a ResNeXt20 5.5x on CIFAR10 and a VDCNN29 to a VDCNN9 3.2x on Amazon Reviews without loss in accuracy, outperforming KD and hint training in network size for a given accuracy. We also show that applying LIT to identical studentteacher architectures increases the accuracy of the student model above the teacher model, outperforming the recentlyproposed Born Again Networks procedure on ResNet, ResNeXt, and VDCNN. Finally, we show that LIT can effectively compress GAN generators, which are not supported in the KD framework because GANs output pixels as opposed to probabilities.
Novel deep learning methods for track reconstruction ; For the past year, the HEP.TrkX project has been investigating machine learning solutions to LHC particle track reconstruction problems. A variety of models were studied that drew inspiration from computer vision applications and operated on an imagelike representation of tracking detector data. While these approaches have shown some promise, imagebased methods face challenges in scaling up to realistic HLLHC data due to high dimensionality and sparsity. In contrast, models that can operate on the spacepoint representation of track measurements hits can exploit the structure of the data to solve tasks efficiently. In this paper we will show two sets of new deep learning models for reconstructing tracks using spacepoint data arranged as sequences or connected graphs. In the first set of models, Recurrent Neural Networks RNNs are used to extrapolate, build, and evaluate track candidates akin to Kalman Filter algorithms. Such models can express their own uncertainty when trained with an appropriate likelihood loss function. The second set of models use Graph Neural Networks GNNs for the tasks of hit classification and segment classification. These models read a graph of connected hits and compute features on the nodes and edges. They adaptively learn which hit connections are important and which are spurious. The models are scaleable with simple architecture and relatively few parameters. Results for all models will be presented on ACTS generic detector simulated data.
Flow Network Models for Online Scheduling Realtime Tasks on Multiprocessors ; We consider the flow network model to solve the multiprocessor realtime task scheduling problems. Using the flow network model or its generic form, linear programming LP formulation, for the problems is not new. However, the previous works have limitations, for example, that they are classified as offline scheduling techniques since they establish a flow network model or an LP problem considering a very long time interval. In this study, we propose how to construct the flow network model for online scheduling periodic realtime tasks on multiprocessors. Our key idea is to construct the flow network only for the active instances of tasks at the current scheduling time, while guaranteeing the existence of an optimal schedule for the future instances of the tasks. The optimal scheduling is here defined to ensure that all realtime tasks meet their deadlines when the total utilization demand of the given tasks does not exceed the total processing capacity. We then propose the flow network modelbased polynomialtime scheduling algorithms. Advantageously, the flow network model allows the task workload to be collected unfairly within a certain time interval without losing the optimality. It thus leads us to designing three unfairbutoptimal scheduling algorithms on both continuous and discretetime models. Especially, our unfairbutoptimal scheduling algorithm on a discretetime model is, to the best of our knowledge, the first in the problem domain. We experimentally demonstrate that it significantly alleviates the scheduling overheads, i.e., the reduced number of preemptions with the comparable number of task migrations across processors.
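A minimal sketch of a flow-network feasibility test for serving task demands on m identical processors within one scheduling window, in the spirit of the formulation above (this is the generic max-flow construction, not the paper's online algorithm). Task demands, slot boundaries, and m are illustrative.

```python
import networkx as nx

m = 2                                        # number of processors
slots = [(0, 4), (4, 10)]                    # time slots within the window
demand = {"t1": 5.0, "t2": 4.0, "t3": 6.0}   # execution demand of each active task

G = nx.DiGraph()
for task, c in demand.items():
    G.add_edge("src", task, capacity=c)      # each task must receive its demand
for j, (a, b) in enumerate(slots):
    length = b - a
    for task in demand:
        # a task runs on at most one processor at a time, so it can receive at
        # most `length` units of service inside this slot
        G.add_edge(task, f"slot{j}", capacity=length)
    # m processors together provide m * length units of service in the slot
    G.add_edge(f"slot{j}", "snk", capacity=m * length)

flow_value, flow = nx.maximum_flow(G, "src", "snk")
print("feasible:", abs(flow_value - sum(demand.values())) < 1e-9)
print("service of t1 per slot:", dict(flow["t1"]))
```

A schedule exists exactly when the maximum flow saturates all source edges; the per-slot flow values can then be turned into an actual assignment, e.g. by McNaughton-style wrap-around within each slot.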
Astrochemical Kinetic Grid Models of Groups of Observed Molecular Abundances Taurus Molecular Cloud 1 TMC1 ; The emission line spectra of cyanoacetylene and methanol reveal chemical and physical heterogeneity on very small 0.1 pc scales toward the peak in cyanopolyyne emission in the Taurus Molecular Cloud, TMC1 CP. We generate grids of homogeneous chemical models using a threephase rate equation approach to obtain all timedependent abundances spanning the physical conditions determined from molecular tracers of compact and extended regions of emission along this line of sight. Each timedependent abundance is characterized by one of four features a maximumminimum, a monotonic increasedecrease, oscillatory behavior, or inertness. We similarly classify the timedependent agreement between modeled and observed abundances by calculating both the rootmeansquare logarithm difference and rootmeansquare deviation between the modeled and observed abundances at every point in our grid models for three groups of molecules i a composite group of all species present in both the observations and our chemical network G, ii the cyanopolyynes C HC3N, HC5N, HC7N, HC9N, and iii the oxygencontaining organic species methanol and acetaldehyde S CH3OH, CH3CHO. We discuss how the Bayesian uncertainties in the observed abundances constrain solutions within the grids of chemical models. The calculated best fit times at each grid point for each group are tabulated to reveal the minimum solution space of the grid models and the effects the Bayesian uncertainties have on the grid model solutions. The results of this approach separate the effect different physical conditions and modelfree parameters have on reproducing accurately the abundances of different groups of observed molecular species.
Parametric model order reduction and its application to inverse analysis of large nonlinear coupled cardiac problems ; Predictive high-fidelity finite element simulations of human cardiac mechanics commonly require a large number of structural degrees of freedom. Additionally, these models are often coupled with lumped-parameter models of hemodynamics. High computational demands, however, slow down model calibration and therefore limit the use of cardiac simulations in clinical practice. As cardiac models rely on several patient-specific parameters, just one solution corresponding to one specific parameter set does not at all meet clinical demands. Moreover, while solving the nonlinear problem, 90% of the computation time is spent solving linear systems of equations. We propose a novel approach to reduce only the structural dimension of the monolithically coupled structure-windkessel system by projection onto a lower-dimensional subspace. We obtain a good approximation of the displacement field as well as of key scalar cardiac outputs even with very few reduced degrees of freedom, while achieving considerable speedups. For subspace generation, we use proper orthogonal decomposition of displacement snapshots. To incorporate changes in the parameter set into our reduced order model, we provide a comparison of subspace interpolation methods. We further show how projection-based model order reduction can be easily integrated into a gradient-based optimization and demonstrate its performance in a real-world multivariate inverse analysis scenario. Using the presented projection-based model order reduction approach can significantly speed up model personalization and could be used for many-query tasks in a clinical setting.
Online Model Distillation for Efficient Video Inference ; High-quality computer vision models typically address the problem of understanding the general distribution of real-world images. However, most cameras observe only a very small fraction of this distribution. This offers the possibility of achieving more efficient inference by specializing compact, low-cost models to the specific distribution of frames observed by a single camera. In this paper, we employ the technique of model distillation (supervising a low-cost student model using the output of a high-cost teacher) to specialize accurate, low-cost semantic segmentation models to a target video stream. Rather than learn a specialized student model on offline data from the video stream, we train the student in an online fashion on the live video, intermittently running the teacher to provide a target for learning. Online model distillation yields semantic segmentation models that closely approximate their Mask R-CNN teacher with 7 to 17x lower inference runtime cost (11 to 26x in FLOPs), even when the target video's distribution is non-stationary. Our method requires no offline pretraining on the target video stream, achieves higher accuracy and lower cost than solutions based on flow or video object segmentation, and can exhibit better temporal stability than the original teacher. We also provide a new video dataset for evaluating the efficiency of inference over long running video streams.
PROPS Probabilistic personalization of blackbox sequence models ; We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pretrained black box models such as recurrent neural networks. The technique pins the blackbox prediction functions to source nodes of a hidden Markov model HMM, and uses the remaining nodes as perturbation nodes for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training a LSTM on the entire Wikipedia corpus of 2.5 million articles around 6.6 billion words, and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump's tweeting. We achieved good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump's speech departs from generic patterns in the Wikipedia corpus. Python code for both the PROPS training algorithm as well as experiment reproducibility is available at httpsgithub.comcylanceperturbedsequencemodel.
Adaptive-to-model hybrid of tests for regressions ; In model checking for regressions, nonparametric estimation-based tests usually have tractable limiting null distributions and are sensitive to oscillating alternative models, but suffer from the curse of dimensionality. In contrast, empirical process-based tests can, at the fastest possible rate, detect local alternatives distinct from the null model, but are less sensitive to oscillating alternative models and have intractable limiting null distributions. It has long been an open problem how to construct a test that can fully inherit the merits of these two types of tests and avoid their shortcomings. In this paper we propose a generic adaptive-to-model hybrid of moment and conditional moment-based tests to achieve this goal. Further, a significant feature of the method is that it makes nonparametric estimation-based tests, under the alternatives, also share the merits of existing empirical process-based tests. This methodology can be readily applied to other kinds of data and to constructing other hybrids. As a by-product in the sufficient dimension reduction field, the estimation of the residual-related central subspace is used to indicate the underlying models for model adaptation. A systematic study is devoted to showing when alternative models can be indicated and when they cannot. This estimation is of its own interest and can be applied to problems with other kinds of data. Numerical studies are conducted to verify the powerfulness of the proposed test.
Bayesian Allocation Model: Inference by Sequential Monte Carlo for Nonnegative Tensor Factorizations and Topic Models using Polya Urns ; We introduce a dynamic generative model, the Bayesian allocation model (BAM), which establishes explicit connections between nonnegative tensor factorization (NTF), graphical models of discrete probability distributions and their Bayesian extensions, and topic models such as latent Dirichlet allocation. BAM is based on a Poisson process whose events are marked by using a Bayesian network, where the conditional probability tables of this network are then integrated out analytically. We show that the resulting marginal process turns out to be a Polya urn, an integer-valued self-reinforcing process. This urn process, which we name a Polya-Bayes process, obeys certain conditional independence properties that provide further insight about the nature of NTF. These insights also let us develop space-efficient simulation algorithms that respect the potential sparsity of data: we propose a class of sequential importance sampling algorithms for computing NTF and approximating their marginal likelihood, which would be useful for model selection. The resulting methods can also be viewed as a model scoring method for topic models and discrete Bayesian networks with hidden variables. The new algorithms have favourable properties in the sparse data regime when contrasted with variational algorithms that become more accurate when the total sum of the elements of the observed tensor goes to infinity. We illustrate the performance on several examples and numerically study the behaviour of the algorithms for various data regimes.
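A minimal sketch of the self-reinforcing Polya urn dynamics referenced above: starting from prior pseudo-counts, each draw adds a ball of the drawn colour, which is also how a Dirichlet-multinomial sequence (conditional probability tables integrated out) can be simulated. The prior counts and sample size are illustrative; this is not the paper's full BAM or sequential Monte Carlo machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.0, 1.0, 1.0])        # prior pseudo-counts for 3 colours

def polya_urn_draws(alpha, n_draws, rng):
    counts = alpha.astype(float).copy()
    draws = []
    for _ in range(n_draws):
        p = counts / counts.sum()        # predictive probabilities
        k = rng.choice(len(counts), p=p)
        counts[k] += 1.0                 # reinforcement: return the ball plus one
        draws.append(k)
    return np.array(draws), counts

draws, counts = polya_urn_draws(alpha, 1000, rng)
print("final colour proportions:", (counts - alpha) / len(draws))
# Repeated runs converge to very different limiting proportions -- the
# signature self-reinforcing behaviour of the urn.
```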
Adaptive Ensemble Learning of Spatiotemporal Processes with Calibrated Predictive Uncertainty: A Bayesian Nonparametric Approach ; Ensemble learning is a mainstay in modern data science practice. Conventional ensemble algorithms assign to base models a set of deterministic, constant model weights that (1) do not fully account for individual models' varying accuracy across data subgroups, nor (2) provide uncertainty estimates for the ensemble prediction. These shortcomings can yield predictions that are precise but biased, which can negatively impact the performance of the algorithm in real-world applications. In this work, we present an adaptive, probabilistic approach to ensemble learning using a transformed Gaussian process as a prior for the ensemble weights. Given input features, our method optimally combines base models based on their predictive accuracy in the feature space, and provides interpretable estimates of the uncertainty associated with both model selection, as reflected by the ensemble weights, and the overall ensemble predictions. Furthermore, to ensure that this quantification of the model uncertainty is accurate, we propose additional machinery to nonparametrically model the ensemble's predictive cumulative distribution function (CDF) so that it is consistent with the empirical distribution of the data. We apply the proposed method to data simulated from a nonlinear regression model, and to generate a spatial prediction model and associated prediction uncertainties for fine particle levels in eastern Massachusetts, USA.
Efficient quantum and simulated annealing of Potts models using a half-hot constraint ; The Potts model is a generalization of the Ising model with $Q\,(\geq 2)$ components. In the fully connected ferromagnetic Potts model, a first-order phase transition is induced by varying thermal fluctuations. Therefore, the computational time required to obtain the ground states by simulated annealing exponentially increases with the system size. This study analytically confirms that the transverse magnetic-field quantum annealing induces a first-order phase transition. This result implies that quantum annealing does not exponentially accelerate the ground-state search of the ferromagnetic Potts model. To avoid the first-order phase transition, we propose an iterative optimization method using a half-hot constraint that is applicable to both quantum and simulated annealing. In the limit of $Q \to \infty$, a saddle point equation under the half-hot constraint is identical to the equation describing the behavior of the fully connected ferromagnetic Ising model, thus confirming a second-order phase transition. Furthermore, we verify the same relation between the fully connected Potts glass model and the Sherrington-Kirkpatrick model under the assumptions of the static approximation and the replica symmetric solution. The proposed method is expected to obtain low-energy states of the Potts models with high efficiency using Ising-type computers such as the D-Wave quantum annealer and the Fujitsu Digital Annealer.
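A minimal sketch of plain simulated annealing on the fully connected ferromagnetic Potts model discussed above (single-spin updates, without the paper's half-hot constraint); because of the first-order transition, such a schedule can struggle to reach the ordered ground state for large systems. System size, Q, coupling, and cooling schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, J = 200, 4, 1.0

def energy(s):
    # E = -(J/N) * sum_{i<j} delta(s_i, s_j), computed from colour counts
    counts = np.bincount(s, minlength=Q)
    return -(J / N) * np.sum(counts * (counts - 1) / 2)

s = rng.integers(0, Q, size=N)
T = 2.0
for sweep in range(400):
    T = max(0.01, T * 0.99)                      # geometric cooling schedule
    for _ in range(N):
        i = rng.integers(N)
        new, old = rng.integers(Q), s[i]
        if new == old:
            continue
        counts = np.bincount(s, minlength=Q)
        # energy change from moving spin i from colour `old` to colour `new`
        dE = -(J / N) * (counts[new] - (counts[old] - 1))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = new

print("largest colour fraction:", np.bincount(s, minlength=Q).max() / N)
print("final energy per spin:", energy(s) / N)
```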
Analyzing Learned Molecular Representations for Property Prediction ; Advancements in neural machinery have led to a wide range of algorithmic solutions for molecular property prediction. Two classes of models in particular have yielded promising results neural networks applied to computed molecular fingerprints or expertcrafted descriptors, and graph convolutional neural networks that construct a learned molecular representation by operating on the graph structure of the molecule. However, recent literature has yet to clearly determine which of these two methods is superior when generalizing to new chemical space. Furthermore, prior research has rarely examined these new models in industry research settings in comparison to existing employed models. In this paper, we benchmark models extensively on 19 public and 16 proprietary industrial datasets spanning a wide variety of chemical endpoints. In addition, we introduce a graph convolutional model that consistently matches or outperforms models using fixed molecular descriptors as well as previous graph neural architectures on both public and proprietary datasets. Our empirical findings indicate that while approaches based on these representations have yet to reach the level of experimental reproducibility, our proposed model nevertheless offers significant improvements over models currently used in industrial workflows.
beamModelTester software framework for testing radio telescope beams ; The flux, polarimetric and spectral response of phased array radio telescopes with no moving parts such as LOFAR is known to vary considerably with orientation of the source to the receivers. Calibration models exist for this dependency such as those that are used in the LOFAR pipeline. Presented here is a system for comparing the predicted outputs from any given model with the results of an observation. In this paper, a sample observation of a bright source, Cassiopeia A, is used to demonstrate the software in operation, by providing an observation and a model of that observation which can be compared with one another. The package presented here is flexible to allow it to be used with other models and sources. The system operates by first calculating the predictions of the model and the results of an observation of linear fluxes and Stokes parameters separately. The model and observed values are then joined using the variables common to both, time and frequency. Normalisation and RFI excision are carried out and the differences between the prediction and the observation are calculated. A wide selection of 2, 3 and 4dimensional plots are generated to illustrate the dependence of the model and the observation as well as the difference between them on independent parameters time, frequency, altitude and azimuth. Thus, beamModelTester provides a framework by which it is possible to calibrate and propose refinements to models and to compare models with one another.
Optimal Control for Chemotaxis Systems and AdjointBased Optimization with MultipleRelaxationTime Lattice Boltzmann Models ; This paper is devoted to continuous and discrete adjointbased optimization approaches for optimal control problems governed by an important class of Nonlinear Coupled Anisotropic ConvectionDiffusion Chemotaxistype System NCACDCS. This study is motivated by the fact that the considered complex systems with complex geometries appear in diverse biochemical, biological and biosocial criminology problems. To solve numerically the corresponding nonlinear optimization problems, the primal problem NCACDCS is discretised by a coupled Lattice Boltzmann Method with a general MultipleRelaxationTime collision operators MRT while for the adjoint problem, an Adjoint MultipleRelaxationTime lattice Boltzmann model AMRT is proposed and investigated. First, the optimal control problems are formulated and firstorder necessary optimality conditions are established by using sensitivity and adjoint calculus. The resulting problems are discretised by the coupled MRT and AMRT models and solved via gradient descent methods. First of all, an efficient and stable modified MRT model for NCACDCS is developed, and through the ChapmanEnskog analysis we show that NCACDCS can be correctly recovered from the proposed MRT model. For the adjoint problem, the discretisation strategy is based on AMRT model, which is found to be as simple as MRT model with also highlyefficient parallel nature. The derivation of AMRT model and the discrete cost functional gradient are derived mathematically in detail using the developed MRT model. The obtained method is reliable, efficient, practical to implement and can be easily incorporated into any existing MRT code.
Stochastic dynamical modeling of turbulent flows ; Advanced measurement techniques and high performance computing have made large data sets available for a wide range of turbulent flows that arise in engineering applications. Drawing on this abundance of data, dynamical models can be constructed to reproduce structural and statistical features of turbulent flows, opening the way to the design of effective modelbased flow control strategies. This review describes a framework for completing secondorder statistics of turbulent flows by models that are based on the NavierStokes equations linearized around the turbulent mean velocity. Systems theory and convex optimization are combined to address the inherent uncertainty in the dynamics and the statistics of the flow by seeking a suitable parsimonious correction to the prior linearized model. Specifically, dynamical couplings between states of the linearized model dictate structural constraints on the statistics of flow fluctuations. Thence, coloredintime stochastic forcing that drives the linearized model is sought to account for and reconcile dynamics with available data i.e., partially known second order statistics. The number of dynamical degrees of freedom that are directly affected by stochastic excitation is minimized as a measure of model parsimony. The spectral content of the resulting coloredintime stochastic contribution can alternatively be seen to arise from a lowrank structural perturbation of the linearized dynamical generator, pointing to suitable dynamical corrections that may account for the absence of the nonlinear interactions in the linearized model.
On perfectness in Gaussian graphical models ; Knowing when a graphical model is perfect to a distribution is essential in order to relate separation in the graph to conditional independence in the distribution, and this is particularly important when performing inference from data. When the model is perfect, there is a one-to-one correspondence between conditional independence statements in the distribution and separation statements in the graph. Previous work has shown that almost all models based on linear directed acyclic graphs as well as Gaussian chain graphs are perfect, the latter of which subsumes Gaussian graphical models (i.e., the undirected Gaussian models) as a special case. However, the complexity of chain graph models leads to a proof of this result which is indirect and mired by the complications of parameterizing this general class. In this paper, we directly approach the problem of perfectness for the Gaussian graphical models, and provide a new proof, via a more transparent parametrization, that almost all such models are perfect. Our approach is based on, and substantially extends, a construction of Lněnička and Matúš showing the existence of a perfect Gaussian distribution for any graph.
Testing Deep Learning Models for Image Analysis Using Object-Relevant Metamorphic Relations ; Deep learning models are widely used for image analysis. While they offer high performance in terms of accuracy, people are concerned about whether these models inappropriately make inferences using irrelevant features that are not encoded from the target object in a given image. To address the concern, we propose a metamorphic testing approach that assesses whether a given inference is made based on irrelevant features. Specifically, we propose two novel metamorphic relations to detect such inappropriate inferences. We applied our approach to 10 image classification models and 10 object detection models, with three large datasets, i.e., ImageNet, COCO, and Pascal VOC. Over 5.3% of the top-5 correct predictions made by the image classification models are subject to inappropriate inferences using irrelevant features. The corresponding rate for the object detection models is over 8.5%. Based on the findings, we further designed a new image generation strategy that can effectively attack existing models. Compared with a baseline approach, our strategy can double the success rate of attacks.
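A minimal sketch of one object-relevant metamorphic relation in the spirit of the approach above: if pixels outside the annotated object region are replaced, the predicted label should not change; a flip suggests the inference relied on irrelevant features. The classifier and bounding box below are hypothetical stand-ins, not the paper's exact relations or test subjects.

```python
import numpy as np

def predict(image):
    # placeholder black-box classifier: returns a class id for an HxWx3 array
    return int(image.mean() > 0.5)

def mutate_background(image, box, rng):
    """Replace everything outside `box` = (y0, y1, x0, x1) with random noise."""
    y0, y1, x0, x1 = box
    mutated = rng.random(image.shape)
    mutated[y0:y1, x0:x1] = image[y0:y1, x0:x1]   # keep the object region intact
    return mutated

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
box = (16, 48, 16, 48)                            # object-relevant region

original_label = predict(image)
violations = 0
for _ in range(20):                               # several follow-up backgrounds
    follow_up = mutate_background(image, box, rng)
    if predict(follow_up) != original_label:
        violations += 1
print("metamorphic-relation violations:", violations, "/ 20")
```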
POSITION PAPER: Credibility of In Silico Trial Technologies: A Theoretical Framing ; Different research communities have developed various approaches to assess the credibility of predictive models. Each approach usually works well for a specific type of model, and under some epistemic conditions that are normally satisfied within that specific research domain. Some regulatory agencies have recently started to consider evidence of safety and efficacy of new medical products obtained using computer modelling and simulation, which is referred to as In Silico Trials; this has raised attention in the computational medicine research community to the regulatory science aspects of this emerging discipline. But this poses a foundational problem in the domain of biomedical research: the use of computer modelling is relatively recent, without a widely accepted epistemic framing for the problem of model credibility. Also, because of the inherent complexity of living organisms, biomedical modellers tend to use a variety of modelling methods, sometimes mixing them in the solution of a single problem. In such a context, merely adopting credibility approaches developed within other research communities might not be appropriate. In this position paper we propose a theoretical framing for the problem of assessing the credibility of predictive models for In Silico Trials, which accounts for the epistemic specificity of this research field and is general enough to be used for different types of models.
An adaptive voter model on simplicial complexes ; Collective decision making processes lie at the heart of many social, political and economic challenges. The classical voter model is a well-established conceptual model to study such processes. In this work, we define a new form of adaptive (or coevolutionary) voter model posed on a simplicial complex, i.e., on a certain class of hypernetworks/hypergraphs. We use the persuasion rule along edges of the classical voter model and the recently studied rewiring rule of edges towards like-minded nodes, and introduce a new peer pressure rule applied to three nodes connected via a 2-simplex. This simplicial adaptive voter model is studied via numerical simulation. We show that adding the effect of peer pressure to an adaptive voter model leaves its fragmentation transition, i.e., the transition upon varying the rewiring rate from a single majority state into a fragmented state of two different opinion subgraphs, intact. Yet, above and below the fragmentation transition, we observe that the peer pressure has substantial quantitative effects. It accelerates the transition to a single-opinion state below the transition and also speeds up the system dynamics towards fragmentation above the transition. Furthermore, we quantify that there is a multiscale hierarchy in the model leading to the depletion of 2-simplices before the depletion of active edges. This leads to the conjecture that many other dynamic network models on simplicial complexes may show a similar behaviour with respect to the sequential evolution of simplices of different dimensions.
6DLS: Modeling Nonplanar Frictional Surface Contacts for Grasping using 6D Limit Surfaces ; Robot grasping with deformable gripper jaws results in non-planar surface contacts if the jaws deform to the non-planar local geometry of an object. The frictional force and torque that can be transmitted through a non-planar surface contact are both three-dimensional, resulting in a six-dimensional frictional wrench (6DFW). Applying traditional planar contact models to such contacts leads to over-conservative results, as the models do not consider the non-planar surface geometry and only compute a three-dimensional subset of the 6DFW. To address this issue, we derive the 6DFW for non-planar surfaces by combining concepts of differential geometry and Coulomb friction. We also propose two 6D limit surface (6DLS) models, generalized from well-known three-dimensional LS (3DLS) models, which describe the friction-motion constraints for a contact. We evaluate the 6DLS models by fitting them to the 6DFW samples obtained from six parametric surfaces and 2,932 meshed contacts from finite element method simulations of 24 rigid objects. We further present an algorithm to predict multi-contact grasp success by building a grasp wrench space with the 6DLS model of each contact. To evaluate the algorithm, we collected 1,035 physical grasps of ten 3D-printed objects with a KUKA robot and a deformable parallel-jaw gripper. In our experiments, the algorithm achieves 66.8% precision, a metric inversely related to false positive predictions, and 76.9% recall, a metric inversely related to false negative predictions. The 6DLS models increase recall by up to 26.1% over 3DLS models with similar precision.
Transient Dynamics of Infection Transmission in a Simulated Intensive Care Unit ; Healthcareassociated infections HAIs remain a public health problem. Previous work showed intensive care unit ICU population structure impacts methicillinresistant Staphylococcus aureus MRSA rates. Unexplored in that work was the transient dynamics of this system. We consider the dynamics of MRSA in an ICU in three different models 1 a RossMcDonald model with a single healthcare staff type, 2 a RossMcDonald model with nurses and doctors considered as separate populations and 3 a metapopulation model that segments patients into smaller groups seen by a single nurse. The basic reproduction number, R0 is derived using the Next Generation Matrix method, while the importance of the position of patients within the metapopulation model is assessed via stochastic simulation. The singlestaff model had an R0 of 0.337, while the other two models had R0s of 0.278. The metapopulation model's R0 was not sensitive to the time nurses spent with their assigned patients vs. unassigned patients. This suggests previous results showing that simulated infection rates are dependent on this parameter are the result of differences in the transient dynamics between the models, rather than differing longterm equilibria.
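A minimal sketch of the Next Generation Matrix computation of R0 for a generic two-group (patient/staff) transmission model, in the spirit of the Ross-McDonald-style analysis above. The transmission and removal rates below are illustrative placeholders, not the paper's fitted ICU parameters.

```python
import numpy as np

# F[i, j]: rate of new infections in group i caused by an infective in group j
# (patients acquire only from staff and vice versa, as in a vector-host model).
beta_ps = 0.08     # staff -> patient transmission rate
beta_sp = 0.05     # patient -> staff (contamination) rate
F = np.array([[0.0,     beta_ps],
              [beta_sp, 0.0    ]])

# V[i, i]: removal rates per group (e.g. decolonization, hand hygiene).
gamma_p, gamma_s = 0.10, 0.60
V = np.diag([gamma_p, gamma_s])

ngm = F @ np.linalg.inv(V)                    # next generation matrix
R0 = max(abs(np.linalg.eigvals(ngm)))         # spectral radius
print("R0 =", round(R0, 3))
```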
Domain Expansion in DNNbased Acoustic Models for Robust Speech Recognition ; Training acoustic models with sequentially incoming data while both leveraging new data and avoiding the forgetting effect is an essential obstacle to achieving human intelligence level in speech recognition. An obvious approach to leverage data from a new domain e.g., new accented speech is to first generate a comprehensive dataset of all domains, by combining all available data, and then use this dataset to retrain the acoustic models. However, as the amount of training data grows, storing and retraining on such a largescale dataset becomes practically impossible. To deal with this problem, in this study, we study several domain expansion techniques which exploit only the data of the new domain to build a stronger model for all domains. These techniques are aimed at learning the new domain with a minimal forgetting effect i.e., they maintain original model performance. These techniques modify the adaptation procedure by imposing new constraints including 1 weight constraint adaptation WCA keeping the model parameters close to the original model parameters; 2 elastic weight consolidation EWC slowing down training for parameters that are important for previously established domains; 3 soft KLdivergence SKLD restricting the KLdivergence between the original and the adapted model output distributions; and 4 hybrid SKLDEWC incorporating both SKLD and EWC constraints. We evaluate these techniques in an accent adaptation task in which we adapt a deep neural network DNN acoustic model trained with native English to three different English accents Australian, Hispanic, and Indian. The experimental results show that SKLD significantly outperforms EWC, and EWC works better than WCA. The hybrid SKLDEWC technique results in the best overall performance.
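A minimal PyTorch sketch of two of the domain-expansion constraints listed above: an EWC penalty that slows down changes to parameters important for the original domain, and a soft KL term that keeps the adapted model's outputs close to the original model's. The Fisher weights, temperature, and lambda values are illustrative, and this is not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def ewc_penalty(model, old_params, fisher):
    """sum_i F_i * (theta_i - theta_i_old)^2 over named parameters."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return loss

def skld_penalty(new_logits, old_logits, T=2.0):
    """KL(old || new) between softened output distributions."""
    return F.kl_div(F.log_softmax(new_logits / T, dim=-1),
                    F.softmax(old_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)

# Sketch of one adaptation step (model_old is a frozen copy of the original):
#   ce = F.cross_entropy(model(x), y)
#   loss = ce + lam_ewc * ewc_penalty(model, old_params, fisher) \
#             + lam_kl * skld_penalty(model(x), model_old(x).detach())
#   loss.backward(); optimizer.step()
```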
A Nonparametric Bayesian Framework for Uncertainty Quantification in Stochastic Simulation ; When we use simulation to assess the performance of stochastic systems, the input models used to drive simulation experiments are often estimated from finite realworld data. There exist both input model and simulation estimation uncertainties in the system performance estimates. Without strong prior information on the input models and the system mean response surface, in this paper, we propose a Bayesian nonparametric framework to quantify the impact from both sources of uncertainty. Specifically, since the realworld data often represent the variability caused by various latent sources of uncertainty, Dirichlet Processes Mixtures DPM based nonparametric input models are introduced to model a mixture of heterogeneous distributions, which can faithfully capture the important features of realworld data, such as multimodality and skewness. Bayesian posteriors of flexible input models characterize the input model estimation uncertainty, which automatically accounts for both model selection and parameter value uncertainty. Then, input model estimation uncertainty is propagated to outputs by using direct simulation. Thus, under very general conditions, our framework delivers an empirical credible interval accounting for both input and simulation uncertainties. A variance decomposition is further developed to quantify the relative contributions from both sources of uncertainty. Our approach is supported by rigorous theoretical and empirical study.
Application of a new information priority accumulated grey model with time power to predict shortterm wind turbine capacity ; Wind energy makes a significant contribution to global power generation. Predicting wind turbine capacity is becoming increasingly crucial for cleaner production. For this purpose, a new information priority accumulated grey model with time power is proposed to predict shortterm wind turbine capacity. Firstly, the computational formulas for the time response sequence and the prediction values are deduced by grey modeling technique and the definite integral trapezoidal approximation formula. Secondly, an intelligent algorithm based on particle swarm optimization is applied to determine the optimal nonlinear parameters of the novel model. Thirdly, three real numerical examples are given to examine the accuracy of the new model by comparing with six existing prediction models. Finally, based on the wind turbine capacity from 2007 to 2017, the proposed model is established to predict the total wind turbine capacity in Europe, North America, Asia, and the world. The numerical results reveal that the novel model is superior to other forecasting models. It has a great advantage for small samples with new characteristic behaviors. Besides, reasonable suggestions are put forward from the standpoint of the practitioners and governments, which has high potential to advance the sustainable improvement of clean energy production in the future.
A Transformation Perspective on Marginal and Conditional Models ; Clustered observations are ubiquitous in controlled and observational studies and arise naturally in multicentre trials or longitudinal surveys. We present a novel model for the analysis of clustered observations where the marginal distributions are described by a linear transformation model and the correlations by a joint multivariate normal distribution. The joint model provides an analytic formula for the marginal distribution. Owing to the richness of transformation models, the techniques are applicable to any type of response variable, including bounded, skewed, binary, ordinal, or survival responses. We demonstrate how the common normal assumption for reaction times can be relaxed in the sleep deprivation benchmark dataset and report marginal odds ratios for the notoriously difficult toe nail data. We furthermore discuss the analysis of two clinical trials aiming at the estimation of marginal treatment effects. In the first trial, pain was repeatedly assessed on a bounded visual analog scale and marginal proportionalodds models are presented. The second trial reported diseasefree survival in rectal cancer patients, where the marginal hazard ratio from Weibull and Cox models is of special interest. An empirical evaluation compares the performance of the novel approach to general estimation equations for binary responses and to conditional mixedeffects models for continuous responses. An implementation is available in the tram addon package to the R system and was benchmarked against established models in the literature.
Reduceddimensional Monte Carlo Maximum Likelihood for Latent Gaussian Random Field Models ; Monte Carlo maximum likelihood MCML provides an elegant approach to find maximum likelihood estimators MLEs for latent variable models. However, MCML algorithms are computationally expensive when the latent variables are highdimensional and correlated, as is the case for latent Gaussian random field models. Latent Gaussian random field models are widely used, for example in building flexible regression models and in the interpolation of spatially dependent data in many research areas such as analyzing count data in disease modeling and presenceabsence satellite images of ice sheets. We propose a computationally efficient MCML algorithm by using a projectionbased approach to reduce the dimensions of the random effects. We develop an iterative method for finding an effective importance function; this is generally a challenging problem and is crucial for the MCML algorithm to be computationally feasible. We find that our method is applicable to both continuous latent Gaussian process and discrete domain latent Gaussian Markov random field models. We illustrate the application of our methods to challenging simulated and real data examples for which maximum likelihood estimation would otherwise be very challenging. Furthermore, we study an often overlooked challenge in MCML approaches to latent variable models practical issues in calculating standard errors of the resulting estimates, and assessing whether resulting confidence intervals provide nominal coverage. Our study therefore provides useful insights into the details of implementing MCML algorithms for highdimensional latent variable models.
Unified model selection approach based on minimum description length principle in Granger causality analysis ; Granger causality analysis (GCA) provides a powerful tool for uncovering the patterns of brain connectivity using neuroimaging techniques. Conventional GCA applies two different mathematical theories in a two-stage scheme: (1) the Bayesian information criterion (BIC) or Akaike information criterion (AIC) for the regression model orders associated with endogenous and exogenous information; (2) F-statistics for determining the causal effects of exogenous variables. While specifying endogenous and exogenous effects are essentially the same model selection problem, this scheme can produce different benchmarks in the two stages and therefore degrade the performance of GCA. In this work, we present a unified model selection approach based on the minimum description length (MDL) principle for GCA in the context of the general regression model paradigm. Compared with conventional methods, our approach emphasizes that a single mathematical theory should be held throughout the GCA process. Under this framework, all candidate models within the model space can be compared freely in terms of their code length, without the need for an intermediate model. We illustrate its advantages over the conventional two-stage GCA approach in synthetic experiments with a 3-node network and a 5-node network. The unified model selection approach is capable of identifying the actual connectivity while avoiding the false influences of noise. More importantly, the proposed approach obtained more consistent results in a challenging fMRI dataset for causality investigation, a mental calculation network under visual and auditory stimuli, respectively. The proposed approach has the potential to accommodate other Granger causality representations in other function spaces. The comparison between different GC representations in different function spaces can also be naturally dealt with in this framework.
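A minimal sketch of MDL-style order selection for a single autoregressive series, as a simplified stand-in for the bivariate regression models used in GCA; the two-part code length below (data cost plus parameter cost) is a standard approximation and may differ from the exact formulation in the paper, and the simulated AR(2) series is only illustrative.

```python
import numpy as np

def ar_design(x, p):
    """Lagged design matrix for an AR(p) model of series x."""
    n = len(x)
    X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
    y = x[p:]
    return X, y

def mdl_code_length(x, p):
    """Two-part MDL approximation for an AR(p) fit:
    data cost (n/2) log(RSS/n) plus parameter cost (p/2) log n."""
    X, y = ar_design(x, p)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n = len(y)
    return 0.5 * n * np.log(rss / n) + 0.5 * p * np.log(n)

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):                      # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

orders = range(1, 8)
best = min(orders, key=lambda p: mdl_code_length(x, p))
print("selected order:", best)               # typically 2 for this series
```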
Data-driven recovery of hidden physics in reduced order modeling of fluid flows ; In this article, we introduce a modular hybrid analysis and modeling (HAM) approach to account for hidden physics in reduced order modeling (ROM) of parameterized systems relevant to fluid dynamics. The hybrid ROM framework is based on using first principles to model the known physics in conjunction with utilizing data-driven machine learning tools to model the remaining residual that is hidden in data. This framework employs proper orthogonal decomposition as a compression tool to construct orthonormal bases and Galerkin projection (GP) as a model to build the dynamical core of the system. Our proposed methodology hence compensates for structural or epistemic uncertainties in models and utilizes the observed data snapshots to compute true modal coefficients spanned by these bases. The GP model is then corrected at every time step with a data-driven rectification using a long short-term memory (LSTM) neural network architecture to incorporate hidden physics. A Grassmannian manifold approach is also adapted for interpolating basis functions to unseen parametric conditions. The control parameter governing the system's behavior is thus implicitly considered through true modal coefficients as input features to the LSTM network. The effectiveness of the HAM approach is discussed through illustrative examples that are generated synthetically to take hidden physics into account. Our approach thus provides insights addressing a fundamental limitation of physics-based models when the governing equations are incomplete representations of the underlying physical processes.
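A minimal sketch of the compression step this framework builds on: computing a POD basis from snapshot data via the SVD and projecting snapshots onto it to obtain the "true" modal coefficients. The hybrid part of the method (Galerkin dynamics plus LSTM rectification of the residual) is not reproduced here, and the snapshot field below is synthetic.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD compression: orthonormal basis of rank r from mean-subtracted
    snapshots (columns are states at different times)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :r], mean, s

def modal_coefficients(snapshots, basis, mean):
    """'True' modal coefficients obtained by projecting data onto the basis;
    the hybrid framework trains an LSTM on the mismatch between these and
    the Galerkin-projection prediction (the hidden-physics residual)."""
    return basis.T @ (snapshots - mean)

# toy snapshot matrix: 200 spatial points, 50 time snapshots
rng = np.random.default_rng(1)
xgrid = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 1, 50)[None, :]
snaps = np.sin(2 * np.pi * xgrid) * np.cos(4 * np.pi * t) \
      + 0.1 * rng.standard_normal((200, 50))

phi, mean, s = pod_basis(snaps, r=4)
a_true = modal_coefficients(snaps, phi, mean)
energy = np.cumsum(s**2) / np.sum(s**2)
print("retained energy with 4 modes: %.3f" % energy[3])
```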
Melanoma detection with electrical impedance spectroscopy and dermoscopy using joint deep learning models ; The initial assessment of skin lesions is typically based on dermoscopic images. As this is a difficult and time-consuming task, machine learning methods using dermoscopic images have been proposed to assist human experts. Other approaches have studied electrical impedance spectroscopy (EIS) as a basis for clinical decision support systems. Both methods represent different ways of measuring skin lesion properties, as dermoscopy relies on visible light and EIS uses electric currents. Thus, the two methods might carry complementary features for lesion classification. Therefore, we propose joint deep learning models considering both EIS and dermoscopy for melanoma detection. For this purpose, we first study machine learning methods for EIS that incorporate domain knowledge and previously used heuristics into the design process. As a result, we propose a recurrent model with state-max-pooling which automatically learns the relevance of different EIS measurements. Second, we combine this new model with different convolutional neural networks that process dermoscopic images. We study ensembling approaches and also propose a cross-attention module guiding information exchange between the EIS and dermoscopy models. In general, combinations of EIS and dermoscopy clearly outperform models that only use either EIS or dermoscopy. We show that our attention-based, combined model outperforms other models, with specificities of 34.4% (CI 31.3-38.4%), 34.7% (CI 31.0-38.8%) and 53.7% (CI 50.1-57.6%) for dermoscopy, EIS and the combined model, respectively, at a clinically relevant sensitivity of 98%.
Projecting Flood-Inducing Precipitation with a Bayesian Analogue Model ; The hazard of pluvial flooding is largely influenced by the spatial and temporal dependence characteristics of precipitation. When extreme precipitation possesses strong spatial dependence, the risk of flooding is amplified due to catchment factors that cause runoff accumulation, such as topography. Temporal dependence can also increase flood risk, as storm water drainage systems operating at capacity can be overwhelmed by heavy precipitation occurring over multiple days. While transformed Gaussian processes are common choices for modeling precipitation, their weak tail dependence may lead to underestimation of flood risk. Extreme value models such as generalized Pareto processes for threshold exceedances and max-stable models are attractive alternatives, but are difficult to fit when the number of observation sites is large, and are of little use for modeling the bulk of the distribution, which may also be of interest to water management planners. While the atmospheric dynamics governing precipitation are complex and difficult to fully incorporate into a parsimonious statistical model, non-mechanistic analogue methods that approximate those dynamics have proven to be promising approaches to capturing the temporal dependence of precipitation. In this paper, we present a Bayesian analogue method that leverages large, synoptic-scale atmospheric patterns to make precipitation forecasts. Changing spatial dependence across varying intensities is modeled as a mixture of spatial Student-t processes that can accommodate both strong and weak tail dependence. The proposed model demonstrates improved performance at capturing the distribution of extreme precipitation over Community Atmosphere Model (CAM) 5.2 forecasts.
Variance partitioning in multilevel models for count data ; A first step when fitting multilevel models to continuous responses is to explore the degree of clustering in the data. Researchers fit variance-component models and then report the proportion of variation in the response that is due to systematic differences between clusters. Equally, they report the response correlation between units within a cluster. These statistics are popularly referred to as variance partition coefficients (VPCs) and intraclass correlation coefficients (ICCs). When fitting multilevel models to categorical (binary, ordinal, or nominal) and count responses, these statistics prove more challenging to calculate. For categorical response models, researchers appeal to their latent response formulations and report VPCs/ICCs in terms of latent continuous responses envisaged to underlie the observed categorical responses. For standard count response models, however, there are no corresponding latent response formulations. More generally, there is a paucity of guidance on how to partition the variation. As a result, applied researchers are likely to avoid or inadequately report and discuss the substantive importance of clustering and cluster effects in their studies. A recent article drew attention to a little-known exact algebraic expression for the VPC/ICC for the special case of the two-level random-intercept Poisson model. In this article, we make a substantial new contribution. First, we derive exact VPC/ICC expressions for more flexible negative binomial models that allow for overdispersion, a phenomenon which often occurs in practice. Then we derive exact VPC/ICC expressions for three-level and random-coefficient extensions to these models. We illustrate our work with an application to student absenteeism.
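For concreteness, a small sketch of the exact VPC for the special case mentioned above, the two-level random-intercept Poisson model: the cluster means are lognormal, so the between-cluster variance and the expected within-cluster (Poisson) variance have closed forms. The parameter values are illustrative, and the paper's negative binomial, three-level, and random-coefficient extensions add further terms not reproduced here.

```python
import numpy as np

def vpc_poisson_random_intercept(beta0, sigma2):
    """Exact VPC for a two-level random-intercept Poisson model,
    log mu_ij = beta0 + u_j,  u_j ~ N(0, sigma2).

    Between-cluster variance is the variance of the lognormal cluster means;
    within-cluster variance is the expected conditional (Poisson) variance.
    """
    between = np.exp(2 * beta0 + sigma2) * np.expm1(sigma2)
    within = np.exp(beta0 + sigma2 / 2)          # E[Var(y | u)] = E[mu]
    return between / (between + within)

# example: log-mean of 1.0 and cluster variance of 0.25 on the log scale
print(round(vpc_poisson_random_intercept(1.0, 0.25), 3))
```

Note that, unlike the Gaussian case, the VPC here depends on the intercept beta0 as well as the cluster variance, which is one reason the count case needs separate treatment.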
Predicting overweight and obesity in later life from childhood data: A review of predictive modeling approaches ; Background: Overweight and obesity are an increasing phenomenon worldwide. Reliably predicting future overweight or obesity early in childhood could enable successful intervention by experts. While a lot of research has been done using explanatory modeling methods, the capabilities of machine learning, and of predictive modeling in particular, remain mainly unexplored. In predictive modeling, models are validated with previously unseen examples, giving a more accurate estimate of their performance and generalization ability in real-life scenarios. Objective: To find and review existing overweight or obesity research from the perspective of employing childhood data and predictive modeling methods. Methods: The initial phase included bibliographic searches using relevant search terms in PubMed, the IEEE database and Google Scholar. The second phase consisted of iteratively searching references of potential studies and recent research that cites the potential studies. Results: Eight research articles and three review articles were identified as relevant for this review. Conclusions: Prediction models with high performance either have a relatively short time period to predict and/or are based on late childhood data. Logistic regression is currently the most often used method in forming the prediction models. In addition to the child's own weight and height information, maternal weight status or body mass index was often used as a predictor in the models.
Adversarial Analysis of Natural Language Inference Systems ; The release of large natural language inference (NLI) datasets like SNLI and MNLI has led to rapid development and improvement of completely neural systems for the task. Most recently, heavily pretrained, Transformer-based models like BERT and MT-DNN have reached near-human performance on these datasets. However, these standard datasets have been shown to contain many annotation artifacts, allowing models to shortcut understanding using simple fallible heuristics and still perform well on the test set. So it is no surprise that many adversarial challenge datasets have been created that cause models trained on standard datasets to fail dramatically. Although extra training on this data generally improves model performance on just that type of data, transferring that learning to unseen examples is still partial at best. This work evaluates the failures of state-of-the-art models on existing adversarial datasets that test different linguistic phenomena, and finds that even though the models perform similarly on MNLI, they differ greatly in their robustness to these attacks. In particular, we find syntax-related attacks to be particularly effective across all models, so we provide a fine-grained analysis and comparison of model performance on those examples. We draw conclusions about the value of model size and multi-task learning beyond comparing their standard test set performance, and provide suggestions for more effective training data.
A Scale Mixture-Based Stochastic Model of Surface EMG Signals With Variable Variances ; Objective: Surface electromyogram (EMG) signals have typically been assumed to follow a Gaussian distribution. However, the presence of non-Gaussian signals associated with muscle activity has been reported in recent studies, and there is no general model of the distribution of EMG signals that can explain both non-Gaussian and Gaussian distributions within a unified scheme. Methods: In this paper, we describe the formulation of a non-Gaussian EMG model based on a scale mixture distribution. In the model, an EMG signal at a certain time follows a Gaussian distribution, and its variance is handled as a random variable that follows an inverse gamma distribution. Accordingly, the probability distribution of EMG signals is assumed to be a mixture of Gaussians with the same mean but different variances. The EMG variance distribution is estimated via marginal likelihood maximization. Results: Experiments involving nine participants revealed that the proposed model provides a better fit to recorded EMG signals than conventional EMG models. It was also shown that variance distribution parameters may reflect underlying motor unit activity. Conclusion: This study proposed a scale mixture distribution-based stochastic EMG model capable of representing changes in non-Gaussianity associated with muscle activity. A series of experiments demonstrated the validity of the model and highlighted the relationship between the variance distribution and muscle force. Significance: The proposed model helps to clarify conventional wisdom regarding the probability distribution of surface EMG signals within a unified scheme.
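A small sketch of the generative structure described above, assuming the standard normal-inverse-gamma construction: conditionally Gaussian samples whose variance is drawn from an inverse gamma distribution, which marginally gives a heavy-tailed (Student-t-like) distribution. The shape and scale values are illustrative, and the paper's marginal-likelihood estimation procedure is not reproduced; a simple Student-t fit stands in for it here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Scale-mixture model: the variance is itself random, drawn per sample from an
# inverse gamma distribution; conditionally the signal is zero-mean Gaussian.
a, b = 3.0, 2.0                       # illustrative shape / scale of the variance prior
n = 100_000
variances = stats.invgamma(a, scale=b).rvs(n, random_state=rng)
x = rng.normal(0.0, np.sqrt(variances))

# Marginally this construction is a scaled Student-t with 2a degrees of freedom,
# one way to see how the model captures non-Gaussian (heavy-tailed) behaviour.
df_theory = 2 * a
df_fit, loc_fit, scale_fit = stats.t.fit(x, floc=0.0)
print(f"theoretical df = {df_theory:.1f}, fitted df = {df_fit:.1f}")
print(f"excess kurtosis of samples: {stats.kurtosis(x):.2f} (Gaussian would be 0)")
```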
Active learning in the geometric block model ; The geometric block model is a recently proposed generative model for random graphs that is able to capture the inherent geometric properties of many community detection problems, providing more accurate characterizations of practical community structures compared with the popular stochastic block model. Galhotra et al. recently proposed a motif-counting algorithm for unsupervised community detection in the geometric block model that is proved to be near-optimal. They also characterized the regimes of the model parameters for which the proposed algorithm can achieve exact recovery. In this work, we initiate the study of active learning in the geometric block model. That is, we are interested in the problem of exactly recovering the community structure of random graphs following the geometric block model under arbitrary model parameters, by possibly querying the labels of a limited number of chosen nodes. We propose two active learning algorithms that combine the idea of motif counting with two different label query policies. Our main contribution is to show that sampling the labels of a vanishingly small fraction of nodes (sublinear in the total number of nodes) is sufficient to achieve exact recovery in the regimes under which the state-of-the-art unsupervised method fails. We validate the superior performance of our algorithms via numerical simulations on both real and synthetic datasets.
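To give a feel for the motif-counting idea (not the paper's algorithm or its query policies), here is a toy sketch: in a geometric block model, the endpoints of an intra-community edge tend to share many common neighbors, so a simple common-neighbor count already separates edge types. The graph below is a small, invented 1-D geometric block model with illustrative radii.

```python
import numpy as np
from itertools import combinations

def common_neighbor_counts(adj):
    """For every edge (i, j), count the common neighbors of i and j.

    In a geometric block model, same-community endpoints tend to share many
    neighbors, so thresholding this motif count separates intra- from
    inter-community edges (a simplified version of motif counting)."""
    counts = {}
    n = adj.shape[0]
    for i, j in combinations(range(n), 2):
        if adj[i, j]:
            counts[(i, j)] = int(np.dot(adj[i], adj[j]))
    return counts

# tiny synthetic GBM-like graph: latent positions on a circle, two communities,
# intra-community connection radius larger than the inter-community radius
rng = np.random.default_rng(3)
n = 60
labels = np.array([0] * 30 + [1] * 30)
pos = rng.uniform(0, 1, n)
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, 1 - dist)                 # circular distance
r_in, r_out = 0.20, 0.08                          # illustrative radii
same = labels[:, None] == labels[None, :]
adj = ((same & (dist < r_in)) | (~same & (dist < r_out))).astype(int)
np.fill_diagonal(adj, 0)

counts = common_neighbor_counts(adj)
intra = [c for (i, j), c in counts.items() if labels[i] == labels[j]]
inter = [c for (i, j), c in counts.items() if labels[i] != labels[j]]
print("mean motif count, intra vs inter:",
      round(np.mean(intra), 1), round(np.mean(inter), 1))
```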
A long short-term memory embedding for hybrid uplifted reduced order models ; In this paper, we introduce an uplifted reduced order modeling (UROM) approach through the integration of standard projection-based methods with long short-term memory (LSTM) embedding. Our approach has three modeling layers or components. In the first layer, we utilize an intrusive projection approach to model the dynamics represented by the largest modes. The second layer consists of an LSTM model to account for residuals beyond this truncation. This closure layer refers to the process of including the residual effect of the discarded modes in the dynamics of the largest scales. However, the feasibility of generating a low-rank approximation tails off for higher Kolmogorov n-width systems due to the underlying nonlinear processes. The third, uplifting layer, called super-resolution, addresses this limited representation issue by expanding the span into a larger number of modes utilizing the versatility of LSTM. Therefore, our model integrates a physics-based projection model with a memory-embedded LSTM closure and an LSTM-based super-resolution model. In several applications, we exploit the use of the Grassmann manifold to construct UROM for unseen conditions. We perform numerical experiments using the Burgers and Navier-Stokes equations with quadratic nonlinearity. Our results show the robustness of the proposed approach in building reduced order models for parameterized systems and confirm the improved trade-off between accuracy and efficiency.
Bayesian Shape Invariant Model for Latent Growth Curve with Time-Invariant Covariates ; In the attention-deficit hyperactivity disorder (ADHD) study, children are prescribed different stimulant medications. The height measurements are recorded longitudinally, along with the medication time. Differences among the patients are captured by the parameters of the Superimposition by Translation and Rotation (SITAR) model, which uses three subject-specific parameters to estimate their deviation from the mean growth curve. In this paper, we generalize the SITAR model in a Bayesian way with time-invariant covariates. The time-invariant model allows us to predict latent growth factors. Since patients suffer from a common disease, they usually exhibit a similar pattern, and it is natural to build a nonlinear model that is shape invariant. The model is semiparametric, where the population time curve is modeled with a natural cubic spline. The original shape invariant growth curve model, motivated by epidemiological research on the evolution of pubertal heights over time, fits the underlying shape function for height over age and estimates subject-specific deviations from this curve in terms of size, tempo, and velocity using maximum likelihood. The usefulness of the model is illustrated in the ADHD study. Further, we demonstrate the effect of stimulant medications on pubertal growth by gender.
Development, Demonstration, and Validation of Data-driven Compact Diode Models for Circuit Simulation and Analysis ; Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing compact model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feed-forward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these data-driven compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit's behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.
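A minimal sketch of the first modeling option named above, a table-based interpolation compact model, built here on synthetic Shockley-equation data as a stand-in for the laboratory measurements; the saturation current and ideality factor are illustrative, and the GMLS and neural-network options are not shown.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Synthetic stand-in for measured diode data: Shockley equation with
# illustrative saturation current and ideality factor (not lab values).
I_s, n_ideal, V_T = 1e-12, 1.8, 0.02585
v_meas = np.linspace(-1.0, 0.8, 60)
i_meas = I_s * (np.exp(v_meas / (n_ideal * V_T)) - 1.0)

# Table-based compact model: interpolating log-current keeps the table
# monotone and well-behaved over many decades of current.
log_i = np.log(i_meas + 2 * I_s)                  # shift keeps the argument positive
table_model = PchipInterpolator(v_meas, log_i)

def diode_current(v):
    """Evaluate the table-based compact model at bias v (volts)."""
    return np.exp(table_model(v)) - 2 * I_s

# quick sanity check against the generating equation at unseen biases
v_test = np.array([0.31, 0.55, 0.72])
i_true = I_s * (np.exp(v_test / (n_ideal * V_T)) - 1.0)
rel_err = np.abs(diode_current(v_test) - i_true) / np.abs(i_true)
print("max relative error:", float(rel_err.max()))
```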
Projection-based Active Gaussian Process Regression for Pareto Front Modeling ; Pareto front (PF) modeling is essential in decision-making problems across all domains, such as economics, medicine or engineering. In the Operations Research literature, this task has been addressed based on multi-objective optimization algorithms. However, without learning models for the PF, these methods cannot examine whether a newly provided point lies on the PF or not. In this paper, we reconsider the task from a data mining perspective. A novel projection-based active Gaussian process regression (PaGPR) method is proposed for efficient PF modeling. First, PaGPR chooses a series of projection spaces with dimensionalities ranging from low to high. Next, in each projection space, a Gaussian process regression (GPR) model is trained to represent the constraint that the PF should satisfy in that space. Moreover, in order to improve modeling efficacy and stability, an active learning framework has been developed by exploiting the uncertainty information obtained from the GPR models. Different from all existing methods, the proposed PaGPR method can not only provide a generative PF model, but also quickly examine whether a provided point lies on the PF or not. The numerical results demonstrate that, compared to state-of-the-art passive learning methods, the proposed PaGPR method can achieve higher modeling accuracy and stability.
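A small sketch of the active-GPR ingredient in one projection space, assuming a simple maximum-predictive-variance query rule: a GPR model is fitted to a 1-D stand-in for the PF constraint, and the next point is always queried where the posterior uncertainty is largest. The constraint function, kernel, and acquisition rule are illustrative; the paper's multi-space projection scheme is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def pf_constraint(x):
    """1-D stand-in for the constraint a Pareto front must satisfy
    in one projection space, e.g. f2 = 1 - sqrt(f1) for a convex front."""
    return 1.0 - np.sqrt(x)

rng = np.random.default_rng(0)
pool = np.linspace(0.0, 1.0, 201)[:, None]          # candidate query locations
train_idx = list(rng.choice(len(pool), size=3, replace=False))

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
for step in range(10):
    X = pool[train_idx]
    gpr.fit(X, pf_constraint(X).ravel())
    # Active-learning step: query the candidate with the largest predictive
    # uncertainty (the same information the paper exploits, in simplified form).
    _, std = gpr.predict(pool, return_std=True)
    std[train_idx] = 0.0                             # do not re-query known points
    train_idx.append(int(np.argmax(std)))

gpr.fit(pool[train_idx], pf_constraint(pool[train_idx]).ravel())
_, std = gpr.predict(pool, return_std=True)
print("max posterior std after active sampling: %.4f" % std.max())
```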
Nonlinear matter power spectrum without screening dynamics modelling in f(R) gravity ; The halo model is a physically intuitive method for modelling the nonlinear power spectrum, especially for alternatives to the standard Lambda-CDM models. In this paper, we examine the Sheth-Tormen barrier formula adopted in the previous CHAM method (2018MNRAS.476L..65H). As an example, we model the ellipsoidal collapse of top-hat dark matter haloes in f(R) gravity. A good agreement between the Sheth-Tormen formula and our result is achieved. The relative difference in the ellipsoidal collapse barrier is less than or equal to 1.6%. Furthermore, we verify that, for the F4 and F5 cases of Hu-Sawicki f(R) gravity, the screening mechanism does not play a crucial role in the nonlinear power spectrum modelling up to k ~ 1 h/Mpc. We compare two versions of modified gravity modelling, namely with/without screening. We find that treating the effective Newton constant as a constant, G_eff = (4/3) G_N, is acceptable. The scale dependence of the gravitational coupling is sub-relevant. The resulting spectra in F4 and F5 are in 0.1% agreement with the previous CHAM results. The published code is accelerated significantly. Finally, we compare our halo model prediction with N-body simulation. We find that the general spectrum profiles agree qualitatively. However, via the halo model approach, there exists a systematic underestimation of the matter power spectrum in the comoving wavenumber range between 0.3 h/Mpc and 3 h/Mpc. These scales overlap with the transition from the two-halo-term-dominated regime to the one-halo-term-dominated regime.
Time- and frequency-limited H2-optimal model order reduction of bilinear control systems ; In time- and frequency-limited model order reduction, a reduced-order approximation of the original high-order model is sought to ensure superior accuracy in some desired time and frequency intervals. We first consider the time-limited H2-optimal model order reduction problem for bilinear control systems and derive first-order optimality conditions that a local optimum reduced-order model should satisfy. We then propose a heuristic algorithm that generates a reduced-order model which tends to achieve these optimality conditions. The frequency-limited and the time-limited H2 pseudo-optimal model reduction problems are also considered, wherein we restrict our focus to constructing a reduced-order model that satisfies a subset of the respective optimality conditions for the local optimum. Two new algorithms are proposed that enforce two out of four optimality conditions on the reduced-order model upon convergence. The algorithms are tested on three numerical examples to validate the theoretical results presented in the paper. The numerical results confirm the efficacy of the proposed algorithms.
WRF Simulation, Model Sensitivity, and Analysis of the December 2013 New England Ice Storm ; Ice storms pose significant damage risk to electric utility infrastructure. In an attempt to improve storm response and minimize costs, energy companies have supported the development of ice accretion forecasting techniques utilizing meteorological output from numerical weather prediction (NWP) models. The majority of scientific literature in this area focuses on the application of NWP models, such as the Weather Research and Forecasting (WRF) model, to ice storm case studies, but such analyses tend to provide little verification of output fidelity prior to use. This study evaluates the performance of WRF in depicting the 21-23 December 2013 New England ice storm at the surface and in vertical profile. A series of sensitivity tests are run using eight planetary boundary layer (PBL) physics parameterizations, three reanalysis datasets, two vertical level configurations, and with and without grid nudging. Simulated values of precipitation, temperature, wind speed, and wind direction are validated against surface and radiosonde observations at several station locations across the northeastern U.S. and southeastern Canada. The results show that, while the spatially and temporally averaged statistics for near-surface variables are consistent with those of select ice-storm case studies, near-surface variables are highly sensitive to the model configuration when examined at the station level. No single model configuration produces the most robust solution for all variables or station locations, although one scheme generally yields model output with the least realism. In all, we find that careful model sensitivity testing and extensive validation are necessary components for minimizing model-based biases in simulations of ice storms.
Routine pattern discovery and anomaly detection in individual travel behavior ; Discovering patterns and detecting anomalies in individual travel behavior is a crucial problem in both research and practice. In this paper, we address this problem by building a probabilistic framework to model individual spatiotemporal travel behavior data (e.g., trip records and trajectory data). We develop a two-dimensional latent Dirichlet allocation (LDA) model to characterize the generative mechanism of the spatiotemporal trip records of each traveler. This model introduces two separate factor matrices, for the spatial dimension and the temporal dimension respectively, and uses a two-dimensional core structure at the individual level to effectively model the joint interactions and complex dependencies. This model can efficiently summarize travel behavior patterns on both the spatial and temporal dimensions from very sparse trip sequences in an unsupervised way. In this way, complex travel behavior can be modeled as a mixture of representative and interpretable spatiotemporal patterns. By applying the trained model to future, unseen spatiotemporal records of a traveler, we can detect her behavior anomalies by scoring those observations using perplexity. We demonstrate the effectiveness of the proposed modeling framework on a real-world license plate recognition (LPR) dataset. The results confirm the advantage of statistical learning methods in modeling sparse individual travel behavior data. This type of pattern discovery and anomaly detection application can provide useful insights for traffic monitoring, law enforcement, and individual travel behavior profiling.
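A simplified sketch of the anomaly-scoring step described above, using a standard (one-dimensional) LDA as a stand-in for the paper's two-dimensional model: a topic model is fitted to one traveler's day-by-day trip counts and unseen days are scored by perplexity. The synthetic trip-count matrix and the zone-hour binning are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic stand-in for one traveler's history: each row is a day, each column
# a (zone, hour) bin, and entries are trip counts.  Routine days mix two patterns.
n_days, n_bins = 200, 48
pattern_a = rng.dirichlet(np.ones(n_bins) * 0.2)      # e.g. commute-like days
pattern_b = rng.dirichlet(np.ones(n_bins) * 0.2)      # e.g. weekend-like days
mix = rng.beta(0.5, 0.5, size=(n_days, 1))
day_probs = mix * pattern_a + (1 - mix) * pattern_b
X = np.vstack([rng.multinomial(20, p) for p in day_probs])

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Anomaly scoring: an unseen day drawn from a completely different pattern
# should receive a much higher perplexity than a routine day.
routine_day = rng.multinomial(20, day_probs[0])[None, :]
odd_day = rng.multinomial(20, rng.dirichlet(np.ones(n_bins) * 0.2))[None, :]
print("perplexity routine: %.1f  anomalous: %.1f"
      % (lda.perplexity(routine_day), lda.perplexity(odd_day)))
```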
Complete dimensional collapse in the continuum limit of a delayed SEIQR network model with separable distributed infectivity ; We take up a recently proposed compartmental SEIQR model with delays, ignore loss of immunity in the context of a fast pandemic, extend the model to a network structured on infectivity, and consider the continuum limit of the same with a simple separable interaction model for the infectivities beta. Numerical simulations show that the evolving dynamics of the network is effectively captured by a single scalar function of time, regardless of the distribution of beta in the population. The continuum limit of the network model allows a simple derivation of the simpler model, which is a single scalar delay differential equation (DDE), wherein the variation in beta appears through an integral closely related to the moment generating function of u = sqrt(beta). If the first few moments of u exist, the governing DDE can be expanded in a series that shows a direct correspondence with the original compartmental DDE with a single beta. Even otherwise, the new scalar DDE can be solved using either numerical integration over u at each time step, or the analytical integral if it is available in some useful form. Our work provides a new academic example of complete dimensional collapse, ties an underlying continuum model for a pandemic to a simpler-seeming compartmental model, and will hopefully lead to new analyses of continuum models for epidemics.
A study on Cubic Galileon Gravity Using N-body Simulations ; We use N-body simulations to study structure formation in the Cubic Galileon Gravity model, in which, along with the usual kinetic and potential terms, we also have a higher-derivative self-interaction term. We find that the large scale structure provides a unique constraining power for this model. The matter power spectrum, halo mass function, galaxy-galaxy weak lensing signal, marked density power spectrum, as well as counts-in-cells, are measured. The simulations show that there are fewer massive halos in the Cubic Galileon Gravity model than in the corresponding Lambda-CDM model, and the marked density power spectra in these two models differ by more than 10%. Furthermore, the Cubic Galileon model shows significant differences in voids compared to Lambda-CDM. The number of low density cells is far higher in the Cubic Galileon model than in the Lambda-CDM model. Therefore, it would be interesting to put constraints on this model using future large scale structure observations, especially in void regions.
The Effect of the Multi-Layer Text Summarization Model on the Efficiency and Relevancy of the Vector Space-based Information Retrieval ; The massive upload of text on the internet creates a huge inverted index in information retrieval (IR) systems, which hurts their efficiency. The purpose of this research is to measure the effect of the Multi-Layer Similarity model of automatic text summarization on building an informative and condensed inverted index in IR systems. To achieve this purpose, we summarized a considerable number of documents using the Multi-Layer Similarity model, and we built the inverted index from the automatic summaries generated by this model. A series of experiments were held to test the performance in terms of efficiency and relevancy. The experiments include comparisons with three existing text summarization models: the Jaccard Coefficient model, the Vector Space model, and the Latent Semantic Analysis model. The experiments examined three groups of queries with manual and automatic relevancy assessment. The positive effect of the Multi-Layer Similarity model on the efficiency of the IR system was clear, without noticeable loss in the relevancy results. However, the evaluation showed that the traditional statistical models without semantic investigation failed to improve the information retrieval efficiency. Compared with previous publications that addressed the use of summaries as a source of the index, the relevancy assessment of our work was higher, and the Multi-Layer Similarity retrieval constructed an inverted index that was 58% smaller than the inverted index of the main corpus.
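To make the indexing mechanism concrete, here is a toy sketch of building an inverted index from summaries instead of full documents; a trivial lead-sentence extractor stands in for the Multi-Layer Similarity summarizer, and the two sample documents are invented, so the size reduction shown is purely illustrative.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term.strip(".,;:")].add(doc_id)
    return index

def lead_summary(text, n_sentences=1):
    """Trivial extractive stand-in for the Multi-Layer Similarity summarizer:
    keep only the first n sentences of each document."""
    return ". ".join(text.split(". ")[:n_sentences])

docs = {
    1: "Inverted indexes map terms to documents. They grow with corpus size. "
       "Large indexes slow retrieval.",
    2: "Automatic summarization keeps the informative sentences. "
       "Shorter documents yield smaller indexes.",
}
full_index = build_inverted_index(docs)
summary_index = build_inverted_index({i: lead_summary(t) for i, t in docs.items()})
print("terms in full index:", len(full_index))
print("terms in summary index:", len(summary_index))
```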
Eta-pairing states as true scars in an extended Hubbard Model ; The eta-pairing states are a set of exactly known eigenstates of the Hubbard model on hypercubic lattices, first discovered by Yang (Phys. Rev. Lett. 63, 2144 (1989)). These states are not many-body scar states in the Hubbard model because they occupy unique symmetry sectors defined by the so-called eta-pairing SU(2) symmetry. We study an extended Hubbard model with bond-charge interactions, popularized by Hirsch (Physica C 158, 326 (1989)), where the eta-pairing states survive without the eta-pairing symmetry and become true scar states. We also discuss similarities between the eta-pairing states and the exact scar towers in the spin-1 XY model found by Schecter and Iadecola (Phys. Rev. Lett. 123, 147201 (2019)), and systematically arrive at all nearest-neighbor terms that preserve such scar towers in 1D. We also generalize these terms to arbitrary bipartite lattices. Our study of the spin-1 XY model also leads us to several new scarred models, including a spin-1/2 J1-J2 model with Dzyaloshinskii-Moriya interaction, in realistic quantum magnet settings in 1D and 2D.
Learning Context-Based Nonlocal Entropy Modeling for Image Compression ; The entropy of the codes usually serves as the rate loss in recent learned lossy image compression methods. Precise estimation of the probabilistic distribution of the codes plays a vital role in the performance. However, existing deep learning based entropy modeling methods generally assume the latent codes are statistically independent or depend on some side information or local context, which fails to take the global similarity within the context into account and thus hinders accurate entropy estimation. To address this issue, we propose a nonlocal operation for context modeling by employing the global similarity within the context. Specifically, we first introduce proxy similarity functions and spatial masks to handle the missing reference problem in context modeling. Then, we combine the local and the global context via a nonlocal attention block and employ it in masked convolutional networks for entropy modeling. The entropy model is further adopted as the rate loss in a joint rate-distortion optimization to guide the training of the analysis transform and the synthesis transform network in the transform coding framework. Considering that the width of the transforms is essential for training low-distortion models, we finally introduce a U-Net block in the transforms to increase the width with manageable memory consumption and time complexity. Experiments on the Kodak and Tecnick datasets demonstrate the superiority of the proposed context-based nonlocal attention block in entropy modeling and the U-Net block in low-distortion compression against existing image compression standards and recent deep image compression models.
A framework for probabilistic weather forecast postprocessing across models and lead times using machine learning ; Forecasting the weather is an increasingly data-intensive exercise. Numerical Weather Prediction (NWP) models are becoming more complex, with higher resolutions, and there are increasing numbers of different models in operation. While the forecasting skill of NWP models continues to improve, the number and complexity of these models pose a new challenge for the operational meteorologist: how should the information from all available models, each with their own unique biases and limitations, be combined in order to provide stakeholders with well-calibrated probabilistic forecasts to use in decision making? In this paper, we use a road surface temperature example to demonstrate a three-stage framework that uses machine learning to bridge the gap between sets of separate forecasts from NWP models and the 'ideal' forecast for decision support: probabilities of future weather outcomes. First, we use Quantile Regression Forests to learn the error profile of each numerical model, and use these to apply empirically derived probability distributions to forecasts. Second, we combine these probabilistic forecasts using quantile averaging. Third, we interpolate between the aggregate quantiles in order to generate a full predictive distribution, which we demonstrate has properties suitable for decision support. Our results suggest that this approach provides an effective and operationally viable framework for the cohesive postprocessing of weather forecasts across multiple models and lead times to produce a well-calibrated probabilistic output.
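A compact sketch of the three stages described above on invented data, assuming gradient boosting with a quantile loss as a readily available stand-in for Quantile Regression Forests; the two synthetic "NWP models", their biases, and the quantile levels are all illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]

# Synthetic stand-in: two NWP models forecasting road surface temperature,
# each with its own bias and noise; y is the observed temperature.
n = 2000
truth = rng.normal(5.0, 4.0, n)
nwp = {"model_a": truth + 1.0 + rng.normal(0, 1.5, n),
       "model_b": truth - 0.5 + rng.normal(0, 2.5, n)}
y = truth + rng.normal(0, 0.5, n)

# Stage 1: learn each model's error profile per quantile (QRF in the paper;
# quantile-loss gradient boosting used here as a substitute).
new_forecast = {"model_a": np.array([[7.2]]), "model_b": np.array([[5.1]])}
per_model_quantiles = {}
for name, fc in nwp.items():
    preds = []
    for q in quantiles:
        gbr = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=100)
        gbr.fit(fc.reshape(-1, 1), y)
        preds.append(gbr.predict(new_forecast[name])[0])
    per_model_quantiles[name] = np.array(preds)

# Stage 2: combine the calibrated forecasts by averaging quantile-by-quantile.
combined = np.mean(list(per_model_quantiles.values()), axis=0)
combined = np.sort(combined)                 # guard against quantile crossing

# Stage 3: interpolate between the aggregate quantiles to obtain a full
# predictive CDF usable for decision support (evaluated here on a fine grid).
grid = np.linspace(combined.min() - 2, combined.max() + 2, 200)
cdf = np.interp(grid, combined, quantiles, left=0.0, right=1.0)
print("P(temperature below 5 degC) ~ %.2f" % np.interp(5.0, grid, cdf))
```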
Plasma sheet thinning due to loss of near-Earth magnetotail plasma ; A one-dimensional model for thinning of the Earth's plasma sheet (J. K. Chao et al., Planet. Space Sci. 25, 703 (1977)) according to the Current Disruption (CD) model of auroral breakup is extended to two dimensions. A rarefaction wave, which is a signature component of the CD model, is generated with an initial disturbance. In the 1D gas model, the rarefaction wave propagates tailward at the sound velocity and is assumed to cause thinning. Extending to a 2D gas model of a simplified plasma sheet configuration, the rarefaction wave is weakened, and the thinning ceases to propagate. Extending further to a 2D plasma model by adding magnetic field into the lobes, the rarefaction wave is quickly lost in the plasma sheet recompression, but the plasma sheet thinning is still present and propagates independently at a slower velocity than the 1D model suggests. This shows that the dynamics of plasma sheet thinning may be dominated by sheet-lobe interactions that are absent from the 1D model and may not support the behaviour assumed by the CD model.
Exploring Quality and Generalizability in Parameterized Neural Audio Effects ; Deep neural networks have shown promise for music audio signal processing applications, often surpassing prior approaches, particularly as end-to-end models in the waveform domain. Yet results to date have tended to be constrained by low sample rates, noise, narrow domains of signal types, and/or lack of parameterized controls (i.e., knobs), making their suitability for professional audio engineering workflows still lacking. This work expands on prior research published on modeling nonlinear time-dependent signal processing effects associated with music production by means of a deep neural network, one which includes the ability to emulate the parameterized settings you would see on an analog piece of equipment, with the goal of eventually producing commercially viable, high quality audio, i.e., 44.1 kHz sampling rate at 16-bit resolution. The results in this paper highlight progress in modeling these effects through architecture and optimization changes, towards increasing computational efficiency, lowering signal-to-noise ratio, and extending to a larger variety of nonlinear audio effects. Toward these ends, the strategies employed involved a three-pronged approach: model speed, model accuracy, and model generalizability. Most of the presented methods provide marginal or no increase in output accuracy over the original model, with the exception of dataset manipulation. We found that limiting the audio content of the dataset, for example using datasets of just a single instrument, provided a significant improvement in model accuracy over models trained on more general datasets.
Indexing Data on the Web: A Comparison of Schema-level Indices for Data Search (Extended Technical Report) ; Indexing the Web of Data offers many opportunities, in particular, to find and explore data sources. One major design decision when indexing the Web of Data is to find a suitable index model, i.e., how to index and summarize data. Various efforts have been conducted to develop specific index models for a given task. With each index model designed, implemented, and evaluated independently, it remains difficult to judge whether an approach generalizes well to another task, set of queries, or dataset. In this work, we empirically evaluate six representative index models with unique feature combinations. Among them is a new index model incorporating inferencing over RDFS and owl:sameAs. We implement all index models for the first time into a single, stream-based framework. We evaluate variations of the index models considering subgraphs of size 0, 1, and 2 hops on two large, real-world datasets. We evaluate the quality of the indices regarding the compression ratio, summarization ratio, and F1-score denoting the approximation quality of the stream-based index computation. The experiments reveal huge variations in compression ratio, summarization ratio, and approximation quality for different index models, queries, and datasets. However, we observe meaningful correlations in the results that help to determine the right index model for a given task, type of query, and dataset.