Hidden Sector Dark Matter and the Galactic Center Gamma-Ray Excess: A Closer Look ; Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. We also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.
A general framework for data-driven uncertainty quantification under complex input dependencies using vine copulas ; Systems subject to uncertain inputs produce uncertain responses. Uncertainty quantification (UQ) deals with the estimation of statistics of the system response, given a computational model of the system and a probabilistic model of its inputs. In engineering applications it is common to assume that the inputs are mutually independent or coupled by a Gaussian or elliptical dependence structure (copula). In this paper we overcome such limitations by modelling the dependence structure of multivariate inputs as vine copulas. Vine copulas are models of multivariate dependence built from simpler pair-copulas. The vine representation is flexible enough to capture complex dependencies. This paper formalises the framework needed to build vine copula models of multivariate inputs and to combine them with virtually any UQ method. The framework allows for a fully automated, data-driven inference of the probabilistic input model based on available input data. The procedure is exemplified on two finite element models of truss structures, both subject to inputs with non-Gaussian dependence structures. For each case, we analyse the moments of the model response using polynomial chaos expansions, and perform a structural reliability analysis to calculate the probability of failure of the system using the first-order reliability method and importance sampling. Reference solutions are obtained by Monte Carlo simulation. The results show that, while the Gaussian assumption yields biased statistics, the vine copula representation achieves significantly more precise estimates, even when its structure needs to be fully inferred from a limited amount of observations.
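A minimal sketch of the basic building block described above: a single pair-copula fitted from data by inverting Kendall's tau and used to generate dependent input samples. The data, the Clayton family choice, and the assumption of positive dependence are illustrative only; a full vine model chains many such pair-copulas over a tree structure, which is not shown here.

    import numpy as np
    from scipy import stats

    # Hypothetical bivariate input data with non-Gaussian marginals and dependence
    rng = np.random.default_rng(0)
    x = rng.gumbel(size=1000)
    y = np.exp(0.5 * x) + rng.normal(scale=0.2, size=1000)

    # 1) Probability-integral transform of each margin (empirical CDF -> uniforms)
    u = stats.rankdata(x) / (len(x) + 1)
    v = stats.rankdata(y) / (len(y) + 1)

    # 2) Fit a Clayton pair-copula by inverting Kendall's tau: theta = 2*tau/(1-tau)
    tau, _ = stats.kendalltau(u, v)
    theta = 2 * tau / (1 - tau)          # assumes positive dependence (tau > 0)

    # 3) Sample new dependent uniforms from the fitted copula (conditional inversion)
    u_new = rng.uniform(size=1000)
    p = rng.uniform(size=1000)
    v_new = ((p ** (-theta / (theta + 1)) - 1) * u_new ** (-theta) + 1) ** (-1 / theta)

    # 4) Map back to the physical scale with the empirical marginal quantiles,
    #    then push the samples through the computational model for UQ
    x_new = np.quantile(x, u_new)
    y_new = np.quantile(y, v_new)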
Response Ranking with Deep Matching Networks and External Knowledge in Informationseeking Conversation Systems ; Intelligent personal assistant systems with either textbased or voicebased conversational interfaces are becoming increasingly popular around the world. Retrievalbased conversation models have the advantages of returning fluent and informative responses. Most existing studies in this area are on open domain chitchat conversations or task transaction oriented conversations. More research is needed for informationseeking conversations. There is also a lack of modeling external knowledge beyond the dialog utterances among current conversational models. In this paper, we propose a learning framework on the top of deep neural matching networks that leverages external knowledge for response ranking in informationseeking conversation systems. We incorporate external knowledge into deep neural models with pseudorelevance feedback and QA correspondence knowledge distillation. Extensive experiments with three informationseeking conversation data sets including both open benchmarks and commercial data show that, our methods outperform various baseline methods including several deep text matching models and the stateoftheart method on response selection in multiturn conversations. We also perform analysis over different response types, model variations and ranking examples. Our models and research findings provide new insights on how to utilize external knowledge with deep neural models for response selection and have implications for the design of the next generation of informationseeking conversation systems.
An Unsupervised Clustering-Based Short-Term Solar Forecasting Methodology Using Multi-Model Machine Learning Blending ; Solar forecasting accuracy is affected by weather conditions, and weather-aware forecasting models are expected to improve the performance. However, classifying different forecasting tasks using only meteorological weather categorization may be neither available nor reliable. In this paper, an unsupervised clustering-based (UC-based) solar forecasting methodology is developed for short-term (1-hour-ahead) global horizontal irradiance (GHI) forecasting. This methodology consists of three parts: GHI time series unsupervised clustering, pattern recognition, and UC-based forecasting. The daily GHI time series is first clustered by an Optimized Cross-validated ClUsteRing (OCCUR) method, which determines the optimal number of clusters and best clustering results. Then, support vector machine pattern recognition (SVM-PR) is adopted to recognize the category of a certain day using the first few hours' data in the forecasting stage. GHI forecasts are generated by the most suitable models in different clusters, which are built by a two-layer Machine learning based Multi-Model (M3) forecasting framework. The developed UC-based methodology is validated by using one year of data with six solar features. Numerical results show that (i) UC-based models outperform non-UC all-in-one models with the same M3 architecture by approximately 20%; (ii) M3-based models also outperform the single-algorithm machine learning (SAML) models by approximately 20%.
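A minimal sketch of the cluster-then-forecast workflow using generic stand-ins: plain k-means in place of OCCUR, an SVM classifier in place of SVM-PR, and a single gradient-boosting regressor per cluster in place of the two-layer M3 blend. The synthetic GHI profiles and all parameter choices are illustrative assumptions, not the paper's data or tuned models.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.ensemble import GradientBoostingRegressor

    # days: hypothetical daily GHI profiles, shape (n_days, 24 hours)
    rng = np.random.default_rng(1)
    days = np.clip(rng.normal(500, 150, size=(365, 24)) * np.sin(np.linspace(0, np.pi, 24)), 0, None)

    # 1) Unsupervised clustering of daily GHI time series (stand-in for OCCUR)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(days)

    # 2) Pattern recognition: assign a day to a cluster from its first few hours (stand-in for SVM-PR)
    clf = SVC().fit(days[:, :6], labels)

    # 3) Per-cluster forecasting models: predict hour t from the previous 3 hours
    models = {}
    for c in range(3):
        d = days[labels == c]
        X = np.concatenate([d[:, t - 3:t] for t in range(3, 23)])
        y = np.concatenate([d[:, t] for t in range(3, 23)])
        models[c] = GradientBoostingRegressor().fit(X, y)

    # Forecast: route a new day to its cluster, then use that cluster's model
    new_day = days[0]
    c = clf.predict(new_day[:6].reshape(1, -1))[0]
    ghi_next = models[c].predict(new_day[3:6].reshape(1, -1))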
Low-energy electrons in GRB afterglow models ; Observations of gamma-ray burst (GRB) afterglows have long provided the most detailed information about the origin of this spectacular phenomenon. The model that is most commonly used to extract physical properties of the event from the observations is the relativistic fireball model, where ejected material moving at relativistic speeds creates a shock wave when it interacts with the surrounding medium. Electrons are accelerated in the shock wave, generating the observed synchrotron emission through interactions with the magnetic field in the downstream medium. It is usually assumed that the accelerated electrons follow a simple power-law distribution in energy between specific energy boundaries and that no electrons exist outside these boundaries. This work explores the consequences of adding a low-energy power-law segment to the electron distribution whose energy contributes insignificantly to the total energy budget of the distribution. The low-energy electrons have a significant impact on the radio emission, providing synchrotron absorption and emission at these long wavelengths. Shorter wavelengths are affected through the normalization of the distribution. The new model is used to analyze the light curves of GRB 990510 and the resulting parameters are compared to a model without the extra electrons. The quality of the fit and the best-fit parameters are significantly affected by the additional model component. The new component is in one case found to strongly affect the X-ray light curves, showing how changes to the model at radio frequencies can affect light curves at other frequencies through changes in best-fit model parameters.
Market SelfLearning of Signals, Impact and Optimal Trading Invisible Hand Inference with Free Energy ; We present a simple model of a nonequilibrium selforganizing market where asset prices are partially driven by investment decisions of a boundedrational agent. The agent acts in a stochastic market environment driven by various exogenous alpha signals, agent's own actions via market impact, and noise. Unlike traditional agentbased models, our agent aggregates all traders in the market, rather than being a representative agent. Therefore, it can be identified with a boundedrational component of the market itself, providing a particular implementation of an Invisible Hand market mechanism. In such setting, market dynamics are modeled as a fictitious selfplay of such boundedrational marketagent in its adversarial stochastic environment. As rewards obtained by such selfplaying market agent are not observed from market data, we formulate and solve a simple model of such market dynamics based on a neuroscienceinspired Bounded Rational Information Theoretic Inverse Reinforcement Learning BRITIRL. This results in effective asset price dynamics with a nonlinear mean reversion which in our model is generated dynamically, rather than being postulated. We argue that our model can be used in a similar way to the BlackLitterman model. In particular, it represents, in a simple modeling framework, market views of common predictive signals, market impacts and implied optimal dynamic portfolio allocations, and can be used to assess values of private signals. Moreover, it allows one to quantify a marketimplied optimal investment strategy, along with a measure of market rationality. Our approach is numerically light, and can be implemented using standard offtheshelf software such as TensorFlow.
Trusted Neural Networks for Safety-Constrained Autonomous Control ; We propose Trusted Neural Network (TNN) models, which are deep neural network models that satisfy safety constraints critical to the application domain. We investigate different mechanisms for incorporating rule-based knowledge in the form of first-order logic constraints into a TNN model, where rules that encode safety are accompanied by weights indicating their relative importance. This framework allows the TNN model to learn from knowledge available in the form of data as well as logical rules. We propose multiple approaches for solving this problem: (a) a multi-headed model structure that allows a trade-off between satisfying logical constraints and fitting training data in a unified training framework, and (b) creating a constrained optimization problem and solving it in dual formulation by posing a new constrained loss function and using a proximal gradient descent algorithm. We demonstrate the efficacy of our TNN framework through experiments using the open-source TORCS [BernhardCAA15] 3D simulator for self-driving cars. Experiments using our first approach of a multi-headed TNN model, on a dataset generated by a customized version of TORCS, show that (1) adding safety constraints to a neural network model results in increased performance and safety, and (2) the improvement increases with increasing importance of the safety constraints. Experiments were also performed using the second approach of a proximal algorithm for constrained optimization; they demonstrate how the proposed method ensures that (1) the overall TNN model satisfies the constraints even when the training data violates some of the constraints, and (2) the proximal gradient descent algorithm on the constrained objective converges faster than the unconstrained version.
Joint Autoregressive and Hierarchical Priors for Learned Image Compression ; Recent models for learned image compression are based on autoencoders, learning approximately invertible mappings from pixels to a quantized latent representation. These are combined with an entropy model, a prior on the latent representation that can be used with standard arithmetic coding algorithms to yield a compressed bitstream. Recently, hierarchical entropy models have been introduced as a way to exploit more structure in the latents than simple fully factorized priors, improving compression performance while maintaining end-to-end optimization. Inspired by the success of autoregressive priors in probabilistic generative models, we examine autoregressive, hierarchical, as well as combined priors as alternatives, weighing their costs and benefits in the context of image compression. While it is well known that autoregressive models come with a significant computational penalty, we find that in terms of compression performance, autoregressive and hierarchical priors are complementary and, together, exploit the probabilistic structure in the latents better than all previous learned models. The combined model yields state-of-the-art rate-distortion performance, providing a 15.8% average reduction in file size over the previous state-of-the-art method based on deep learning, which corresponds to a 59.8% size reduction over JPEG, more than 35% reduction compared to WebP and JPEG2000, and bitstreams 8.4% smaller than BPG, the current state-of-the-art image codec. To the best of our knowledge, our model is the first learning-based method to outperform BPG on both PSNR and MS-SSIM distortion metrics.
RichardsonGaudin models and broken integrability ; This thesis presents an introduction to the class of RichardsonGaudin integrable models, with special focus on the Bethe ansatz wave function, and investigates ways of applying the properties of RichardsonGaudin models both in and out of integrability. A framework is outlined for the numerical and theoretical treatment of these systems, exposing a duality allowing the Bethe equations to be solved numerically. This is extended to the calculation of inner products and correlation functions. Using this framework, the influence of particle exchange on the Bethe ansatz is discussed, after which it is shown how the Bethe ansatz is able to accurately model wave functions of nonintegrable models in two different settings. First, a variational approach is outlined for stationary models where integrabilitybreaking perturbations are explicitly introduced. Second, an alternative way of breaking integrability is through the introduction of dynamics and periodic driving, where it is shown how integrability can be used to model the resulting Floquet manybody resonances. Throughout this work, it is shown how the clearcut structure and relatively large freedom in RichardsonGaudin models makes them ideal for an investigation of the general principles of integrability, as well as being a perfect testing ground for the development of new quantum manybody techniques beyond integrability.
Multi-feature universe: large parameter space cosmology and the swampland ; Under the belief that the universe should be multi-feature and informative, we employ a model-by-model comparison method to explore the possibly largest upper bound on the swampland constant c. Considering the interacting quintessence dark energy as the comparison model, we constrain the large parameter space interacting dark energy model, a 12-parameter extension to the $\Lambda$CDM cosmology, in light of current observations. We obtain the largest $2\sigma$ ($3\sigma$) bound so far, $c \lesssim 1.62 \, (1.94)$, which would allow the existence of a number of string theory models of dark energy such as 11-dimensional supergravity with double-exponential potential, $O(16) \times O(16)$ heterotic string and some Type II string compactifications. For inflationary models with concave potential, we find the $2\sigma$ ($3\sigma$) bound $c \lesssim 0.13 \, (0.14)$, which is still in strong tension with the string-based expectation $c \sim \mathcal{O}(1)$. However, combining Planck primordial non-Gaussianity with inflation constraints, it is interesting that the Dirac-Born-Infeld inflation with concave potential gives the $2\sigma$ ($3\sigma$) bound $c \lesssim 0.53 \, (0.58)$, which is now in a modest tension with the swampland conjecture. Using the Bayesian evidence as the model selection tool, it is very surprising that our 18-parameter multi-feature cosmology is extremely strongly favored over the $\Lambda$CDM model.
Double Diffusion Encoding Prevents Degeneracy in Parameter Estimation of Biophysical Models in Diffusion MRI ; Purpose Biophysical tissue models are increasingly used in the interpretation of diffusion MRI dMRI data, with the potential to provide specific biomarkers of brain microstructural changes. However, the general Standard Model has recently shown that model parameter estimation from dMRI data is illposed unless very strong magnetic gradients are used. We analyse this issue for the Neurite Orientation Dispersion and Density Imaging with Diffusivity Assessment NODDIDA model and demonstrate that its extension from Single Diffusion Encoding SDE to Double Diffusion Encoding DDE solves the illposedness and increases the accuracy of the parameter estimation. Methods We analyse theoretically the cumulant expansion up to fourth order in b of SDE and DDE signals. Additionally, we perform in silico experiments to compare SDE and DDE capabilities under similar noise conditions. Results We prove analytically that DDE provides invariant information nonaccessible from SDE, which makes the NODDIDA parameter estimation injective. The in silico experiments show that DDE reduces the bias and mean square error of the estimation along the whole feasible region of 5D model parameter space. Conclusions DDE adds additional information for estimating the model parameters, unexplored by SDE, which is enough to solve the degeneracy in the NODDIDA model parameter estimation.
Discovering time-varying aeroelastic models of a long-span suspension bridge from field measurements by sparse identification of nonlinear dynamical systems ; We develop data-driven dynamical models of the nonlinear aeroelastic effects on a long-span suspension bridge from sparse, noisy sensor measurements which monitor the bridge. Using the sparse identification of nonlinear dynamics (SINDy) algorithm, we are able to identify parsimonious, time-varying dynamical systems that capture vortex-induced vibration (VIV) events in the bridge. Thus we are able to posit new, data-driven models highlighting the aeroelastic interaction of the bridge structure with VIV events. The bridge dynamics are shown to have distinct, time-dependent modes of behavior, thus requiring parametric models to account for the diversity of dynamics. Our method generates hitherto unknown bridge-wind interaction models that go beyond current theoretical and computational descriptions. Our proposed method for real-time monitoring and model discovery allows us to move our model predictions beyond lab theory to practical engineering design, which has the potential to assess bad engineering configurations that are susceptible to deleterious bridge-wind interactions. With the rise of real-time sensor networks on major bridges, our model discovery methods can enhance an engineer's ability to assess the nonlinear aeroelastic interactions of the bridge with its wind environment.
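A minimal sketch of the regression at the core of SINDy, sequentially thresholded least squares over a candidate function library. The quadratic library, threshold value, and iteration count below are illustrative assumptions; the bridge application described above additionally uses time-varying/parametric libraries and sensor preprocessing that are not shown.

    import numpy as np

    def sindy(X, dXdt, threshold=0.1, n_iter=10):
        """Sequentially thresholded least squares (the core SINDy regression).

        X    : (n_samples, n_states) state measurements
        dXdt : (n_samples, n_states) estimated time derivatives
        Returns the sparse coefficient matrix Xi so that dXdt ~ Theta(X) @ Xi.
        """
        # Candidate function library: constant, linear, and quadratic terms
        cols = [np.ones((X.shape[0], 1)), X]
        for i in range(X.shape[1]):
            for j in range(i, X.shape[1]):
                cols.append((X[:, i] * X[:, j])[:, None])
        Theta = np.hstack(cols)

        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(n_iter):
            small = np.abs(Xi) < threshold           # prune small coefficients
            Xi[small] = 0.0
            for k in range(dXdt.shape[1]):           # refit the surviving terms
                big = ~small[:, k]
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
        return Xi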
Dark sector unifications: dark matter-phantom energy, dark matter - constant w dark energy, dark matter-dark energy-dark matter ; The paper brings a novel approach to the unification of dark matter and dark energy in terms of a cosmic fluid. A model is introduced in which the cosmic fluid speed of sound squared is defined as a function of its equation of state (EoS) parameter. It is shown how the logarithmic part of this function results in dynamical regimes previously not observed in cosmic fluid models. It is shown that in a particular dynamical regime the model behaves as a unification of dark matter and phantom dark energy. Further, it is shown that the model may describe dark matter-dark energy unification in which dark energy asymptotically behaves as dark energy with a constant EoS parameter larger than -1. In a specific parameter regime the unified fluid model also reproduces global expansion similar to the $\Lambda$CDM model, with the fluid speed of sound vanishing for small scale factor values and being small, or even vanishing, for large scale factor values. Finally, it is shown how the model may be instrumental in describing the cosmic fluid dark matter-dark energy-dark matter unification. Physical constraints on model parameters yielding such transient dark energy behavior are obtained.
Comprehensive review of models and methods for inferences in biochemical reaction networks ; Key processes in biological and chemical systems are described by networks of chemical reactions. From molecular biology to biotechnology applications, computational models of reaction networks are used extensively to elucidate their nonlinear dynamics. Model dynamics are crucially dependent on parameter values, which are often estimated from observations. Over the past decade, interest in parameter and state estimation in models of biochemical reaction networks (BRNs) has grown considerably. Statistical inference problems are also encountered in many other tasks, including model calibration, discrimination, identifiability and checking, as well as optimum experiment design, sensitivity analysis, and bifurcation analysis. The aim of this review paper is to explore the developments of the past decade to understand which BRN models are commonly used in the literature, and for what inference tasks and inference methods. An initial collection of about 700 publications (excluding books) in computational biology and chemistry was screened to select over 260 research papers and 20 graduate theses concerning estimation problems in BRNs. The paper selection was performed as text mining using scripts to automate the search for relevant keywords and terms. The outcome is a set of tables revealing the level of interest in different inference tasks and methods for given models in the literature, as well as recent trends. In addition, a brief survey of general estimation strategies is provided to facilitate understanding of the estimation methods that are used for BRNs. Our findings indicate that many combinations of models, tasks and methods are still relatively sparse, representing new research opportunities to explore those that have not been considered, perhaps for a good reason. The paper concludes by discussing future research directions, including research problems which cannot be directly deduced from the presented tables.
Unitary Extension of Exotic Massive 3D Gravity from Bigravity ; We obtain a new 3D gravity model from two copies of parityodd EinsteinCartan theories. Using Hamiltonian analysis, we demonstrate that the only local degrees of freedom are two massive spin2 modes. Unitarity of the model in antide Sitter and Minkowski backgrounds can be satisfied for vast choices of the parameters without finetuning. The recent exotic massive 3D gravity model arises as a limiting case of the new model. We also show that there exist trajectories on the parameter space of the new model which cross the boundary between unitary and nonunitary regions. At the crossing point, one massive graviton decouples resulting in a unitary model with just one bulk degree of freedom but two positive central charges at odds with the usual expectation that the critical model has at least one vanishing central charge. Given the fact that a suitable nonrelativistic version of bigravity has been used as an effective theory for gapped spin2 fractional quantum Hall states, our model may have interesting applications in condensed matter physics.
Learning higherorder sequential structure with cloned HMMs ; Variable order sequence modeling is an important problem in artificial and natural intelligence. While overcomplete Hidden Markov Models HMMs, in theory, have the capacity to represent longterm temporal structure, they often fail to learn and converge to local minima. We show that by constraining HMMs with a simple sparsity structure inspired by biology, we can make it learn variable order sequences efficiently. We call this model cloned HMM CHMM because the sparsity structure enforces that many hidden states map deterministically to the same emission state. CHMMs with over 1 billion parameters can be efficiently trained on GPUs without being severely affected by the credit diffusion problem of standard HMMs. Unlike ngrams and sequence memoizers, CHMMs can model temporal dependencies at arbitrarily long distances and recognize contexts with 'holes' in them. Compared to Recurrent Neural Networks and their Long ShortTerm Memory extensions LSTMs, CHMMs are generative models that can natively deal with uncertainty. Moreover, CHMMs return a higherorder graph that represents the temporal structure of the data which can be useful for community detection, and for building hierarchical models. Our experiments show that CHMMs can beat ngrams, sequence memoizers, and LSTMs on characterlevel language modeling tasks. CHMMs can be a viable alternative to these methods in some tasks that require variable order sequence modeling and the handling of uncertainty.
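A toy sketch of the structural idea behind cloned HMMs: each observed symbol owns a block of hidden "clone" states, so emission is a deterministic many-to-one map and only the transition matrix carries learned structure. The sizes, the random transition matrix, and the forward-algorithm scoring below are illustrative; actual CHMM training uses EM with sparse structure at a much larger scale.

    import numpy as np

    n_symbols, n_clones = 4, 3
    n_states = n_symbols * n_clones
    state_to_symbol = np.repeat(np.arange(n_symbols), n_clones)  # deterministic emission map

    rng = np.random.default_rng(0)
    T = rng.random((n_states, n_states))       # transition matrix (learned with EM in practice)
    T /= T.sum(axis=1, keepdims=True)

    def log_likelihood(obs):
        """Forward algorithm; the emission term reduces to an indicator mask."""
        alpha = np.where(state_to_symbol == obs[0], 1.0 / n_clones, 0.0)
        ll = 0.0
        for o in obs[1:]:
            alpha = (alpha @ T) * (state_to_symbol == o)
            norm = alpha.sum()
            ll += np.log(norm)
            alpha /= norm
        return ll

    print(log_likelihood([0, 1, 1, 2, 3, 0]))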
Explainable AI for Trees From Local Explanations to Global Understanding ; Treebased machine learning models such as random forests, decision trees, and gradient boosted trees are the most popular nonlinear predictive models used in practice today, yet comparatively little attention has been paid to explaining their predictions. Here we significantly improve the interpretability of treebased models through three main contributions 1 The first polynomial time algorithm to compute optimal explanations based on game theory. 2 A new type of explanation that directly measures local feature interaction effects. 3 A new set of tools for understanding global model structure based on combining many local explanations of each prediction. We apply these tools to three medical machine learning problems and show how combining many highquality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to i identify high magnitude but low frequency nonlinear mortality risk factors in the general US population, ii highlight distinct population subgroups with shared risk characteristics, iii identify nonlinear interaction effects among risk factors for chronic kidney disease, and iv monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of treebased machine learning models, these improvements to their interpretability have implications across a broad set of domains.
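A minimal sketch of the workflow the abstract describes, assuming the open-source shap package (which implements this line of tree-explanation work) and a scikit-learn forest; the synthetic data, model choice, and feature count are illustrative assumptions.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical tabular data: 500 samples, 8 risk features, with one interaction
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = X[:, 0] * X[:, 1] + 2 * X[:, 2] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    # Polynomial-time, game-theoretic attributions for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)                 # one local explanation per prediction
    interactions = explainer.shap_interaction_values(X)    # pairwise local interaction effects

    # Global structure from many local explanations: mean absolute attribution per feature
    global_importance = np.abs(shap_values).mean(axis=0)
    print(global_importance)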
Leveraging Bayesian Analysis To Improve Accuracy of Approximate Models ; We focus on improving the accuracy of an approximate model of a multiscale dynamical system that uses a set of parameterdependent terms to account for the effects of unresolved or neglected dynamics on resolved scales. We start by considering various methods of calibrating and analyzing such a model given a few wellresolved simulations. After presenting results for various point estimates and discussing some of their shortcomings, we demonstrate a the potential of hierarchical Bayesian analysis to uncover previously unanticipated physical dependencies in the approximate model, and b how such insights can then be used to improve the model. In effect parametric dependencies found from the Bayesian analysis are used to improve structural aspects of the model. While we choose to illustrate the procedure in the context of a closure model for buoyancydriven, variabledensity turbulence, the statistical nature of the approach makes it more generally applicable. Towards addressing issues of increased computational cost associated with the procedure, we demonstrate the use of a neural network based surrogate in accelerating the posterior sampling process and point to recent developments in variational inference as an alternative methodology for greatly mitigating such costs. We conclude by suggesting that modern validation and uncertainty quantification techniques such as the ones we consider have a valuable role to play in the development and improvement of approximate models.
Cosmological constraints and phenomenology of a beyond-Horndeski model ; We study observational constraints on a specific dark energy model in the framework of Gleyzes-Langlois-Piazza-Vernizzi theories, which extends the Galileon ghost condensate (GGC) to the domain of beyond-Horndeski theories. In this model, we show that the Planck cosmic microwave background (CMB) data, combined with datasets of baryon acoustic oscillations, supernovae type Ia, and redshift-space distortions, give the tight upper bound $\alpha_{\rm H0} \le {\cal O}(10^{-6})$ on today's beyond-Horndeski (BH) parameter $\alpha_{\rm H}$. This is mostly attributed to the shift of CMB acoustic peaks induced by the early-time changes of cosmological background and perturbations arising from the dominance of $\alpha_{\rm H}$ in the dark energy density. In comparison to the $\Lambda$-cold-dark-matter ($\Lambda$CDM) model, our BH model suppresses the large-scale integrated-Sachs-Wolfe (ISW) tail of CMB temperature anisotropies due to the existence of cubic Galileons, and it modifies the small-scale CMB power spectrum because of the different background evolution. We find that the BH model considered fits the data better than $\Lambda$CDM according to the $\chi^2$ statistics, yet the deviance information criterion (DIC) slightly favors the latter. Given the fact that our BH model with $\alpha_{\rm H}=0$ (i.e., the GGC model) is favored over $\Lambda$CDM even by the DIC, there are no particular signatures for the departure from Horndeski theories in current observations.
On the Inference of Applying Gaussian Process Modeling to a Deterministic Function ; Gaussian process modeling is a standard tool for building emulators for computer experiments, which are usually used to study deterministic functions, for example, a solution to a given system of partial differential equations. This work investigates applying Gaussian process modeling to a deterministic function from prediction and uncertainty quantification perspectives, where the Gaussian process model is misspecified. Specifically, we consider the case where the underlying function is fixed and from a reproducing kernel Hilbert space generated by some kernel function, and the same kernel function is used in the Gaussian process modeling as the correlation function for prediction and uncertainty quantification. While upper bounds and the optimal convergence rate of prediction in Gaussian process modeling have been extensively studied in the literature, a comprehensive exploration of convergence rates and a theoretical study of uncertainty quantification are lacking. We prove that, if one uses maximum likelihood estimation to estimate the variance in Gaussian process modeling, under different choices of the regularization parameter value, the predictor is not optimal and/or the confidence interval is not reliable. In particular, lower bounds of the prediction error under different choices of the regularization parameter value are obtained. The results indicate that, if one directly applies Gaussian process modeling to a fixed function, the reliability of the confidence interval and the optimality of the predictor cannot be achieved at the same time.
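A minimal sketch of the setting being analyzed: a Gaussian process emulator with a fixed correlation function applied to a deterministic test function, with the process variance estimated by maximum likelihood and the resulting confidence interval computed. The kernel, length scale, nugget (regularization parameter), and test function are illustrative assumptions; the paper's results concern how the choice of that regularization parameter affects the reliability of exactly such intervals.

    import numpy as np

    def rbf(a, b, ell=0.2):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    f = lambda x: np.sin(6 * x) + x            # fixed deterministic function
    x_train = np.linspace(0, 1, 12)
    y_train = f(x_train)
    x_test = np.linspace(0, 1, 200)

    nugget = 1e-8                              # regularization parameter
    K = rbf(x_train, x_train) + nugget * np.eye(len(x_train))
    K_inv_y = np.linalg.solve(K, y_train)

    # Maximum likelihood estimate of the process variance, given the fixed correlation
    sigma2_hat = y_train @ K_inv_y / len(x_train)

    # Predictive mean and model-based variance at test points
    k_star = rbf(x_test, x_train)
    mean = k_star @ K_inv_y
    quad = np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
    var = sigma2_hat * (1.0 + nugget - quad)

    # 95% confidence interval whose actual coverage is what the paper scrutinizes
    half_width = 1.96 * np.sqrt(np.maximum(var, 0))
    lower, upper = mean - half_width, mean + half_width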
Hybrid EMTTS Simulation Strategies to Study High Bandwidth MMCBased HVdc Systems ; Modular multilevel converters MMCs are widely used in the design of modern highvoltage direct current HVdc transmission system. Highfidelity dynamic models of MMCsbased HVdc system require small simulation time step and can be accurately modeled in electromagnetic transient EMT simulation programs. The EMT program exhibits slow simulation speed and limitation on the size of the model and brings certain challenges to test the highfidelity HVdc model in systemlevel simulations. This paper presents the design and implementation of a hybrid simulation framework, which enables the cosimulation of the EMT model of AtlantaOrlando HVdc line and the transient stability TS model of the entire Eastern Interconnection system. This paper also introduces the implementation of two highfidelity HVdc line models simulated at different time steps and discusses a dedicated method for sizing the buffer areas on both sides of the HVdc line. The simulation results of the two HVdc models with different sizes of buffer areas are presented and compared.
Value of Information Analysis via Active Learning and Knowledge Sharing in Error-Controlled Adaptive Kriging ; Large uncertainties in many phenomena have challenged decision making. Collecting additional information to better characterize reducible uncertainties is among the decision alternatives. Value of information (VoI) analysis is a mathematical decision framework that quantifies the expected potential benefits of new data and assists with the optimal allocation of resources for information collection. However, analysis of VoI is computationally very costly because of the underlying Bayesian inference, especially for equality-type information. This paper proposes the first surrogate-based framework for VoI analysis. Instead of modeling the limit state functions describing events of interest for decision making, which is commonly pursued in surrogate model-based reliability methods, the proposed framework models system responses. This approach affords sharing equality-type information from observations among surrogate models to update likelihoods of multiple events of interest. Moreover, two knowledge sharing schemes called model and training points sharing are proposed to most effectively take advantage of the knowledge offered by costly model evaluations. Both schemes are integrated with an error rate-based adaptive training approach to efficiently generate accurate Kriging surrogate models. The proposed VoI analysis framework is applied to an optimal decision-making problem involving load testing of a truss bridge. While state-of-the-art methods based on importance sampling and adaptive Kriging Monte Carlo simulation are unable to solve this problem, the proposed method is shown to offer accurate and robust estimates of VoI with a limited number of model evaluations. Therefore, the proposed method facilitates the application of VoI for complex decision problems.
Nonchaotic Models and Predictability of the Users' Volume Dynamics on Internet Platforms ; Internet platforms' traffic defines important characteristics of platforms, such as the price of services, advertisements, and speed of operations. The traffic is usually estimated with the help of traditional time series models (ARIMA, Holt-Winters, etc.), which are successful in short-term extrapolations of sufficiently denoised signals. We propose a dynamical system approach for the modeling of the underlying process. The method allows one to discuss the global qualitative properties of the dynamics' phase portrait and long-term tendencies. The proposed models are nonchaotic, the long-term prediction is reliable, and it explains the fundamental properties and trend of various types of digital platforms. Because of these properties, we call the flow of these models the trending flow. Utilizing the new approach, we construct two-sided platform models for the volume of users that can be applied to Amazon.com, Homes.mil or Wikipedia.org. We consider a generalization of the two-sided platforms' models to multi-sided platforms. If the equations are cooperative, the flow is trending, and it helps to understand the properties of the platforms and reliably predicts the long-term behavior. We show how to reconstruct the governing differential equations from time series data. The external effects are modeled as the system's parameters and initial conditions.
DataDriven Symbol Detection via ModelBased Machine Learning ; The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a datadriven framework to symbol detection design which combines machine learning ML and modelbased algorithms. In this hybrid approach, wellknown channelmodelbased algorithms such as the Viterbi method, BCJR detection, and multipleinput multipleoutput MIMO soft interference cancellation SIC are augmented with MLbased algorithms to remove their channelmodeldependence, allowing the receiver to learn to implement these algorithms solely from data. The resulting datadriven receivers are most suitable for systems where the underlying channel models are poorly understood, highly complex, or do not wellcapture the underlying physics. Our approach is unique in that it only replaces the channelmodelbased computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can yield nearoptimal performance of modelbased algorithms without knowing the exact channel inputoutput statistical relationship and in the presence of channel state information uncertainty.
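A minimal sketch of the hybrid idea the abstract describes, applied to Viterbi detection: the dynamic-programming recursion is kept intact and only the channel-model-based branch metric is replaced by a learned module. The per-state means, the fully connected toy trellis, and the stub metric below are illustrative assumptions; in practice the metric would come from a small neural network trained on pilot data.

    import numpy as np

    n_states = 4  # toy trellis over 4 channel states

    def learned_neg_log_likelihood(y, state):
        # Stand-in for a learned module outputting -log p(y | state);
        # here a Gaussian around assumed per-state means.
        means = np.array([-3.0, -1.0, 1.0, 3.0])
        return 0.5 * (y - means[state]) ** 2

    def viterbi(received):
        cost = np.zeros(n_states)
        backptrs = []
        for y in received:
            new_cost = np.full(n_states, np.inf)
            ptr = np.zeros(n_states, dtype=int)
            for prev in range(n_states):
                for nxt in range(n_states):          # fully connected toy trellis
                    c = cost[prev] + learned_neg_log_likelihood(y, nxt)
                    if c < new_cost[nxt]:
                        new_cost[nxt], ptr[nxt] = c, prev
            cost = new_cost
            backptrs.append(ptr)
        # Backtrack the most likely state sequence
        state = int(np.argmin(cost))
        path = [state]
        for ptr in reversed(backptrs[1:]):
            state = int(ptr[state])
            path.append(state)
        return path[::-1]

    print(viterbi([-2.9, 1.2, 0.8, -1.1]))   # detected state per received sample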
Supersymmetry method for interacting chaotic and disordered systems the SYK model ; The nonlinear supermatrix sigma model is widely used to understand the physics of Anderson localization and the level statistics in noninteracting disordered electron systems. In contrast to the general belief that the supersymmetry method applies only to systems of noninteracting particles, we adopt this approach to the disorder averaging in the interacting models. In particular, we apply supersymmetry to study the SachdevYeKitaev SYK model, where the disorder averaging has so far been performed only within the replica approach. We use a slightly modified, timereversal invariant version of the SYK model and perform calculations in realtime. As a demonstration of how the supersymmetry method works, we derive saddle point equations. In the semiclassical limit, we show that the results are in agreement with those found using the replica technique. We also develop the formally exact superbosonized representation of the SYK model. In the latter, the supersymmetric theory of original fermions and their superpartner bosons is reformulated as a model of unconstrained collective excitations. We argue that the supersymmetry description of the model paves the way for precise calculations in SYKlike models used in condensed matter, gravity, and high energy physics.
Three-dimensional core-collapse supernova simulations of massive and rotating progenitors ; We present three-dimensional simulations of the core-collapse of massive rotating and non-rotating progenitors performed with the general relativistic neutrino hydrodynamics code CoCoNuT-FMT and analyse their explosion properties and gravitational-wave signals. The progenitor models include Wolf-Rayet stars with initial helium star masses of 39 $M_\odot$ and 20 $M_\odot$, and an 18 $M_\odot$ red supergiant. The 39 $M_\odot$ model is a rapid rotator, whereas the two other progenitors are non-rotating. Both Wolf-Rayet models produce healthy neutrino-driven explosions, whereas the red supergiant model fails to explode. By the end of the simulations, the explosion energies have already reached $1.1\times 10^{51}$ erg and $0.6\times 10^{51}$ erg for the 39 $M_\odot$ and 20 $M_\odot$ model, respectively. The explosions produce neutron stars of relatively high mass, but with modest kicks. Due to the alignment of the bipolar explosion geometry with the rotation axis, there is a relatively small misalignment of $30^\circ$ between the spin and the kick in the 39 $M_\odot$ model. In terms of gravitational-wave signals, the massive and rapidly rotating 39 $M_\odot$ progenitor stands out by large gravitational-wave amplitudes that would make it detectable out to almost 2 Mpc by the Einstein Telescope. For this model, we find that rotation significantly changes the dependence of the characteristic gravitational-wave frequency of the f-mode on the proto-neutron star parameters compared to the non-rotating case. The other two progenitors have considerably smaller detection distances, despite significant low-frequency emission in the most sensitive frequency band of current gravitational-wave detectors due to the standing accretion shock instability in the 18 $M_\odot$ model.
Search for dijet resonances in events with an isolated charged lepton using $\sqrt{s} = 13$ TeV proton-proton collision data collected by the ATLAS detector ; A search for dijet resonances in events with at least one isolated charged lepton is performed using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collision data recorded by the ATLAS detector at the LHC. The dijet invariant-mass ($m_{jj}$) distribution constructed from events with at least one isolated electron or muon is searched in the region $0.22 < m_{jj} < 6.3$ TeV for excesses above a smoothly falling background from Standard Model processes. Triggering based on the presence of a lepton in the event reduces limitations imposed by minimum transverse momentum thresholds for triggering on jets. This approach allows smaller dijet invariant masses to be probed than in inclusive dijet searches, targeting a variety of new-physics models, for example ones in which a new state is produced in association with a leptonically decaying W or Z boson. No statistically significant deviation from the Standard Model background hypothesis is found. Limits on contributions from generic Gaussian signals with widths ranging from that determined by the detector resolution up to 15% of the resonance mass are obtained for dijet invariant masses ranging from 0.25 TeV to 6 TeV. Limits are also set in the context of several scenarios beyond the Standard Model, such as the Sequential Standard Model, a technicolor model, a charged Higgs boson model and a simplified Dark Matter model.
Dynamical Models of Stock Prices Based on Technical Trading Rules Part II Analysis of the Models ; In Part II of this paper, we concentrate our analysis on the price dynamical model with the moving average rules developed in Part I of this paper. By decomposing the excessive demand function, we reveal that it is the interplay between trendfollowing and contrarian actions that generates the price chaos, and give parameter ranges for the price series to change from divergence to chaos and to oscillation. We prove that the price dynamical model has an infinite number of equilibrium points but all these equilibrium points are unstable. We demonstrate the shortterm predictability of the return volatility and derive the detailed formula of the Lyapunov exponent as function of the model parameters. We show that although the price is chaotic, the volatility converges to some constant very quickly at the rate of the Lyapunov exponent. We extract the formula relating the converged volatility to the model parameters based on MonteCarlo simulations. We explore the circumstances under which the returns show independency and illustrate in details how the independency index changes with the model parameters. Finally, we plot the strange attractor and return distribution of the chaotic price model to illustrate the complex structure and fattailed distribution of the returns.
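A toy sketch of the kind of numerical experiment described above: a moving-average-driven price map combining a trend-following and a contrarian term, with the largest Lyapunov exponent estimated from the divergence of two nearby trajectories. The demand function, parameter values, and window below are illustrative stand-ins, not the Part I model, and the paper itself derives the Lyapunov exponent analytically rather than numerically.

    import numpy as np

    def simulate(p0, alpha=1.2, beta=0.9, n=400, m_short=5, m_long=20):
        """Toy log-price map with trend-following and contrarian excess demand
        built from short and long moving averages (a stand-in demand function)."""
        p = list(p0)
        for _ in range(n):
            short, long_ = np.mean(p[-m_short:]), np.mean(p[-m_long:])
            demand = np.tanh(alpha * (short - long_)) - np.tanh(beta * (p[-1] - long_))
            p.append(p[-1] + demand)
        return np.array(p)

    rng = np.random.default_rng(0)
    p0 = list(rng.normal(size=20))

    # Largest Lyapunov exponent estimated from the divergence of two trajectories
    # started a distance eps apart, before the separation saturates.
    eps = 1e-9
    a = simulate(p0)
    b = simulate(p0[:-1] + [p0[-1] + eps])
    sep = np.abs(a - b)[20:120]
    lyap = np.polyfit(np.arange(len(sep)), np.log(sep + 1e-300), 1)[0]
    print("estimated largest Lyapunov exponent per step:", lyap)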
AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models ; Inspired by the recently introduced framework of AND/OR search spaces for graphical models, we propose to augment Multi-Valued Decision Diagrams (MDDs) with AND nodes, in order to capture function decomposition structure and to extend these compiled data structures to general weighted graphical models (e.g., probabilistic models). We present the AND/OR Multi-Valued Decision Diagram (AOMDD), which compiles a graphical model into a canonical form that supports polynomial (e.g., solution counting, belief updating) or constant time (e.g., equivalence of graphical models) queries. We provide two algorithms for compiling the AOMDD of a graphical model. The first is search-based, and works by applying reduction rules to the trace of the memory-intensive AND/OR search algorithm. The second is inference-based and uses a Bucket Elimination schedule to combine the AOMDDs of the input functions via the APPLY operator. For both algorithms, the compilation time and the size of the AOMDD are, in the worst case, exponential in the treewidth of the graphical model, rather than pathwidth as is known for ordered binary decision diagrams (OBDDs). We introduce the concept of semantic treewidth, which helps explain why the size of a decision diagram is often much smaller than the worst case bound. We provide an experimental evaluation that demonstrates the potential of AOMDDs.
Dynamic Hybrid Traffic Flow Modeling ; A flow of moving agents can be observed at different scales. Thus, in traffic modeling, three levels are generally considered the micro, meso and macro levels, representing respectively the interactions between vehicles, groups of vehicles sharing common properties such as a common destination or a common localization and flows of vehicles. Each approach is useful in a given context micro and meso models allow to simulate road networks with complex topologies such as urban area, while macro models allow to develop control strategies to prevent congestion in highways. However, to simulate largescale road networks, it can be interesting to integrate different representations, e.g., micro and macro, in a single model. Existing models share the same limitation connections between levels are fixed a priori and cannot be changed at runtime. Therefore, to be able to observe some emerging phenomena such as congestion formation or to find the exact location of a jam in a large macro section, a dynamic hybrid modeling approach is needed. In 2013 we started the development of a multilevel agentbased simulator called JAMFREE within the ISART project. It allows to simulate large road networks efficiently using a dynamic level of detail. This simulator relies on a multilevel agentbased modeling framework called SIMILAR.
Exploring dependence between categorical variables benefits and limitations of using variable selection within Bayesian clustering in relation to loglinear modelling with interaction terms ; This manuscript is concerned with relating two approaches that can be used to explore complex dependence structures between categorical variables, namely Bayesian partitioning of the covariate space incorporating a variable selection procedure that highlights the covariates that drive the clustering, and loglinear modelling with interaction terms. We derive theoretical results on this relation and discuss if they can be employed to assist loglinear model determination, demonstrating advantages and limitations with simulated and real data sets. The main advantage concerns sparse contingency tables. Inferences from clustering can potentially reduce the number of covariates considered and, subsequently, the number of competing loglinear models, making the exploration of the model space feasible. Variable selection within clustering can inform on marginal independence in general, thus allowing for a more efficient exploration of the loglinear model space. However, we show that the clustering structure is not informative on the existence of interactions in a consistent manner. This work is of interest to those who utilize loglinear models, as well as practitioners such as epidemiologists that use clustering models to reduce the dimensionality in the data and to reveal interesting patterns on how covariates combine.
Bayesian nonparametric comorbidity analysis of psychiatric disorders ; The analysis of comorbidity is an open and complex research field in the branch of psychiatry, where clinical experience and several studies suggest that the relation among the psychiatric disorders may have etiological and treatment implications. In this paper, we are interested in applying latent feature modeling to find the latent structure behind the psychiatric disorders that can help to examine and explain the relationships among them. To this end, we use the large amount of information collected in the National Epidemiologic Survey on Alcohol and Related Conditions NESARC database and propose to model these data using a nonparametric latent model based on the Indian Buffet Process IBP. Due to the discrete nature of the data, we first need to adapt the observation model for discrete random variables. We propose a generative model in which the observations are drawn from a multinomiallogit distribution given the IBP matrix. The implementation of an efficient Gibbs sampler is accomplished using the Laplace approximation, which allows integrating out the weighting factors of the multinomiallogit likelihood model. We also provide a variational inference algorithm for this model, which provides a complementary and less expensive in terms of computational complexity alternative to the Gibbs sampler allowing us to deal with a larger number of data. Finally, we use the model to analyze comorbidity among the psychiatric disorders diagnosed by experts from the NESARC database.
Exploring new models in all detail with SARAH ; I give an overview of the features the Mathematica package SARAH provides to study new models. In general, SARAH can handle a wide range of models beyond the MSSM coming with additional chiral superfields, extra gauge groups, or distinctive features like Dirac gaugino masses. All of these models can be implemented in a compact form in SARAH and are easy to use: SARAH extracts all analytical properties of the given model like two-loop renormalization group equations, tadpole equations, mass matrices and vertices. Also one- and two-loop corrections to tadpoles and self-energies can be obtained. For numerical calculations SARAH can be interfaced to other tools to get the mass spectrum, to check flavour or dark matter constraints, to test the vacuum stability, or to perform collider studies. In particular, the interface to SPheno allows a precise prediction of the Higgs mass in a given model, comparable to MSSM precision, by incorporating the important two-loop corrections. I show in great detail, using the example of the B-L-SSM, how SARAH together with SPheno, HiggsBounds/HiggsSignals, FlavorKit, Vevacious, CalcHep, MicrOmegas, WHIZARD, and MadGraph can be used to study all phenomenological aspects of a model. Even if I concentrate in this manuscript on the analysis of supersymmetric models, most features are also available in the non-supersymmetric case.
Persistence in voting behavior stronghold dynamics in elections ; Influence among individuals is at the core of collective social phenomena such as the dissemination of ideas, beliefs or behaviors, social learning and the diffusion of innovations. Different mechanisms have been proposed to implement interagent influence in social models from the voter model, to majority rules, to the Granoveter model. Here we advance in this direction by confronting the recently introduced Social Influence and Recurrent Mobility SIRM model, that reproduces generic features of voteshares at different geographical levels, with data in the US presidential elections. Our approach incorporates spatial and population diversity as inputs for the opinion dynamics while individuals' mobility provides a proxy for social context, and peer imitation accounts for social influence. The model captures the observed stationary background fluctuations in the voteshares across counties. We study the socalled political strongholds, i.e., locations where the votesshares for a party are systematically higher than average. A quantitative definition of a stronghold by means of persistence in time of fluctuations in the voting spatial distribution is introduced, and results from the US Presidential Elections during the period 19802012 are analyzed within this framework. We compare electoral results with simulations obtained with the SIRM model finding a good agreement both in terms of the number and the location of strongholds. The strongholds duration is also systematically characterized in the SIRM model. The results compare well with the electoral results data revealing an exponential decay in the persistence of the strongholds with time.
Enhanced Gravity Model of trade reconciling macroeconomic and network models ; The structure of the International Trade Network ITN, whose nodes and links represent world countries and their trade relations respectively, affects key economic processes worldwide, including globalization, economic integration, industrial production, and the propagation of shocks and instabilities. Characterizing the ITN via a simple yet accurate model is an open problem. The traditional Gravity Model GM successfully reproduces the volume of trade between connected countries, using macroeconomic properties such as GDP, geographic distance, and possibly other factors. However, it predicts a network with complete or homogeneous topology, thus failing to reproduce the highly heterogeneous structure of the ITN. On the other hand, recent maximumentropy network models successfully reproduce the complex topology of the ITN, but provide no information about trade volumes. Here we integrate these two currently incompatible approaches via the introduction of an Enhanced Gravity Model EGM of trade. The EGM is the simplest model combining the GM with the network approach within a maximumentropy framework. Via a unified and principled mechanism that is transparent enough to be generalized to any economic network, the EGM provides a new econometric framework wherein trade probabilities and trade volumes can be separately controlled by any combination of dyadic and countryspecific macroeconomic variables. The model successfully reproduces both the global topology and the local link weights of the ITN, parsimoniously reconciling the conflicting approaches. It also indicates that the probability that any two countries trade a certain volume should follow a geometric or exponential distribution with an additional point mass at zero volume.
Clustering Fossil from Primordial Gravitational Waves in Anisotropic Inflation ; Inflationary models can correlate small-scale density perturbations with the long-wavelength gravitational waves (GW) in the form of the Tensor-Scalar-Scalar (TSS) bispectrum. This correlation affects the mass distribution in the Universe and leads to the off-diagonal correlations of the density field modes in the form of the quadrupole anisotropy. Interestingly, this effect survives even after the tensor mode decays when it re-enters the horizon, known as the fossil effect. As a result, the off-diagonal correlation function between different Fourier modes of the density fluctuations can be thought of as a way to probe the large-scale GW and the mechanism of inflation behind the fossil effect. Models of single field slow roll inflation generically predict a very small quadrupole anisotropy in TSS, while in models of multiple field inflation this effect can be observable. Therefore this large-scale quadrupole anisotropy can be thought of as a spectroscopy for different inflationary models. In addition, in models of anisotropic inflation there exists a quadrupole anisotropy in the curvature perturbation power spectrum. Here we consider TSS in models of anisotropic inflation and show that the shape of the quadrupole anisotropy is different than in single field models. In addition, in these models the quadrupole anisotropy is projected onto the preferred direction and its amplitude is proportional to $g N_e$, where $N_e$ is the number of e-folds and $g$ is the amplitude of the quadrupole anisotropy in the curvature perturbation power spectrum. We use this correlation function to estimate the large-scale GW as well as the preferred direction, and discuss the detectability of the signal in galaxy surveys like Euclid and 21 cm surveys.
Extending The Lossy Spring-Loaded Inverted Pendulum Model with a Slider-Crank Mechanism ; The Spring-Loaded Inverted Pendulum (SLIP) model has a long history in describing running behavior in animals and humans, and has been used as a design basis for robots capable of dynamic locomotion. Anchoring the SLIP for lossy physical systems resulted in newer models which are extended versions of the original SLIP with viscous damping in the leg. However, such lossy models require an additional mechanism for pumping energy into the system to control the locomotion and to reach a limit cycle. Some studies solved this problem by adding an actively controllable torque actuation at the hip joint, and this actuation has been successfully used in many robotic platforms, such as the popular RHex robot. However, hip torque actuation produces forces on the COM dominantly in the forward direction with respect to the ground, making height control challenging especially at slow speeds. The situation becomes more severe when the horizontal speed of the robot reaches zero, i.e. steady hopping without moving in the horizontal direction, and the system reaches a singularity in which the vertical degree of freedom is completely lost. To this end, we propose an extension of the lossy SLIP model with a slider-crank mechanism, SLIP-SCM, that can generate a stable limit cycle when the body is constrained to the vertical direction. We propose an approximate analytical solution to the nonlinear system dynamics of the SLIP-SCM model to characterize its behavior during the locomotion. Finally, we perform a fixed-point stability analysis on the SLIP-SCM model using our approximate analytical solution and show that the proposed model exhibits stable behavior in our range of interest.
Detecting adaptive evolution in phylogenetic comparative analysis using the OrnsteinUhlenbeck model ; Phylogenetic comparative analysis is an approach to inferring evolutionary process from a combination of phylogenetic and phenotypic data. The last few years have seen increasingly sophisticated models employed in the evaluation of more and more detailed evolutionary hypotheses, including adaptive hypotheses with multiple selective optima and hypotheses with rate variation within and across lineages. The statistical performance of these sophisticated models has received relatively little systematic attention, however. We conducted an extensive simulation study to quantify the statistical properties of a class of models toward the simpler end of the spectrum that model phenotypic evolution using OrnsteinUhlenbeck processes. We focused on identifying where, how, and why these methods break down so that users can apply them with greater understanding of their strengths and weaknesses. Our analysis identifies three key determinants of performance a discriminability ratio, a signaltonoise ratio, and the number of taxa sampled. Interestingly, we find that modelselection power can be high even in regions that were previously thought to be difficult, such as when tree size is small. On the other hand, we find that model parameters are in many circumstances difficult to estimate accurately, indicating a relative paucity of information in the data relative to these parameters. Nevertheless, we note that accurate model selection is often possible when parameters are only weakly identified. Our results have implications for more sophisticated methods inasmuch as the latter are generalizations of the case we study.
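The methods evaluated in this study model trait evolution with Ornstein-Uhlenbeck processes. As a minimal illustration of the process itself (not of the phylogenetic machinery or the simulation study), the sketch below integrates a single OU trajectory with Euler-Maruyama; parameter names and values are generic assumptions.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=2.0, T=10.0, n=1000, seed=0):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

trait = simulate_ou()
print(trait[-1])  # trait value pulled toward the selective optimum mu
```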
Symmetry breaking patterns of the 331 model at finite temperature ; We consider the minimal version of an extension of the standard electroweak model based on the SU(3)_c × SU(3)_L × U(1)_X gauge symmetry (the 3-3-1 model). We analyze the most general potential constructed from three scalars in the triplet representation of SU(3)_L, whose neutral components develop nonzero vacuum expectation values, giving mass to all the model's massive particles. For different choices of parameters, we obtain the particle spectrum for the two symmetry breaking scales: one where the SU(3)_L × U(1)_X group is broken down to SU(2)_L × U(1)_Y, and a lower scale similar to the standard model one. Within the considerations used, we show that the model encodes two first-order phase transitions, respecting the pattern of symmetry restoration. The last transition, corresponding to the standard electroweak one, is found to be very weakly first-order, most likely turning second-order or a crossover in practice. However, the first transition in this model can be strongly first-order, which might happen at a temperature not too high above the second one. We determine the respective critical temperatures for symmetry restoration for the model.
On Complex Valued Convolutional Neural Networks ; Convolutional neural networks CNNs are the cutting edge model for supervised machine learning in computer vision. In recent years CNNs have outperformed traditional approaches in many computer vision tasks such as object detection, image classification and face recognition. CNNs are vulnerable to overfitting, and a lot of research focuses on finding regularization methods to overcome it. One approach is designing task specific models based on prior knowledge. Several works have shown that properties of natural images can be easily captured using complex numbers. Motivated by these works, we present a variation of the CNN model with complex valued input and weights. We construct the complex model as a generalization of the real model. Lack of order over the complex field raises several difficulties both in the definition and in the training of the network. We address these issues and suggest possible solutions. The resulting model is shown to be a restricted form of a real valued CNN with twice the parameters. It is sensitive to phase structure, and we suggest it serves as a regularized model for problems where such structure is important. This suggestion is verified empirically by comparing the performance of a complex and a real network in the problem of cell detection. The two networks achieve comparable results, and although the complex model is hard to train, it is significantly less vulnerable to overfitting. We also demonstrate that the complex network detects meaningful phase structure in the data.
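A common way to realize a complex-valued convolution with real-valued building blocks is to expand (x_re + i x_im) * (k_re + i k_im) into four real convolutions, which is consistent with the abstract's observation that the complex model is a restricted form of a real CNN with twice the parameters. The sketch below illustrates this identity with SciPy; it is a generic illustration, not the authors' network.

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(x_re, x_im, k_re, k_im):
    """Complex 2-D convolution built from four real convolutions:
    (x_re + i x_im) * (k_re + i k_im)."""
    out_re = convolve2d(x_re, k_re, mode="same") - convolve2d(x_im, k_im, mode="same")
    out_im = convolve2d(x_re, k_im, mode="same") + convolve2d(x_im, k_re, mode="same")
    return out_re, out_im

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
re, im = complex_conv2d(x.real, x.imag, k.real, k.imag)
# The real/imaginary decomposition matches a direct complex convolution.
assert np.allclose(re + 1j * im, convolve2d(x, k, mode="same"))
```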
A mechanical erosion model for twophase mass flows ; Erosion, entrainment and deposition are complex and dominant, but yet poorly understood, mechanical processes in geophysical mass flows. Here, we propose a novel, processbased, twophase, erosiondeposition model capable of adequately describing these complex phenomena commonly observed in landslides, avalanches, debris flows and bedload transport. The model is based on the jump in the momentum flux including changes of material and flow properties along the flowbed interface and enhances an existing general twophase mass flow model Pudasaini, 2012. A twophase variably saturated erodible basal morphology is introduced and allows for the evolution of erosiondepositiondepths, incorporating the inherent physical process including momentum and rheological changes of the flowing mixture. By rigorous derivation, we show that appropriate incorporation of the mass and momentum productions or losses in conservative model formulation is essential for the physically correct and mathematically consistent description of erosionentrainmentdeposition processes. We show that mechanically deposition is the reversed process of erosion. We derive mechanically consistent closures for coefficients emerging in the erosion rate models. We prove that effectively reduced friction in erosion is equivalent to the momentum production. With this, we solve the long standing dilemma of mass mobility, and show that erosion enhances the mass flow mobility. The novel enhanced real twophase model reveals some major aspects of the mechanics associated with erosion, entrainment and deposition. The model appropriately captures the emergence and propagation of complex frontal surge dynamics associated with the frontal ambientdrag with erosion.
Type II supernovae Early Light Curves ; Observations of type II supernova early light, from breakout until recombination, can be used to constrain the explosion energy and progenitor properties. Currently available for this purpose are purely analytic models, which are accurate only to within an order of magnitude, and detailed numerical simulations, which are more accurate but must be applied to each event separately. In this paper we derive an analytic model that is calibrated by numerical simulations. This model is much more accurate than previous analytic models, yet it is as simple to use. To derive the model we analyze simulated light curves from numerical explosions of 124 red supergiant progenitors, calculated using the stellar evolution code MESA. We find that although the structure of the progenitors we consider varies, the resulting light curves can be described rather well based only on the explosion energy, ejecta mass and progenitor radius. Our calibrated analytic model, which is based on these three parameters, reproduces the bolometric luminosity to within 25-35% accuracy and the observed temperature to within 15% accuracy, compared with previous analytic models which are indeed found to be accurate only to within an order of magnitude. We also consider deviations of the early time spectrum from blackbody, and find that the Rayleigh-Jeans regime is slightly shallower, roughly L_ν ∝ ν^1.4. This modified spectrum affects the optical/near-UV light curve mostly during the first day, when the typical observed temperature is ≫ 10^4 K. We use our results to study the optical and near-UV early light curves from first light until recombination and briefly discuss what can be learned from current and future observations. Light curves generated using our calibrated model can be downloaded at httpwww.astro.tau.ac.iltomersh.
Interacting fermionic symmetry-protected topological phases in two dimensions ; We classify and construct models for two-dimensional (2D) interacting fermionic symmetry-protected topological (FSPT) phases with general finite Abelian unitary symmetry G_f. To obtain the classification, we couple the FSPT system to a dynamical discrete gauge field with gauge group G_f and study braiding statistics in the resulting gauge theory. Under reasonable assumptions, the braiding statistics data allows us to infer a potentially complete classification of 2D FSPT phases with Abelian symmetry. The FSPT models that we construct are simple stacks of the following two kinds of existing models: (i) free-fermion models and (ii) models obtained through embedding of bosonic symmetry-protected topological (BSPT) phases. Interestingly, using these two kinds of models, we are able to realize almost all FSPT phases in our classification, except for one class. We argue that this exceptional class of FSPT phases can never be realized through models (i) and (ii), and therefore can be thought of as intrinsically interacting and intrinsically fermionic. The simplest example of this class is associated with Z_4^f × Z_4 × Z_4 symmetry. We show that all 2D FSPT phases with a finite Abelian symmetry of the form Z_2^f × G can be realized through the above models (i), or (ii), or a simple stack of them. Finally, we study the stability of BSPT phases when they are embedded into fermionic systems.
Optimal model order reduction with the SteiglitzMcBride method for openloop data ; In system identification, it is often difficult to find a physical intuition to choose a noise model structure. The importance of this choice is that, for the prediction error method PEM to provide asymptotically efficient estimates, the model orders must be chosen according to the true system. However, if only the plant estimates are of interest and the experiment is performed in open loop, the noise model may be overparameterized without affecting the asymptotic properties of the plant. The limitation is that, as PEM suffers in general from nonconvexity, estimating an unnecessarily large number of parameters will increase the chances of getting trapped in local minima. To avoid this, a high order ARX model can first be estimated by least squares, providing nonparametric estimates of the plant and noise model. Then, model order reduction can be used to obtain a parametric model of the plant only. We review existing methods to perform this, pointing out limitations and connections between them. Then, we propose a method that connects favorable properties from the previously reviewed approaches. We show that the proposed method provides asymptotically efficient estimates of the plant with open loop data. Finally, we perform a simulation study, which suggests that the proposed method is competitive with PEM and other similar methods.
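The first stage described above, estimating a high-order ARX model by least squares from open-loop data, can be sketched in a few lines. The snippet below fits a deliberately over-parameterized ARX model to toy data; the model orders, data-generating system, and function name are illustrative assumptions, and the subsequent model-order-reduction step is not shown.

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Least-squares fit of an ARX model:
    y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t]."""
    n = max(na, nb)
    rows = []
    for t in range(n, len(y)):
        rows.append(np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]  # AR coefficients, input coefficients

# Toy data from a first-order plant, then an over-parameterized high-order ARX fit.
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.normal()
a, b = fit_arx(u, y, na=10, nb=10)
print(a[0], b[0])  # leading coefficients close to -0.7 and 0.5
```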
Nested Markov Properties for Acyclic Directed Mixed Graphs ; Conditional independence models associated with directed acyclic graphs (DAGs) may be characterized in at least three different ways: via a factorization, the global Markov property (given by the d-separation criterion), and the local Markov property. Marginals of DAG models also imply equality constraints that are not conditional independences; the well-known "Verma constraint" is an example. Constraints of this type are used for testing edges, and in a computationally efficient marginalization scheme via variable elimination. We show that equality constraints like the "Verma constraint" can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via ordered local and global Markov properties and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We prove that marginal distributions of DAG models lie in this model, and that a set of these constraints given by Tian provides an alternative definition of the model. Finally, we show that the fixing operation used to define the model leads to a particularly simple characterization of identifiable causal effects in hidden variable causal DAG models.
Density Estimation Techniques for Multiscale Coupling of Kinetic Models of the Plasma Material Interface ; In this work we analyze two classes of DensityEstimation techniques which can be used to consistently couple different kinetic models of the plasmamaterial interface, intended as the region of plasma immediately interacting with the first surface layers of a material wall. In particular, we handle the general problem of interfacing a continuum multispecies VlasovPoissonBGK plasma model to discrete surface erosion models. The continuum model solves for the energyangle distributions of the particles striking the surface, which are then driving the surface response. A modification to the classical BinaryCollision Approximation BCA method is here utilized as a prototype discrete model of the surface, to provide boundary conditions and impurity distributions representative of the material behavior during plasma irradiation. The numerical tests revealed the superior convergence properties of Kernel Density Estimation methods over Gaussian Mixture Models, with EpanechnikovKDEs being up to two orders of magnitude faster than GaussianKDEs. The methodology here presented allows a selfconsistent treatment of the plasmamaterial interface in magnetic fusion devices, including both the nearsurface plasma plasma sheath and presheath in magnetized conditions, and surface effects such as sputtering, backscattering, and ion implantation. The same coupling techniques can also be utilized for other discrete material models such as Molecular Dynamics.
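For the density-estimation step itself, the comparison between kernel density estimators and Gaussian mixture models can be sketched with scikit-learn, as below; the one-dimensional toy sample merely stands in for the particle energy-angle distributions and is purely illustrative.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.mixture import GaussianMixture

# Toy 1-D "energy distribution" with two populations.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(1.0, 0.2, 5000),
                         rng.normal(3.0, 0.5, 5000)])[:, None]

kde_epa = KernelDensity(kernel="epanechnikov", bandwidth=0.1).fit(sample)
kde_gau = KernelDensity(kernel="gaussian", bandwidth=0.1).fit(sample)
gmm = GaussianMixture(n_components=2).fit(sample)

grid = np.linspace(0, 5, 200)[:, None]
# score_samples returns log-densities; compare the three reconstructions.
print(np.exp(kde_epa.score_samples(grid)).max(),
      np.exp(kde_gau.score_samples(grid)).max(),
      np.exp(gmm.score_samples(grid)).max())
```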
A Spherical Probability Distribution Model of the UserInduced Mobile Phone Orientation ; This paper presents a statistical modeling approach of the reallife userinduced randomness due to mobile phone orientations for different phone usage types. As wellknown, the radiated performance of a wireless device depends on its orientation and position relative to the user. Therefore, realistic handset usage models will lead to more accurate OverTheAir characterization measurements for antennas and wireless devices in general. We introduce a phone usage classification based on the network access modes, e.g., voice circuit switched or nonvoice packet switched services, and the use of accessories such as wired or Bluetooth handsets, or a speakerphone during the network access session. The random phone orientation is then modelled by the spherical von MisesFisher distribution for each of the identified phone usage types. A finite mixture model based on the individual probability distribution functions and heuristic weights is also presented. The models are based on data collected from builtin accelerometer measurements. Our approach offers a straightforward modeling of the userinduced random orientation for different phone usage types. The models can be used in the design of better handsets and antenna systems as well as for the design and optimization of wireless networks.
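A minimal sketch of the ingredients named above, a von Mises-Fisher density on the sphere and a weighted finite mixture of such densities, is given below. The component means, concentrations, and weights are invented placeholders, not the fitted values from the accelerometer data.

```python
import numpy as np

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere S^2 (3-D unit vectors)."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))  # normalizing constant for dimension 3
    return c * np.exp(kappa * np.dot(x, mu))

def mixture_pdf(x, mus, kappas, weights):
    """Finite vMF mixture, e.g. one component per phone-usage type."""
    return sum(w * vmf_pdf(x, m, k) for w, m, k in zip(weights, mus, kappas))

# Two hypothetical usage modes: "talking" (tilted) and "browsing" (screen up).
mus = [np.array([0.0, 0.5, np.sqrt(3) / 2]), np.array([0.0, 0.0, 1.0])]
kappas = [20.0, 5.0]
weights = [0.6, 0.4]
x = np.array([0.0, 0.1, 0.995])
x /= np.linalg.norm(x)  # orientation must be a unit vector
print(mixture_pdf(x, mus, kappas, weights))
```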
Nonlinear statespace modelling of the kinematics of an oscillating circular cylinder in a fluid flow ; The flowinduced vibration of bluff bodies is an important problem of many marine, civil, or mechanical engineers. In the design phase of such structures, it is vital to obtain good predictions of the fluid forces acting on the structure. Current methods rely on computational fluid dynamic simulations CFD, with a too high computational cost to be effectively used in the design phase or for control applications. Alternative methods use heuristic mathematical models of the fluid forces, but these lack the accuracy they often assume the system to be linear or flexibility to be useful over a wide operating range. In this work we show that it is possible to build an accurate, flexible and lowcomputationalcost mathematical model using nonlinear system identification techniques. This model is data driven it is trained over a userdefined region of interest using data obtained from experiments or simulations, or both. Here we use a Van der Pol oscillator as well as CFD simulations of an oscillating circular cylinder to generate the training data. Then a discretetime polynomial nonlinear statespace model is fit to the data. This model relates the oscillation of the cylinder to the force that the fluid exerts on the cylinder. The model is finally validated over a wide range of oscillation frequencies and amplitudes, both inside and outside the socalled lockin region. We show that forces simulated by the model are in good agreement with the data obtained from CFD.
The Structure of Models of Second-order Set Theories ; This dissertation is a contribution to the project of second-order set theory, which has seen a revival in recent years. The approach is to understand second-order set theory by studying the structure of models of second-order set theories. The main results are the following, organized by chapter. First, I investigate the poset of T-realizations of a fixed countable model of ZFC, where T is a reasonable second-order set theory such as GBC or KM, showing that it has a rich structure. In particular, every countable partial order embeds into this structure. Moreover, we can arrange that these embeddings preserve the existence/nonexistence of upper bounds, at least for finite partial orders. Second, I generalize some constructions of Marek and Mostowski from KM to weaker theories. They showed that every model of KM plus the Class Collection schema unrolls to a model of ZFC with a largest cardinal. I calculate the theories of the unrolling for a variety of second-order set theories, going as weak as GBC + ETR. I also show that being T-realizable goes down to submodels for a broad selection of second-order set theories T. Third, I show that there is a hierarchy of transfinite recursion principles ranging in strength from GBC to KM. This hierarchy is ordered first by the complexity of the properties allowed in the recursions and second by the allowed heights of the recursions. Fourth, I investigate the question of which second-order set theories have least models. I show that strong theories, such as KM or Pi^1_1-CA, do not have least transitive models, while weaker theories, from GBC to GBC + ETR_Ord, do have least transitive models.
Unified representation of the C3, C4, and CAM photosynthetic pathways with the Photo3 model ; Recently, interest in crassulacean acid metabolism CAM photosynthesis has risen and new, physiologically based CAM models have emerged. These models show promise, yet unlike the more widely used physiological models of C3 and C4 photosynthesis, their complexity has thus far inhibited their adoption in the general community. Indeed, most efforts to assess the potential of CAM still rely on empirically based environmental productivity indices, which makes uniform comparisons between CAM and nonCAM species difficult. In order to represent C3, C4, and CAM photosynthesis in a consistent, physiologically based manner, we introduce the Photo3 model. This work builds on a common photosynthetic and hydraulic core and adds additional components to depict the circadian rhythm of CAM photosynthesis and the carbonconcentrating mechanism of C4 photosynthesis. This allows consistent comparisons of the three photosynthetic types for the first time. It also allows the representation of intermediate C3CAM behavior through the adjustment of a single model parameter. Model simulations of Opuntia ficusindica CAM, Sorghum bicolor C4, and Triticum aestivum C3 capture the diurnal behavior of each species as well as the cumulative effects of longterm water limitation. The results show potential for use in understanding CAM productivity, ecology, and climate feedbacks and in evaluating the tradeoffs between C3, C4, and CAM photosynthesis.
Stability of HDE model with sign-changeable interaction in Brans-Dicke theory ; We consider the Brans-Dicke (BD) theory of gravity and explore the cosmological implications of the sign-changeable interacting holographic dark energy (HDE) model in the background of a Friedmann-Robertson-Walker (FRW) universe. As the system's infrared (IR) cutoff, we choose the future event horizon, the Granda-Oliveros (GO) and the Ricci cutoffs. For each cutoff, we obtain the density parameter, the equation of state (EoS) and the deceleration parameter of the system. In the case of the future event horizon, we find that the EoS parameter, w_D, can cross the phantom line; as a result, the transition from decelerated to accelerated expansion of the universe can be achieved provided the model parameters are chosen suitably. Then, we investigate the instability of the sign-changeable interacting HDE model against perturbations in BD theory. For this purpose, we study the squared sound speed v_s^2, whose sign determines the stability of the model: when v_s^2 < 0 the model is unstable against perturbation. For the future event horizon cutoff, our universe can be stable (v_s^2 > 0) depending on the model parameters. Then, we focus on the GO and Ricci cutoffs and find that, although other features of these two cutoffs seem to be consistent with observations, they cannot lead to a stable DE-dominated universe, except in a special case with the GO cutoff. Our studies confirm that for the sign-changeable HDE model in the setup of BD cosmology, the event horizon is the most suitable horizon, which passes all conditions and leads to a stable DE-dominated universe.
A Generalized Approximate Control Variate Framework for Multifidelity Uncertainty Quantification ; We describe and analyze a variance reduction approach for Monte Carlo MC sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. These lower cost models which are typically lower fidelity with unknown statistics are used to reduce the variance in statistical estimators relative to a MC estimator with equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multimodel variance reduction schemes as special cases. We demonstrate that these existing strategies use recursive sampling strategies, and as a result, their maximum possible variance reduction is limited to that of a control variate algorithm that uses only a single lowfidelity model with known mean. This theoretical result holds regardless of the number of lowfidelity models andor samples used to build the estimator. We then derive new sampling strategies within our framework that circumvent this limitation to make efficient use of all available information sources. In particular, we demonstrate that a significant gap can exist, of orders of magnitude in some cases, between the variance reduction achievable by using a single lowfidelity model and our nonrecursive approach. We also present initial sample allocation approaches for exploiting this gap. They yield the greatest benefit when augmenting the highfidelity model evaluations is impractical because, for instance, they arise from a legacy database. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media are used to illustrate the main features of the methodology.
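The baseline that the paper generalizes, a classical control variate built from a single low-fidelity model with known mean, can be written in a few lines. The sketch below uses two analytic toy models and is only meant to show the estimator structure, not the approximate control variate framework or the sample-allocation strategies proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):  # "expensive" high-fidelity model (illustrative)
    return np.sin(x) + 0.05 * x**2

def f_lo(x):  # cheap low-fidelity surrogate, correlated with f_hi
    return np.sin(x)

x = rng.uniform(0, np.pi, 200)   # small high-fidelity sample
y_hi, y_lo = f_hi(x), f_lo(x)
mu_lo = 2.0 / np.pi              # known mean of f_lo under the uniform input
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

mc_estimate = y_hi.mean()
cv_estimate = y_hi.mean() - alpha * (y_lo.mean() - mu_lo)
# Across repeated designs the control variate estimator has lower variance.
print(mc_estimate, cv_estimate)
```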
A Solution SetBased Entropy Principle for Constitutive Modeling in Mechanics ; Entropy principles based on thermodynamic consistency requirements are widely used for constitutive modeling in continuum mechanics, providing physical constraints on a priori unknown constitutive functions. The wellknown MullerLiu procedure is based on Liu's lemma for linear systems. While the MullerLiu algorithm works well for basic models with simple constitutive dependencies, it cannot take into account nonlinear relationships that exist between higher derivatives of the fields in the cases of more complex constitutive dependencies. The current contribution presents a general solution setbased procedure, which, for a model system of differential equations, respects the geometry of the solution manifold, and yields a set of constraint equations on the unknown constitutive functions, which are necessary and sufficient conditions for the entropy production to stay nonnegative for any solution. Similarly to the MullerLiu procedure, the solution set approach is algorithmic, its output being a set of constraint equations and a residual entropy inequality. The solution set method is applicable to virtually any physical model, allows for arbitrary initially postulated forms of the constitutive dependencies, and does not use artificial constructs like Lagrange multipliers. A Maple implementation makes the solution set method computationally straightforward and useful for the constitutive modeling of complex systems. Several computational examples are considered, in particular, models of gas, anisotropic fluid, and granular flow dynamics. The resulting constitutive function forms are analyzed, and comparisons are provided. It is shown how the solution set entropy principle can yield classification problems, leading to several complementary sets of admissible constitutive functions; such problems have not previously appeared in the constitutive modeling literature.
M2ETry On Net Fashion from Model to Everyone ; Most existing virtual tryon applications require clean clothes images. Instead, we present a novel virtual TryOn network, M2ETry On Net, which transfers the clothes from a model image to a person image without the need of any clean product images. To obtain a realistic image of person wearing the desired model clothes, we aim to solve the following challenges 1 nonrigid nature of clothes we need to align poses between the model and the user; 2 richness in textures of fashion items preserving the fine details and characteristics of the clothes is critical for photorealistic transfer; 3 variation of identity appearances it is required to fit the desired model clothes to the person identity seamlessly. To tackle these challenges, we introduce three key components, including the pose alignment network PAN, the texture refinement network TRN and the fitting network FTN. Since it is unlikely to gather image pairs of input person image and desired output image i.e. person wearing the desired clothes, our framework is trained in a selfsupervised manner to gradually transfer the poses and textures of the model's clothes to the desired appearance. In the experiments, we verify on the Deep Fashion dataset and MVC dataset that our method can generate photorealistic images for the person to tryon the model clothes. Furthermore, we explore the model capability for different fashion items, including both upper and lower garments.
A machine learning approach to thermal conductivity modeling A case study on irradiated uranium-molybdenum nuclear fuels ; A deep neural network was developed for the purpose of predicting thermal conductivity, with a case study performed on neutron-irradiated nuclear fuel. Traditional thermal conductivity modeling approaches rely on existing theoretical frameworks that describe known, relevant phenomena governing the microstructural evolution processes during neutron irradiation, such as recrystallization and pore size, distribution and morphology. Current empirical modeling approaches, however, do not represent all irradiation test data well. Here, we develop a machine learning approach to thermal conductivity modeling that does not require a priori knowledge of a specific material microstructure and system of interest. Our approach allows researchers to probe the dependency of thermal conductivity on a variety of reactor operating and material conditions. The purpose of building such a model is to allow for improved predictive capabilities linking structure-property-processing-performance relationships in the system of interest (here, irradiated nuclear fuel), which could lead to improved experimental test planning and characterization. The uranium-molybdenum system is the fuel system studied in this work, and historic irradiation test data are leveraged for model development. Our model achieved a mean absolute percent error of approximately 4% for the validation data set when a leave-one-out cross-validation approach was applied. Results indicate our model generalizes well to never-before-seen data, and thus the use of deep learning methods for material property predictions from limited, historic irradiation test data is a viable approach.
A Framework for Evaluating Model-Driven Self-adaptive Software Systems ; In the last few years, Model Driven Development (MDD), Component-based Software Development (CBSD), and context-oriented software have become interesting alternatives for the design and construction of self-adaptive software systems. In general, the ultimate goal of these technologies is to reduce development costs and effort, while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of these technologies shows that they all include the principle of the separation of concerns, and their further integration is a key factor in obtaining high-quality and self-adaptable software systems. Each technology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications and, at the same time, support software with adaptability and context-awareness. This research studies the development methodologies that employ the principles of model-driven development in building self-adaptive software systems. To this aim, this article proposes an evaluation framework for analysing and evaluating the features of model-driven approaches and their ability to support software with self-adaptability and dependability in highly dynamic contextual environments. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the development effort of building self-adaptive software systems. This study highlights the major drawbacks of the proposed model-driven approaches in the related work, and emphasises the need to consider the volatile aspects of self-adaptive software in the analysis, design and implementation phases of the development methodologies. In addition, we argue that the development methodologies should leave the selection of modelling languages and modelling tools to the software developers.
Statistical Models for the Number of Successful Cyber Intrusions ; We propose several generalized linear models GLMs to predict the number of successful cyber intrusions or intrusions into an organization's computer network, where the rate at which intrusions occur is a function of the following observable characteristics of the organization i domain name server DNS traffic classified by their toplevel domains TLDs; ii the number of network security policy violations; and iii a set of predictors that we collectively call cyber footprint that is comprised of the number of hosts on the organization's network, the organization's similarity to educational institution behavior SEIB, and its number of records on scholar.google.com ROSG. In addition, we evaluate the number of intrusions to determine whether these events follow a Poisson or negative binomial NB probability distribution. We reveal that the NB GLM provides the best fit model for the observed count data, number of intrusions per organization, because the NB model allows the variance of the count data to exceed the mean. We also show that there are restricted and simpler NB regression models that omit selected predictors and improve the goodnessoffit of the NB GLM for the observed data. With our model simulations, we identify certain TLDs in the DNS traffic as having significant impact on the number of intrusions. In addition, we use the models and regression results to conclude that the number of network security policy violations are consistently predictive of the number of intrusions.
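A negative binomial GLM of the kind described can be fit with statsmodels, as sketched below on simulated over-dispersed counts. The predictors, coefficients, and dispersion value are illustrative assumptions, not the study's data or fitted model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Illustrative predictors: policy violations and number of hosts on the network.
violations = rng.poisson(5, n)
hosts = rng.integers(50, 5000, n)
X = sm.add_constant(np.column_stack([violations, np.log(hosts)]))

# Simulate over-dispersed intrusion counts (variance exceeds the mean).
mu = np.exp(-2.0 + 0.15 * violations + 0.4 * np.log(hosts))
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.params.round(3))  # intercept and slopes recovered approximately
```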
Fast algorithms at low temperatures via Markov chains ; We define a discrete-time Markov chain for abstract polymer models and show that under sufficient decay of the polymer weights, this chain mixes rapidly. We apply this Markov chain to polymer models derived from the hard-core and ferromagnetic Potts models on bounded-degree bipartite expander graphs. In this setting, Jenssen, Keevash and Perkins (2019) recently gave an FPTAS and an efficient sampling algorithm at sufficiently high fugacity and low temperature respectively. Their method is based on using the cluster expansion to obtain a complex zero-free region for the partition function of a polymer model, and then approximating this partition function using the polynomial interpolation method of Barvinok. Our approach via the polymer model Markov chain circumvents the zero-free analysis and the generalization to complex parameters, and leads to a sampling algorithm with a fast running time of O(n log n) for the Potts model and O(n^2 log n) for the hard-core model, in contrast to typical running times of n^O(log Delta) for algorithms based on Barvinok's polynomial interpolation method on graphs of maximum degree Delta. We finally combine our results for the hard-core and ferromagnetic Potts models with standard Markov chain comparison tools to obtain polynomial mixing time for the usual spin Glauber dynamics restricted to even and odd (or 'red' dominant) portions of the respective state spaces.
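The paper's chain acts on polymer configurations rather than on single spins, but for orientation the sketch below implements plain single-site Glauber dynamics for the hard-core model on a bounded-degree graph. The graph, fugacity, and step count are arbitrary choices for illustration only.

```python
import random
import networkx as nx

def hardcore_glauber(G, fugacity=1.0, steps=10000, seed=0):
    """Single-site Glauber dynamics for the hard-core model (independent sets)."""
    random.seed(seed)
    occupied = set()
    nodes = list(G.nodes)
    for _ in range(steps):
        v = random.choice(nodes)
        if any(u in occupied for u in G.neighbors(v)):
            occupied.discard(v)  # a blocked vertex must be unoccupied
        elif random.random() < fugacity / (1.0 + fugacity):
            occupied.add(v)
        else:
            occupied.discard(v)
    return occupied

G = nx.random_regular_graph(3, 50, seed=1)
print(len(hardcore_glauber(G, fugacity=2.0)))  # size of the sampled independent set
```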
Evolution of Oscillating Scalar Fields as Dark Energy ; Oscillating scalar fields, with an oscillation frequency much greater than the expansion rate, have been proposed as models for dark energy. We examine these models, with particular emphasis on the evolution of the ratio of the oscillation frequency to the expansion rate. We show that this ratio always increases with time if the dark energy density declines less rapidly than the background energy density. This allows us to classify oscillating dark energy models in terms of the epoch at which the oscillation frequency exceeds the expansion rate, which is effectively the time at which rapid oscillations begin. There are three basic types of behavior early oscillation models, in which oscillations begin during the matterdominated era, late oscillation models, in which oscillations begin after scalarfield domination, and nonoscillating models. We examine a representative set of models those with powerlaw potentials and determine the parameter range giving acceptable agreement with the supernova observations. We show that a subset of all three classes of models can be consistent with the observational data.
A Consistent Model of 'Explosive' Financial Bubbles With Mean-Reversing Residuals ; We present a self-consistent model for explosive financial bubbles, which combines a mean-reverting volatility process and a stochastic conditional return which reflects nonlinear positive feedbacks and continuous updates of the investors' beliefs and sentiments. The conditional expected returns exhibit faster-than-exponential acceleration decorated by accelerating oscillations, called log-periodic power law (LPPL). Tests on residuals show a remarkably low rate (0.2%) of false positives when applied to a GARCH benchmark. When tested on the S&P 500 US index from Jan. 3, 1950 to Nov. 21, 2008, the model correctly identifies the bubbles ending in Oct. 1987, in Oct. 1997, in Aug. 1998 and the ITC bubble ending in the first quarter of 2000. Different unit-root tests confirm the high relevance of the model specification. Our model also provides a diagnostic for the duration of bubbles: applied to the period before the Oct. 1987 crash, there is clear evidence that the bubble started at least 4 years earlier. We confirm the validity and universality of the volatility-confined LPPL model on seven other major bubbles that have occurred in the world in the last two decades. Using Bayesian inference, we find a very strong statistical preference for our model compared with a standard benchmark, in contradiction with Chang and Feigenbaum (2006), who used a unit-root model for residuals.
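The log-periodic power law mentioned above has a standard parametric form for the expected log-price; the sketch below defines it and fits synthetic data with SciPy. The functional form is the commonly used LPPL expression, and the parameter values, bounds, and noise level are illustrative assumptions rather than the calibration used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, A, B, C, m, tc, omega, phi):
    """Log-periodic power law for the log-price: faster-than-exponential growth
    decorated by accelerating oscillations as t approaches the critical time tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

# Synthetic bubble: the fit should approximately recover the generating parameters.
t = np.linspace(0, 9.5, 500)
true = (1.0, -0.5, 0.05, 0.5, 10.0, 8.0, 1.0)
y = lppl(t, *true) + 0.005 * np.random.default_rng(0).normal(size=t.size)

p0 = (1.0, -0.4, 0.04, 0.6, 10.5, 7.0, 0.5)
bounds = ([0, -2, -1, 0.1, 9.6, 2, 0], [2, 0, 1, 0.9, 12, 15, 2 * np.pi])
popt, _ = curve_fit(lppl, t, y, p0=p0, bounds=bounds)
print(popt.round(3))
```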
Magnetization Plateaux in the Antiferromagnetic Ising Chain with Single-Ion Anisotropy ; Two one-dimensional spin-1 antiferromagnetic Ising models with a single-ion anisotropy under an external magnetic field at low temperatures are exactly investigated by the transfer-matrix technique. The magnetization per spin m is obtained for the two types of models (denoted by model 1 and model 2) as an explicit function of the magnetic field H and of the anisotropy parameter D. Model 1 is an extension of the one recently treated by Ohanyan and Ananikian [Phys. Lett. A 307 (2003) 76]: we have generalized their model to the spin-1 case and included a single-ion anisotropy term. In the limit of positive or null anisotropy (D >= 0) and strong antiferromagnetic coupling (alpha = J_A/J_F >= 3), the m versus H curves are qualitatively the same as for the spin S = 1/2 case, with the presence of only one plateau at m/m_sat = 1/3. On the other hand, for negative anisotropy (D < 0) we observe more plateaux (m = 1/6 and 2/3), which depend on the values of D and alpha. The second model (model 2) is the same as the one recently studied by Chen et al. [J. Mag. Mag. Mat. 262 (2003) 258] using Monte Carlo simulation; here, the model is treated within an exact transfer-matrix framework.
Contiguous redshift parameterizations of the growth index ; The growth rate of matter perturbations can be used to distinguish between different gravity theories and to distinguish between dark energy and modified gravity at cosmological scales as an explanation of the observed cosmic acceleration. We suggest here parameterizations of the growth index as functions of the redshift. The first one is given by gamma(a) = tilde{gamma}(a) * 1/(1 + a_ttc/a) + gamma_early * 1/(1 + a/a_ttc), which interpolates between a low/intermediate-redshift parameterization tilde{gamma}(a) = gamma_late(a) = gamma_0 + (1 - a) gamma_a and a constant high-redshift value gamma_early. For example, our interpolated form gamma(a) can be used when including the CMB with the rest of the data, while the form gamma_late(a) can be used otherwise. It is found that the parameterizations proposed achieve a fit to the growth rate that is better than 0.004 in a LambdaCDM model, better than 0.014 for Quintessence-Cold-Dark-Matter (QCDM) models, and better than 0.04 for the flat Dvali-Gabadadze-Porrati (DGP) model (with Omega_m = 0.27) for the entire redshift range up to z_CMB. We find that the growth index parameters (gamma_0, gamma_a) take distinctive values for dark energy models and modified gravity models, e.g. (0.5655, 0.02718) for the LambdaCDM model and (0.6418, 0.06261) for the flat DGP model. This provides a means for future observational data to distinguish between the models.
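A minimal numerical illustration of the interpolated parameterization, combined with the usual growth-rate relation f(a) = Omega_m(a)^gamma(a), is sketched below. Only (gamma_0, gamma_a) are taken from the values quoted above; gamma_early, the transition scale a_t, and the sign conventions are placeholder assumptions.

```python
import numpy as np

def gamma_interp(a, gamma0, gamma_a, gamma_early, a_t):
    """Interpolated growth index: late-time form gamma_late(a) = gamma0 + (1-a)*gamma_a
    blended with a constant gamma_early at high redshift."""
    gamma_late = gamma0 + (1.0 - a) * gamma_a
    return gamma_late / (1.0 + a_t / a) + gamma_early / (1.0 + a / a_t)

def growth_rate(a, Om0=0.27, **kw):
    """f(a) = Omega_m(a)^gamma(a) for a flat LambdaCDM background."""
    Om_a = Om0 * a**-3 / (Om0 * a**-3 + 1.0 - Om0)
    return Om_a ** gamma_interp(a, **kw)

z = np.array([0.0, 0.5, 1.0, 3.0, 10.0])
a = 1.0 / (1.0 + z)
print(growth_rate(a, gamma0=0.5655, gamma_a=0.02718, gamma_early=0.55, a_t=0.1).round(4))
```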
Actuator line modeling of vertical-axis turbines ; To bridge the gap between high- and low-fidelity numerical modeling tools for vertical-axis (or cross-flow) turbines (VATs or CFTs), an actuator line model (ALM) was developed and validated for both a high and a medium solidity vertical-axis turbine at rotor diameter Reynolds numbers Re_D ~ 10^6. The ALM is a combination of classical blade element theory and Navier-Stokes based flow models, and in this study both k-epsilon Reynolds-averaged Navier-Stokes (RANS) and Smagorinsky large eddy simulation (LES) turbulence models were tested using the open-source OpenFOAM computational fluid dynamics framework. The RANS models were able to be run on coarse grids while still providing good convergence behavior in terms of the mean power coefficient, and also gave approximately four orders of magnitude reduction in computational expense compared with 3-D blade-resolved RANS simulations. Submodels for dynamic stall, end effects, added mass, and flow curvature were implemented, resulting in reasonable performance predictions for the high solidity rotor, more discrepancies for the medium solidity rotor, and overprediction for both cases at high tip speed ratio. The wake results showed that the ALM was able to capture some of the important flow features that contribute to VATs' relatively fast wake recovery, a large improvement over the conventional actuator disk model. The mean flow field was better realized with the LES, which still represented a computational savings of two orders of magnitude compared with 3-D blade-resolved RANS, though vortex breakdown and subsequent turbulence generation appeared to be underpredicted, which necessitates further investigation of optimal subgrid-scale modeling.
The relationships between message passing, pairwise, Kermack-McKendrick and stochastic SIR epidemic models ; We consider a very general stochastic model for an SIR epidemic on a network which allows an individual's infectious period, and the time it takes to contact each of its neighbours after becoming infected, to be correlated. We write down the message passing system of equations for this model and prove, for the first time, that it has a unique feasible solution. We also generalise an earlier result by proving that this solution provides a rigorous upper bound for the expected epidemic size (cumulative number of infection events) at any fixed time t > 0. We specialise these results to a homogeneous special case where the graph (network) is symmetric. The message passing system here reduces to just four equations. We prove that cycles in the network inhibit the spread of infection, and derive important epidemiological results concerning the final epidemic size and threshold behaviour for a major outbreak. For Poisson contact processes, this message passing system is equivalent to a non-Markovian pair approximation model, which we show has well-known pairwise models as special cases. We show further that a sequence of message passing systems, starting with the homogeneous one just described, converges to the deterministic Kermack-McKendrick equations for this stochastic model. For Poisson contact and recovery, we show that this convergence is monotone, from which it follows that the message passing system (and hence also the pairwise model) here provides a better approximation to the expected epidemic size at time t > 0 than the Kermack-McKendrick model.
Scale invariant cosmology III dynamical models and comparisons with observations ; We examine the properties of the scale invariant cosmological models, also making the specific hypothesis of the scale invariance of the empty space at large scales. Numerical integrations of the cosmological equations for different values of the curvature parameter k and of the density parameter Omega_m are performed. We compare the dynamical properties of the models to the observations at different epochs. The main numerical data and graphical representations are given for models computed with different curvatures and density parameters. The models with nonzero density start explosively, with first a braking phase followed by a continuously accelerating expansion. The comparison of the models with the recent observations from supernovae (SN Ia), BAO, and CMB data from Planck 2015 shows that the scale invariant model with k = 0 and Omega_m = 0.30 very well fits the observations in the usual Omega_m vs. Omega_Lambda plane and consistently accounts for the accelerating expansion or dark energy. The expansion history is compared to observations in the plot of H(z) vs. redshift z, and the parameter q_0 is also examined, as well as the recent data on the redshift z_trans of the transition between braking and acceleration. These dynamical tests are fully satisfied by the scale invariant models. The past evolution of matter and radiation density is studied; it shows small differences with respect to the standard case. These first comparisons encourage further investigations of scale invariant cosmology under the assumption of scale invariance of the empty space at large scales.
Simplified DM models with the full SM gauge symmetry the case of t-channel colored scalar mediators ; The general strategy for dark matter (DM) searches at colliders currently relies on simplified models. In this paper, we propose a new t-channel UV-complete simplified model that improves the existing simplified DM models in two important respects: (i) we impose the full SM gauge symmetry, including the fact that the left-handed and the right-handed fermions have two independent mediators with two independent couplings, and (ii) we include the renormalization group evolution when we derive the effective Lagrangian for DM-nucleon scattering from the underlying UV complete models by integrating out the t-channel mediators. The first improvement introduces a few more new parameters compared with the existing simplified DM models. In this study we look at the effect this broader set of free parameters has on direct detection and on the mono-X + MET (X = jet, W, Z) signatures at the 13 TeV LHC, while maintaining gauge invariance of the simplified model under the full SM gauge group. We find that the direct detection constraints require DM masses less than 10 GeV in order to produce phenomenologically interesting collider signatures. Additionally, for a fixed mono-W cross section it is possible to see very large differences in the monojet cross section when the usual simplified model assumptions are loosened and isospin violation between RH and LH DM-SM quark couplings is allowed.
Predictive CoarseGraining ; We propose a datadriven, coarsegraining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a finetocoarse map, we adopt the opposite strategy by prescribing a probabilistic coarsetofine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale allatom data. From an informationtheoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the CG process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of finescale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarsegraining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarsegrained model. A flexible and parallelizable Monte Carlo ExpectationMaximization MCEM scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPCE water model.
The Betelgeuse Project Constraints from Rotation ; In order to constrain the evolutionary state of the red supergiant Betelgeuse, we have produced a suite of models with ZAMS masses from 15 to 25 Msun in intervals of 1 Msun including the effects of rotation. The models were computed with the stellar evolutionary code MESA. For nonrotating models we find results that are similar to other work. It is somewhat difficult to find models that agree within 1 sigma of the observed values of R, Teff and L, but modestly easy within 3 sigma uncertainty. Incorporating the nominal observed rotational velocity, 15 kms, yields significantly different, and challenging, constraints. This velocity constraint is only matched when the models first approach the base of the red supergiant branch RSB, having crossed the Hertzsprung gap, but not yet having ascended the RSB and most violate even generous error bars on R, Teff and L. Models at the tip of the RSB typically rotate at only 0.1 kms, independent of any reasonable choice of initial rotation. We discuss the possible uncertainties in our modeling and the observations, including the distance to Betelgeuse, the rotation velocity, and model parameters. We summarize various options to account for the rotational velocity and suggest that one possibility is that Betelgeuse merged with a companion star of about 1 Msun as it ascended the RSB, in the process producing the ring structure observed at about 7' away. A past coalescence would complicate attempts to understand the evolutionary history and future of Betelgeuse.
Less is More Learning Prominent and Diverse Topics for Data Summarization ; Statistical topic models efficiently facilitate the exploration of large-scale data sets. Many models have been developed and broadly used to summarize the semantic structure in news, science, social media, and digital humanities. However, a common and practical objective in data exploration tasks is not to enumerate all existing topics, but to quickly extract representative ones that broadly cover the content of the corpus, i.e., a few topics that serve as a good summary of the data. Most existing topic models fit exactly the same number of topics as a user specifies, which imposes an unnecessary burden on users who have limited prior knowledge. We instead propose new models that are able to learn fewer but more representative topics for the purpose of data summarization. We propose a reinforced random walk that allows prominent topics to absorb tokens from similar and smaller topics, thus enhancing the diversity among the top topics extracted. With this reinforced random walk as a general process embedded in classical topic models, we obtain diverse topic models that are able to extract the most prominent and diverse topics from data. The inference procedures of these diverse topic models remain as simple and efficient as those of the classical models. Experimental results demonstrate that the diverse topic models not only discover topics that better summarize the data, but also require minimal prior knowledge from the users.
Semantic speech retrieval with a visually grounded model of untranscribed speech ; There is growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to semantic keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.
Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling ; The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
Causal Rule Sets for Identifying Subgroups with Enhanced Treatment Effect ; A key question in causal inference analyses is how to find subgroups with elevated treatment effects. This paper takes a machine learning approach and introduces a generative model, Causal Rule Sets (CRS), for interpretable subgroup discovery. A CRS model uses a small set of short decision rules to capture a subgroup where the average treatment effect is elevated. We present a Bayesian framework for learning a causal rule set. The Bayesian model consists of a prior that favors simple models for better interpretability as well as for avoiding overfitting, and a Bayesian logistic regression that captures the likelihood of the data, characterizing the relation between outcomes, attributes, and subgroup membership. The Bayesian model has tunable parameters that can characterize subgroups of various sizes, providing users with more flexible choices of models from the treatment efficient frontier. We find maximum a posteriori models using iterative discrete Monte Carlo steps in the joint solution space of rule sets and parameters. To improve search efficiency, we provide theoretically grounded heuristics and bounding strategies to prune and confine the search space. Experiments show that the search algorithm can efficiently recover true underlying subgroups. We apply CRS to public and real-world datasets from domains where interpretability is indispensable. We compare CRS with state-of-the-art rule-based subgroup discovery models. Results show that CRS achieves consistently competitive performance on datasets from various domains, represented by high treatment efficient frontiers.
Multiresolution Coupled Vertical Equilibrium Model for Fast Flexible Simulation of CO2 Storage ; CO2 capture and storage is an important technology for mitigating climate change. Design of efficient strategies for safe, longterm storage requires the capability to efficiently simulate processes taking place on very different temporal and spatial scales. The physical laws describing CO2 storage are the same as for hydrocarbon recovery, but the characteristic spatial and temporal scales are quite different. Petroleum reservoirs seldom extend more than tens of kilometers and have operational horizons spanning decades. Injected CO2 needs to be safely contained for hundreds or thousands of years, during which it can migrate hundreds or thousands of kilometers. Because of the vast scales involved, conventional 3D reservoir simulation quickly becomes computationally unfeasible. Large density difference between injected CO2 and resident brine means that vertical segregation will take place relatively quickly, and depthintegrated models assuming vertical equilibrium VE often represents a better strategy to simulate longterm migration of CO2 in largescale aquifer systems. VE models have primarily been formulated for relatively simple rock formations and have not been coupled to 3D simulation in a uniform way. In particular, known VE simulations have not been applied to models of realistic geology in which many flow compartments may exist inbetween impermeable layers. In this paper, we generalize the concept of VE models, formulated in terms of wellproven reservoir simulation technology, to complex aquifer systems with multiple layers and regions. We also introduce novel formulations for multilayered VE models by use of both direct spill and diffuse leakage between individual layers. This new layered 3D model is then coupled to a stateoftheart, 3D blackoil type model.
Software Engineering Modeling Applied to English Verb Classification and Poetry ; In requirements specification, software engineers create a textual description of the envisioned system as well as develop conceptual models using such tools as Universal Modeling Language UML and System Modeling Language SysML. One such tool, called FM, has recently been developed as an extension of the INPUTPROCESSOUTPUT IPO model. IPO has been used extensively in many interdisciplinary applications and is described as one of the most fundamental and important of all descriptive tools. This paper is an attempt to understanding the PROCESS in IPO. The fundamental way to describe PROCESS is in verbs. This use of language has an important implication for systems modeling since verbs express the vast range of actions and movements of all things. It is clear that modeling needs to examine verbs. Accordingly, this paper involves a study of English verbs as a bridge to learn about processes, not as linguistic analysis but rather to reveal the semantics of processes, particularly the five verbs that form the basis of FM states create, process, receive, release, and transfer. The paper focuses on verb classification, and specifically on how to model the action of verbs diagrammatically. From the linguistics point of view, according to some researchers, further exploration of the notion of verb classes is needed for realworld tasks such as machine translation, language generation, and document classification. Accordingly, this nonlinguistics study may benefit linguistics.
Sequence-to-Sequence ASR Optimization via Reinforcement Learning ; Despite the success of sequence-to-sequence approaches in automatic speech recognition (ASR) systems, these models still suffer from several problems, mainly due to the mismatch between training and inference conditions. In the sequence-to-sequence architecture, the model is trained to predict the grapheme of the current time step given the speech signal input and the ground-truth grapheme history of the previous time steps. However, it remains unclear how well the model approximates real-world speech during inference. Thus, generating the whole transcription from scratch based on previous predictions is complicated, and errors can propagate over time. Furthermore, the model is optimized to maximize the likelihood of the training data instead of the error-rate evaluation metrics that actually quantify recognition quality. This paper presents an alternative strategy for training sequence-to-sequence ASR models by adopting the idea of reinforcement learning (RL). Unlike the standard training scheme with maximum likelihood estimation, our proposed approach utilizes the policy gradient algorithm. We can (1) sample the whole transcription based on the model's predictions during training and (2) directly optimize the model with the negative Levenshtein distance as the reward. Experimental results demonstrate that we significantly improve performance compared to a model trained only with maximum likelihood estimation.
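A minimal sketch of the training signal described here (REINFORCE with negative Levenshtein distance as reward) is shown below. It uses a toy stand-in for the decoder outputs rather than a real ASR model, and the function names are illustrative only; in practice a baseline is usually subtracted from the reward to reduce variance.

```python
import torch

def levenshtein(a, b):
    """Plain edit distance between two token sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def reinforce_loss(step_logits, reference):
    """Sample a transcription from the model's own step-wise predictions and
    weight its log-probability by the negative edit distance to the reference.
    step_logits: list of (vocab,) tensors, one per decoding step."""
    log_probs, sampled = [], []
    for logits in step_logits:
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        sampled.append(int(tok))
    reward = -levenshtein(sampled, reference)      # negative Levenshtein distance
    return -reward * torch.stack(log_probs).sum()  # REINFORCE objective (no baseline)

# toy usage with random "decoder" outputs over a 10-token vocabulary
logits = [torch.randn(10, requires_grad=True) for _ in range(6)]
loss = reinforce_loss(logits, reference=[1, 2, 3, 4])
loss.backward()
```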
Spectroscopic properties of a two-dimensional time-dependent Cepheid model II. Determination of stellar parameters and abundances ; Standard spectroscopic analyses of variable stars are based on hydrostatic one-dimensional model atmospheres. This quasi-static approach has not been theoretically validated. We aim to investigate the validity of the quasi-static approximation for Cepheid variables. We focus on the spectroscopic determination of the effective temperature T_eff, surface gravity log g, microturbulent velocity ξ_t, and a generic metal abundance log A, here taken as iron. We calculate a grid of 1D hydrostatic plane-parallel models covering the ranges in effective temperature and gravity encountered during the evolution of a two-dimensional time-dependent envelope model of a Cepheid computed with the radiation-hydrodynamics code CO5BOLD. We perform 1D spectral syntheses for artificial iron lines in local thermodynamic equilibrium, varying the microturbulent velocity and abundance. We fit the resulting equivalent widths to corresponding values obtained from our dynamical model. For the four-parameter case, the stellar parameters are typically underestimated, exhibiting a bias in the iron abundance of ≈ 0.2 dex. To avoid biases of this kind it is favourable to restrict the spectroscopic analysis to photometric phases φ_ph ≈ 0.3-0.65, using additional information to fix the effective temperature and surface gravity. Hydrostatic 1D model atmospheres can provide unbiased estimates of the stellar parameters and abundances of Cepheid variables for particular phases of their pulsations. We identified convective inhomogeneities as the main driver behind potential biases. To obtain a complete view of the effects arising when determining stellar parameters with 1D models, multidimensional Cepheid atmosphere models are necessary for variables of longer period than investigated here.
Local Clustering Coefficient of Spatial Preferential Attachment Model ; In this paper, we study the clustering properties of the Spatial Preferential Attachment (SPA) model. This model naturally combines geometry and preferential attachment using the notion of spheres of influence. It was previously shown in several research papers that graphs generated by the SPA model are similar to real-world networks in many aspects. Also, this model has been successfully used for several practical applications. However, the clustering properties of the SPA model have not been fully analyzed. The clustering coefficient is an important characteristic of complex networks which is tightly connected with their community structure. In the current paper, we study the behaviour of C(d), the average local clustering coefficient for vertices of degree d. It has been empirically shown that in real-world networks C(d) usually decreases as 1/d^a for some a > 0, and it is often observed that a = 1. We prove that in the SPA model C(d) decreases as 1/d. Furthermore, we are also able to prove that not only the average but also the individual local clustering coefficient of a vertex v of degree d behaves as 1/d if d is large enough. The obtained results further confirm the suitability of the SPA model for fitting various real-world complex networks.
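For readers who want to inspect C(d) empirically, the short sketch below averages the local clustering coefficient over vertices of each degree using networkx. The graph generator here is only a stand-in; reproducing the paper's result would require sampling graphs from the SPA model itself.

```python
import networkx as nx
from collections import defaultdict

def average_local_clustering_by_degree(G):
    """Return C(d): the local clustering coefficient averaged over all
    vertices of degree d."""
    local = nx.clustering(G)                 # per-vertex local clustering
    by_degree = defaultdict(list)
    for v, c in local.items():
        by_degree[G.degree(v)].append(c)
    return {d: sum(cs) / len(cs) for d, cs in sorted(by_degree.items())}

# stand-in graph (a real test would use a graph generated by the SPA model)
G = nx.powerlaw_cluster_graph(n=2000, m=4, p=0.3, seed=1)
cd = average_local_clustering_by_degree(G)
for d in list(cd)[:10]:
    print(d, round(cd[d], 3))               # inspect whether C(d) ~ 1/d
```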
First study of reionization in the Planck 2015 normalized closed ΛCDM inflation model ; We study reionization in two non-flat ΛCDM inflation models that best fit the Planck 2015 cosmic microwave background (CMB) anisotropy observations, either alone or in conjunction with baryon acoustic oscillation distance measurements. We implement a principal component analysis (PCA) to estimate the uncertainties in the reionization history from a joint quasar-CMB dataset. A thorough Markov chain Monte Carlo analysis is performed over the parameter space of PCA modes for both non-flat ΛCDM inflation models as well as the original Planck 2016 tilted, spatially flat ΛCDM inflation model. Although both flat and non-flat models can closely match the low-redshift (z ≲ 6) observations, we notice a possible tension between high-redshift (z ∼ 8) Lyman-α emitter data and the non-flat models. This is solely due to the fact that the closed models have a relatively higher reionization optical depth than the flat one, which in turn demands more high-redshift ionizing sources and favors an extended reionization starting as early as z ≈ 14. We conclude that, as opposed to the flat cosmology, for the non-flat cosmology models (i) the escape fraction needs a steep redshift evolution, and even unrealistically high values at some redshifts, and (ii) most of the physical parameters require non-monotonic redshift evolution, which is especially apparent when the Lyman-α emitter data are included in the analysis.
Towards Personalized Modeling of the Female Hormonal Cycle: Experiments with Mechanistic Models and Gaussian Processes ; In this paper, we introduce a novel task for machine learning in healthcare, namely personalized modeling of the female hormonal cycle. The motivation for this work is to model the hormonal cycle and predict its phases in time, both for healthy individuals and for those with disorders of the reproductive system. Because there are individual differences in the menstrual cycle, we are particularly interested in personalized models that can account for individual idiosyncrasies, towards identifying phenotypes of menstrual cycles. As a first step, we consider the hormonal cycle as a set of observations through time. We use a previously validated mechanistic model to generate realistic hormonal patterns, and experiment with Gaussian process regression to estimate their values over time. Specifically, we are interested in the feasibility of predicting menstrual cycle phases under varying learning conditions: number of cycles used for training, hormonal measurement noise and sampling rates, and informed vs. agnostic sampling of hormonal measurements. Our results indicate that Gaussian processes can help model the female menstrual cycle. We discuss the implications of our experiments in the context of modeling the female menstrual cycle.
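A minimal sketch of the kind of Gaussian process regression discussed here is given below, fitting a periodic kernel to an irregularly sampled, noisy cyclic signal. The synthetic series, kernel settings, and 28-day period are illustrative assumptions, not the paper's mechanistic data or tuned hyperparameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# synthetic stand-in for a hormone time series: a noisy ~28-day cycle
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 84, 60))[:, None]           # 3 cycles, irregular sampling
y = np.sin(2 * np.pi * t.ravel() / 28) + rng.normal(0, 0.2, 60)

# the periodic kernel lets the GP share structure across cycles;
# the white-noise term absorbs measurement noise
kernel = ExpSineSquared(length_scale=5.0, periodicity=28.0) + WhiteKernel(0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

t_new = np.linspace(0, 112, 200)[:, None]               # forecast a fourth cycle
mean, std = gp.predict(t_new, return_std=True)
print(mean[:5], std[:5])
```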
Topic Compositional Neural Language Model ; We propose the Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word-ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence. In order to train the MoE model efficiently, a matrix factorization method is applied, extending each weight matrix of the RNN to be an ensemble of topic-dependent weight matrices. The degree to which each member of the ensemble is used is tied to the document-dependent probability of the corresponding topic. Experimental results on several corpora show that the proposed approach outperforms both a pure RNN-based model and other topic-guided language models. Further, our model yields sensible topics and also has the capacity to generate meaningful sentences conditioned on given topics.
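To make the "ensemble of topic-dependent weight matrices" idea concrete, here is a toy sketch in which a recurrent weight matrix is a mixture of topic-specific matrices weighted by document-level topic probabilities. It deliberately omits the paper's matrix factorization trick (used for efficiency) and uses made-up dimensions and names.

```python
import torch

class TopicMoERNNCell(torch.nn.Module):
    """Toy RNN cell whose recurrent weight matrix is a mixture of
    topic-specific matrices, weighted by the document's topic probabilities.
    (Illustrative only; the paper factorizes the matrices for efficiency.)"""
    def __init__(self, n_topics, dim):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_topics, dim, dim) * 0.1)
        self.U = torch.nn.Parameter(torch.randn(dim, dim) * 0.1)

    def forward(self, x, h, topic_probs):
        # document-dependent mixture of the topic-specific weight matrices
        W_doc = torch.einsum('k,kij->ij', topic_probs, self.W)
        return torch.tanh(x @ self.U + h @ W_doc)

cell = TopicMoERNNCell(n_topics=5, dim=16)
topic_probs = torch.softmax(torch.randn(5), dim=0)   # would come from a neural topic model
h = torch.zeros(16)
for x in torch.randn(10, 16):                        # a 10-token toy "document"
    h = cell(x, h, topic_probs)
print(h.shape)
```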
Moment Approximations and Model Cascades for Shallow Flow ; Shallow flow models are used for a large number of applications, including weather forecasting, open-channel hydraulics, and simulation-based natural hazard assessment. In these applications the shallowness of the process motivates depth-averaging. While the shallow flow formulation is advantageous in terms of computational efficiency, it comes at the price of losing vertical information, such as the flow's velocity profile. This gives rise to a model error, which limits the shallow flow model's predictive power and is often not explicitly quantifiable. We propose the use of vertical moments to overcome this problem. The shallow moment approximation preserves information on the vertical flow structure while still making use of the simplifying framework of depth-averaging. In this article, we derive a generic shallow flow moment system of arbitrary order, starting from a set of balance laws that has been reduced by scaling arguments. The derivation is based on a fully vertically resolved reference model with the vertical coordinate mapped onto the unit interval. We specify the shallow flow moment hierarchy for kinematic and Newtonian flow conditions and present 1D numerical results for shallow moment systems up to third order. Finally, we assess their performance with respect to both the standard shallow flow equations and the vertically resolved reference model. Our results show that, depending on the parameter regime (e.g., friction and slip), shallow moment approximations significantly reduce the model error in shallow flow regimes and have great potential to increase the predictive power of shallow flow models while keeping them computationally efficient.
Exploring the constraints on cosmological models with CosmoEJS ; We introduce new CosmoEJS modules to improve the investigation of the consequences of constraints on the parameter values of cosmological models. We use CosmoMC to fit dark energy models and modified gravity models to recent data from cosmic microwave background measurements by the Planck satellite, baryon acoustic oscillations, type Ia supernovae, Hubble parameter H(z) measurements, and redshift-space distortions. While the results are in agreement with previous constraints for these models, here we add an investigation of the statistical fits with CosmoEJS, an interactive Java package of simulations that allows the user to explore the ramifications of choosing various values for the cosmological parameters of a particular model. We visually inspect plots of the simulated theoretical values for comparison with the observational values, calculate derived cosmological values, and finally plot the expansion histories of cosmological models. These new simulations now include modified gravity cosmological models as well as observations of the growth of galaxy structures, for a more accurate description of the universe's dynamics. The latest version of CosmoEJS is available from http://www.compadre.org/osp/items/detail.cfm?ID=12406.
Constraints on brane inflation after Planck 2015: Impacts of the latest local measurement of the Hubble constant ; We investigate the observational constraints on three typical brane inflation models by considering the latest local measurement of the Hubble constant in the global fit. We also employ other observational data, including the Planck 2015 CMB data, the BICEP2/Keck Array B-mode data, and the baryon acoustic oscillation data, in our analysis. Previous studies have shown that the addition of the latest local H0 measurement favors a larger spectral index and can exert a significant influence on the model selection of inflation. In this work, we investigate its impacts on the status of brane inflation models. We find that, when the direct H0 measurement is considered, the prototype model of brane inflation is still in good agreement with the current observational data within the 2σ level. For the KKLMMT model, the consideration of the H0 measurement allows the range of the parameter β to be amplified to O(10^-2), which slightly alleviates the fine-tuning problem. For the IR DBI model, the addition of the H0 measurement does not provide a better fit. These results show that the consideration of the new H0 prior can exert a considerable influence on brane inflation models. Finally, we show that, when β ≲ 1.1, the equilateral non-Gaussianity in the IR DBI inflation model is compatible with the current CMB data at the 1σ level.
Panel data analysis via mechanistic models ; Panel data, also known as longitudinal data, consist of a collection of time series. Each time series, which could itself be multivariate, comprises a sequence of measurements taken on a distinct unit. Mechanistic modeling involves writing down scientifically motivated equations describing the collection of dynamic systems giving rise to the observations on each unit. A defining characteristic of panel systems is that the dynamic interaction between units should be negligible. Panel models therefore consist of a collection of independent stochastic processes, generally linked through shared parameters while also having unit-specific parameters. To give the scientist flexibility in model specification, we are motivated to develop a framework for inference on panel data permitting the consideration of arbitrary nonlinear, partially observed panel models. We build on iterated filtering techniques that provide likelihood-based inference on nonlinear, partially observed Markov process models for time series data. Our methodology depends on the latent Markov process only through simulation; this plug-and-play property ensures applicability to a large class of models. We demonstrate our methodology on a toy example and two epidemiological case studies. We address inferential and computational issues arising from the combination of model complexity and dataset size.
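The plug-and-play property mentioned here means the latent process is only ever simulated, never evaluated. The sketch below shows a bootstrap particle filter estimating the log-likelihood for one unit's time series under that constraint; the toy AR(1) model and all function names are hypothetical. For a panel, unit-level log-likelihoods from independent units sharing parameters would simply be summed.

```python
import numpy as np

def particle_loglik(y, simulate_step, measure_logpdf, n_particles=500, seed=0):
    """Plug-and-play log-likelihood estimate for one unit's time series.
    y:                      (T,) observations for one panel unit
    simulate_step(x, rng):  propagate particles one step (simulation only)
    measure_logpdf(y_t, x): log density of y_t given latent states x
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                    # initial latent states
    loglik = 0.0
    for y_t in y:
        x = simulate_step(x, rng)                # simulate, never evaluate, the transition
        logw = measure_logpdf(y_t, x)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())           # incremental log-likelihood
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]                               # multinomial resampling
    return loglik

# toy model: latent AR(1) state with Gaussian observation noise
step = lambda x, rng: 0.8 * x + rng.normal(0, 0.5, x.shape)
meas = lambda y_t, x: -0.5 * ((y_t - x) / 0.3) ** 2 - np.log(0.3 * np.sqrt(2 * np.pi))
y = np.cumsum(np.random.default_rng(1).normal(0, 0.3, 50))
print(particle_loglik(y, step, meas))
```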
Composite Behavioral Modeling for Identity Theft Detection in Online Social Networks ; In this work, we aim to build a bridge from poor behavioral data to an effective, quick-response, and robust behavioral model for online identity theft detection. We concentrate on this issue in online social networks (OSNs), where users usually have composite behavioral records consisting of multi-dimensional, low-quality data, e.g., offline check-ins and online user-generated content (UGC). As an insightful result, we find that there is a complementary effect among different dimensions of records for modeling users' behavioral patterns. To deeply exploit this complementary effect, we propose a joint model to capture both the online and offline features of a user's composite behavior. We evaluate the proposed joint model by comparing it with several typical models on two real-world datasets: Foursquare and Yelp. In the widely used setting of theft simulation (simulating thefts via behavioral replacement), the experimental results show that our model outperforms the existing ones, with AUC values of 0.956 on Foursquare and 0.947 on Yelp, respectively. In particular, the recall (true positive rate) can reach up to 65.3% on Foursquare and 72.2% on Yelp, with the corresponding disturbance rate (false positive rate) below 1%. It is worth mentioning that these performances can be achieved by examining only one composite behavior (visiting a place and posting a tip online simultaneously) per authentication, which guarantees the low response latency of our method. This study gives the cybersecurity community new insights into whether and how real-time online identity authentication can be improved via modeling users' composite behavioral patterns.
Continuous Space Reordering Models for Phrase-based MT ; Bilingual sequence models improve phrase-based translation and reordering by overcoming the phrasal independence assumption and handling long-range reordering. However, due to data sparsity, these models often fall back to very small context sizes. This problem has been previously addressed by learning sequences over generalized representations such as POS tags or word clusters. In this paper, we explore an alternative based on neural network models. More concretely, we train neuralized versions of the lexicalized reordering and operation sequence models using feed-forward neural networks. Our results show improvements of up to 0.6 and 0.5 BLEU points on top of the baseline German-English and English-German systems. We also observed improvements compared to systems that used POS tags and word clusters to train these models. Because we modify the bilingual corpus to integrate reordering operations, we can also train a sequence-to-sequence neural MT model with explicit reordering triggers. Our motivation was to directly enable reordering information in the encoder-decoder framework, which otherwise relies solely on the attention model to handle long-range reordering. We tried both coarse- and fine-grained reordering operations. However, these experiments did not yield any improvements over the baseline neural MT systems.
Fast Decoding in Sequence Models using Discrete Latent Variables ; Autoregressive sequence models based on deep neural networks, such as RNNs, WaveNet, and the Transformer, attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and the Transformer are much more parallelizable during training, yet still operate sequentially during decoding. Inspired by arXiv:1711.00937, we present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.
Gaian bottlenecks and planetary habitability maintained by evolving model biospheres: The ExoGaia model ; The search for habitable exoplanets inspires the question: how do habitable planets form? Planet habitability models traditionally focus on abiotic processes and neglect the biotic response to changing conditions on an inhabited planet. The Gaia hypothesis postulates that life influences the Earth's feedback mechanisms to form a self-regulating system, and hence that life can maintain habitable conditions on its host planet. If life has a strong influence, it will have a role in determining a planet's habitability over time. We present the ExoGaia model, a model of simple 'planets' hosting evolving microbial biospheres. Microbes interact with their host planet via the consumption and excretion of atmospheric chemicals. Model planets orbit a 'star' that provides incoming radiation, and atmospheric chemicals have either an albedo or a heat-trapping property. Planetary temperatures can therefore be altered by microbes via their metabolisms. We seed multiple model planets with life while their atmospheres are still forming and find that the microbial biospheres are, under suitable conditions, generally able to prevent their host planets from reaching inhospitable temperatures, as would happen on a lifeless planet. We find that the underlying geochemistry plays a strong role in determining the long-term habitability prospects of a planet. We find five distinct classes of model planets, including clear examples of 'Gaian bottlenecks': a phenomenon whereby life either rapidly goes extinct, leaving an inhospitable planet, or survives indefinitely, maintaining planetary habitability. These results suggest that life might play a crucial role in determining the long-term habitability of planets.
Data-Driven Sensitivity Indices for Models With Dependent Inputs Using the Polynomial Chaos Expansion ; Uncertainties exist in both physics-based and data-driven models. Variance-based sensitivity analysis characterizes how the variance of a model output is propagated from the model inputs. The Sobol index is one of the most widely used sensitivity indices for models with independent inputs. For models with dependent inputs, different approaches have been explored in the literature to obtain sensitivity indices. Typical approaches are based on procedures that transform the dependent inputs into independent inputs. However, such transformations require additional information about the inputs, such as the dependency structure or the conditional probability density functions. In this paper, data-driven sensitivity indices are proposed for models with dependent inputs. We first construct ordered partitions of linearly independent polynomials of the inputs. The modified Gram-Schmidt algorithm is then applied to the ordered partitions to generate orthogonal polynomials with respect to the empirical measure based on observed data of model inputs and outputs. Using the polynomial chaos expansion with these orthogonal polynomials, we obtain the proposed data-driven sensitivity indices. The sensitivity indices provide intuitive interpretations of how the dependent inputs affect the variance of the output, without a priori knowledge of the dependence structure of the inputs. Three numerical examples are used to validate the proposed approach.
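The following sketch illustrates one plausible reading of the procedure: orthonormalize an ordered set of input polynomials with respect to the empirical inner product, project the output onto that basis, and attribute variance to each input's group of polynomials. The ordering, basis, and toy model are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def empirical_gram_schmidt(Phi):
    """Modified Gram-Schmidt with respect to the empirical inner product
    <f, g> = mean(f * g) over the observed input samples.
    Phi: (n_samples, n_polys) matrix of polynomial evaluations."""
    Q = Phi.astype(float).copy()
    n = Phi.shape[0]
    for j in range(Q.shape[1]):
        for k in range(j):
            Q[:, j] -= (Q[:, k] @ Q[:, j] / n) * Q[:, k]
        Q[:, j] /= np.sqrt(Q[:, j] @ Q[:, j] / n)
    return Q

# dependent inputs: x2 is correlated with x1
rng = np.random.default_rng(0)
x1 = rng.normal(size=2000)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=2000)
y = x1 + 0.5 * x2 ** 2 + 0.1 * rng.normal(size=2000)

# ordered partition of polynomials: constant, then x1-terms, then x2-terms
Phi = np.column_stack([np.ones_like(x1), x1, x1 ** 2, x2, x2 ** 2])
groups = {"x1": [1, 2], "x2": [3, 4]}
Q = empirical_gram_schmidt(Phi)
coeffs = Q.T @ y / len(y)                 # projections onto the orthonormal basis
total_var = np.sum(coeffs[1:] ** 2)       # the constant term carries the mean
for name, idx in groups.items():
    print(name, np.sum(coeffs[idx] ** 2) / total_var)
```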
Model-based Exception Mining for Object-Relational Data ; This paper is based on a previous publication [29]. Our work extends exception mining and outlier detection to the case of object-relational data. Object-relational data represent a complex heterogeneous network [12], which comprises objects of different types, links among these objects, also of different types, and attributes of these links. This special structure prohibits a direct vectorial data representation. We follow the well-established Exceptional Model Mining framework, which leverages machine learning models for exception mining: an object is exceptional to the extent that a model learned for the object's data differs from a model learned for the general population. Exceptional objects can be viewed as outliers. We apply state-of-the-art probabilistic modelling techniques for object-relational data that construct a graphical model (Bayesian network) which compactly represents probabilistic associations in the data. A new metric, derived from the learned object-relational model, quantifies the extent to which the individual association pattern of a potential outlier deviates from that of the whole population. The metric is based on the likelihood ratio of two parameter vectors: one that represents the population associations, and another that represents the individual associations. Our method is validated on synthetic datasets and on real-world datasets about soccer matches and movies. Compared to baseline methods, our novel transformed likelihood ratio achieved the best detection accuracy on all datasets.
Zero-shot Domain Adaptation without Domain Semantic Descriptors ; We propose a method to infer domain-specific models, such as classifiers, for unseen domains, from which no data are given in the training phase, without domain semantic descriptors. When training and test distributions are different, standard supervised learning methods perform poorly. Zero-shot domain adaptation attempts to alleviate this problem by inferring models that generalize well to unseen domains by using training data from multiple source domains. Existing methods use observed semantic descriptors characterizing domains, such as time information, to infer the domain-specific models for the unseen domains. However, it cannot always be assumed that such metadata are available in real-world applications. The proposed method can infer appropriate domain-specific models without any semantic descriptors by introducing the concept of latent domain vectors, which are latent representations of the domains used for inferring the models. The latent domain vector for an unseen domain is inferred from the set of feature vectors in the corresponding domain, which is given in the testing phase. The domain-specific models consist of two components: the first extracts a representation of a feature vector to be predicted, and the second infers the model parameters given the latent domain vector. The posterior distributions of the latent domain vectors and the domain-specific models are parametrized by neural networks and are optimized by maximizing the variational lower bound using stochastic gradient descent. The effectiveness of the proposed method was demonstrated through experiments using one regression and two classification tasks.
Exploring interacting holographic dark energy in a perturbed universe with the parameterized post-Friedmann approach ; The model of holographic dark energy in which dark energy interacts with dark matter is investigated in this paper. In particular, we consider the interacting holographic dark energy model in the context of a perturbed universe, which has not previously been investigated in the literature. To avoid the large-scale instability problem in interacting dark energy cosmology, we employ the generalized version of the parameterized post-Friedmann approach to treat the dark energy perturbations in the model. We use current observational data to constrain the model. Since cosmological perturbations are considered in the model, we can employ the redshift-space distortion (RSD) measurements to constrain the model, in addition to the measurements of the expansion history, which had also not been done in the literature. We find that, for both the cases with Q = β H ρ_c and Q = β H_0 ρ_c, the interacting holographic dark energy model is more favored by the current data than the holographic dark energy model without interaction. It is also found that, with the help of the RSD data, a positive coupling β can be detected at the 2.95σ statistical significance for the case of Q = β H_0 ρ_c.
Automated Data Slicing for Model Validation: A Big Data-AI Integration Approach ; As machine learning systems become democratized, it becomes increasingly important to help users easily debug their models. However, current data tools are still primitive when it comes to helping users trace model performance problems all the way down to the data. We focus on the particular problem of slicing data to identify subsets of the validation data where the model performs poorly. This is an important problem in model validation because the overall model performance can fail to reflect that of smaller subsets, and slicing allows users to analyze model performance at a more granular level. Unlike general techniques (e.g., clustering) that can find arbitrary slices, our goal is to find interpretable slices, which are easier to act on than arbitrary subsets, that are both problematic and large. We propose Slice Finder, an interactive framework for identifying such slices using statistical techniques. Applications include diagnosing model fairness and fraud detection, where identifying slices that are interpretable to humans is crucial. This research is part of a larger trend of Big Data and Artificial Intelligence (AI) integration and opens many opportunities for new research.
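A heavily simplified sketch of the underlying idea is shown below: scan single-feature slices of the validation data and flag those whose per-example loss is significantly higher than on the rest. This is not the Slice Finder system itself (which handles conjunctions of predicates and false-discovery control); all names and the toy data are illustrative.

```python
import numpy as np
from scipy import stats

def find_problem_slices(X_cat, losses, min_size=30, alpha=0.05):
    """Scan single-feature slices (feature == value) and flag those whose
    per-example loss is significantly higher than on the rest of the data.
    X_cat: (n, d) categorical validation features; losses: (n,) model losses."""
    flagged = []
    for j in range(X_cat.shape[1]):
        for val in np.unique(X_cat[:, j]):
            mask = X_cat[:, j] == val
            if mask.sum() < min_size:
                continue
            t, p = stats.ttest_ind(losses[mask], losses[~mask], equal_var=False)
            effect = losses[mask].mean() - losses[~mask].mean()
            if effect > 0 and p < alpha:
                flagged.append((j, val, int(mask.sum()), effect, p))
    # higher-effect, larger slices first
    return sorted(flagged, key=lambda r: (-r[3], -r[2]))

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(1000, 3))
loss = rng.exponential(0.5, 1000) + 0.8 * (X[:, 1] == 2)   # the model struggles on this slice
for row in find_problem_slices(X, loss)[:3]:
    print(row)
```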
A Bayesian method for combining theoretical and simulated covariance matrices for large-scale structure surveys ; Accurate and precise covariance matrices will be important in enabling planned cosmological surveys to detect new physics. Standard methods imply either the need for many N-body simulations in order to obtain an accurate estimate, or a precise theoretical model. We combine these approaches by constructing a likelihood function conditioned on simulated and theoretical covariances, consistently propagating noise from the finite number of simulations and uncertainty in the theoretical model itself using an informative inverse-Wishart prior. Unlike standard methods, our approach allows the required number of simulations to be less than the number of summary statistics. We recover the linear 'shrinkage' covariance estimator in the context of a Bayesian data model, and test our marginal likelihood on simulated mock power spectrum estimates. We conduct a thorough investigation into the impact of prior confidence in different choices of covariance models on the quality of model fits and parameter variances. In a simplified setting we find that the number of simulations required can be reduced if one is willing to accept a mild degradation in the quality of model fits, finding that even weakly informative priors can help to reduce the simulation requirements. We identify the correlation matrix of the summary statistics as a key quantity requiring careful modelling. Our approach can easily be generalized to any covariance model or set of summary statistics, and elucidates the role of hybrid estimators in cosmological inference.
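Below is a minimal sketch of the linear 'shrinkage' estimator that the abstract says is recovered by the Bayesian treatment: a blend of a noisy simulation-based covariance and a theoretical model covariance. The blending weight, toy dimensions, and covariances are assumptions for illustration; the paper's full approach instead sets the weighting through an inverse-Wishart prior and a marginal likelihood.

```python
import numpy as np

def shrinkage_covariance(sim_samples, C_theory, lam):
    """Linear shrinkage combination of a simulation-based covariance estimate
    and a theoretical model covariance:
        C_hat = lam * C_theory + (1 - lam) * C_sim
    where lam encodes confidence in the theoretical model."""
    C_sim = np.cov(sim_samples, rowvar=False)
    return lam * C_theory + (1.0 - lam) * C_sim

# toy setup: p summary statistics, fewer simulations than statistics
rng = np.random.default_rng(0)
p, n_sims = 20, 10
true_cov = np.diag(np.linspace(1.0, 2.0, p))
sims = rng.multivariate_normal(np.zeros(p), true_cov, size=n_sims)
C_theory = np.eye(p)                              # deliberately imperfect model
C_hat = shrinkage_covariance(sims, C_theory, lam=0.7)
# with n_sims < p the pure sample covariance is singular, but the blend is usable
print(np.linalg.cond(np.cov(sims, rowvar=False)), np.linalg.cond(C_hat))
```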
Nested Covariance Determinants and Restricted Trek Separation in Gaussian Graphical Models ; Directed graphical models specify noisy functional relationships among a collection of random variables. In the Gaussian case, each such model corresponds to a semi-algebraic set of positive definite covariance matrices. The set is given via a parametrization, and much work has gone into obtaining an implicit description in terms of polynomial inequalities. Implicit descriptions shed light on problems such as parameter identification, model equivalence, and constraint-based statistical inference. For models given by directed acyclic graphs, which represent settings where all relevant variables are observed, there is a complete theory: all conditional independence relations can be found via graphical d-separation and are sufficient for an implicit description. The situation is far more complicated, however, when some of the variables are hidden (in other words, unobserved or latent). We consider models associated to mixed graphs that capture the effects of hidden variables through correlated error terms. The notion of trek separation explains when the covariance matrix in such a model has submatrices of low rank and generalizes d-separation. However, in many cases, such as the infamous Verma graph, the polynomials defining the graphical model are not determinantal, and hence cannot be explained by d-separation or trek separation. In this paper, we show that these constraints often correspond to the vanishing of nested determinants and can be graphically explained by a notion of restricted trek separation.
Joint Representation and Truncated Inference Learning for Correlation Filter based Tracking ; Correlation filter (CF) based trackers generally include two modules, i.e., feature representation and online model adaptation. In existing offline deep learning models for CF trackers, the model adaptation is usually either abandoned or given a closed-form solution, to make it feasible to learn the deep representation in an end-to-end manner. However, such solutions fail to exploit the advances in CF models and cannot achieve competitive accuracy in comparison with state-of-the-art CF trackers. In this paper, we investigate the joint learning of deep representation and model adaptation, where an updater network is introduced for better tracking on the future frame by taking the current frame representation, the tracking result, and the last CF tracker as input. By modeling the representor as a convolutional neural network (CNN), we truncate the alternating direction method of multipliers (ADMM) and interpret it as a deep network of the updater, resulting in our model for learning representation and truncated inference (RTINet). Experiments demonstrate that our RTINet tracker achieves favorable tracking accuracy against state-of-the-art trackers, and its rapid version can run at a real-time speed of 24 fps. The code and pre-trained models will be publicly available at https://github.com/tourmaline612/RTINet.
Kernel Density Estimation-Based Markov Models with Hidden State ; We consider Markov models of stochastic processes where the next-step conditional distribution is defined by a kernel density estimator (KDE), similar to Markov forecast densities and certain time-series bootstrap schemes. The KDE Markov models (KDE-MMs) we discuss are nonlinear, nonparametric, fully probabilistic representations of stationary processes, based on techniques with strong asymptotic consistency properties. The models generate new data by concatenating points from the training data sequences in a context-sensitive manner, together with some additive driving noise. We present novel EM-type maximum-likelihood algorithms for data-driven bandwidth selection in KDE-MMs. Additionally, we augment the KDE-MMs with a hidden state, yielding a new model class, KDE-HMMs. The added state variable captures non-Markovian long memory and signal structure (e.g., slow oscillations), complementing the short-range dependences described by the Markov process. The resulting joint Markov and hidden-Markov structure is appealing for modelling complex real-world processes such as speech signals. We present guaranteed-ascent EM-update equations for model parameters in the case of Gaussian kernels, as well as relaxed update formulas that greatly accelerate training in practice. Experiments demonstrate increased held-out set probability for KDE-HMMs on several challenging natural and synthetic data series, compared to traditional techniques such as autoregressive models, HMMs, and their combinations.
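The sketch below is a minimal first-order, scalar KDE Markov model with Gaussian kernels: the next-step conditional density is a kernel-weighted mixture over observed transitions, and generation concatenates training successors plus additive driving noise. It is only an illustration of the basic mechanism; it omits the paper's EM-based bandwidth selection and the hidden state of the KDE-HMM, and the class/function names are made up.

```python
import numpy as np

class KDEMarkovModel:
    """Minimal KDE Markov model of a stationary scalar process."""
    def __init__(self, series, bandwidth=0.3):
        self.prev = np.asarray(series[:-1], dtype=float)
        self.next = np.asarray(series[1:], dtype=float)
        self.h = bandwidth

    def sample_next(self, x, rng):
        # kernel weights measure how close x is to each observed predecessor
        w = np.exp(-0.5 * ((x - self.prev) / self.h) ** 2)
        w /= w.sum()
        successor = rng.choice(self.next, p=w)       # context-sensitive pick
        return successor + rng.normal(0.0, self.h)   # additive driving noise

    def generate(self, x0, n, seed=0):
        rng, x, out = np.random.default_rng(seed), x0, []
        for _ in range(n):
            x = self.sample_next(x, rng)
            out.append(x)
        return np.array(out)

# toy training series: a noisy slow oscillation
t = np.arange(2000)
train = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.default_rng(1).normal(size=2000)
model = KDEMarkovModel(train, bandwidth=0.2)
print(model.generate(x0=0.0, n=10))
```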
The Efficiency of Geometric Samplers for Exoplanet Transit Timing Variation Models ; Transit timing variations (TTVs) are a valuable tool for determining the masses and orbits of transiting planets in multi-planet systems. TTVs can be readily modeled given knowledge of the interacting planets' orbital configurations and planet-star mass ratios, but such models are highly nonlinear and difficult to invert. Markov chain Monte Carlo (MCMC) methods are often used to explore the posterior distribution of the model parameters, but, due to the high correlations between parameters, nonlinearity, and potential multimodality in the posterior, many samplers perform very inefficiently. Therefore, we assess the performance of several MCMC samplers that use varying degrees of geometric information about the target distribution. We generate synthetic datasets from multiple models, including the TTVFaster model and a simple sinusoidal model, and test the efficiencies of various MCMC samplers. We find that sampling efficiency can be greatly improved for all models by sampling from a parameter space transformed using an estimate of the covariance and means of the target distribution. No single sampler performs best for all datasets, but several samplers, such as Differential Evolution Monte Carlo and Geometric adaptive Monte Carlo, show consistently efficient performance. For datasets with near-Gaussian posteriors, Hamiltonian Monte Carlo samplers with 2 or 3 leapfrog steps obtained the highest efficiencies. Based on differences in effective sample sizes per unit time, we show that the right choice of sampler can improve sampling efficiency by several orders of magnitude.
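The transformation mentioned here (sampling in a space built from an estimate of the target's mean and covariance) can be illustrated with a simple whitening map, sketched below on a toy correlated "posterior". The pilot samples and strong correlation stand in for a real TTV fit; the helper names are hypothetical.

```python
import numpy as np

def whitening_transform(pilot_samples):
    """Estimate the mean and covariance of the target distribution from a
    pilot run and return maps between the physical and whitened spaces,
    where the whitened target is roughly isotropic and easier to sample."""
    mu = pilot_samples.mean(axis=0)
    L = np.linalg.cholesky(np.cov(pilot_samples, rowvar=False))
    to_white = lambda x: np.linalg.solve(L, (np.atleast_2d(x) - mu).T).T
    from_white = lambda z: mu + np.atleast_2d(z) @ L.T
    return to_white, from_white

# toy "posterior" with the strong parameter correlations typical of TTV fits
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.97], [0.97, 1.0]])
pilot = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
to_white, from_white = whitening_transform(pilot)
print(np.cov(to_white(pilot), rowvar=False).round(2))   # ~ identity covariance
```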
Network models in neuroscience ; From interacting cellular components to networks of neurons and neural systems, interconnected units comprise a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from the development of new mathematical tools that can relate a system's architecture to its dynamics and function. These tools, known collectively as network science, have been used with increasing success to build models of neural systems across spatial scales and species. Here we discuss the nature of network models in neuroscience. We begin with a review of model theory from a philosophical perspective to inform our view of networks as models of complex systems in general, and of the brain in particular. We then summarize the types of models that are frequently studied in network neuroscience along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. We then consider ways to validate these models, focusing on approaches that perturb a system to probe its function. We close with a description of important frontiers in the construction of network models and their relevance for understanding increasingly complex functions of neural systems.
Functional Adversarial Attacks ; We propose functional adversarial attacks, a novel class of threat models for crafting adversarial examples to fool machine learning models. Unlike a standard ℓ_p-ball threat model, a functional adversarial threat model allows only a single function to be used to perturb input features to produce an adversarial example. For example, a functional adversarial attack applied to the colors of an image can change all red pixels simultaneously to light red. Such global, uniform changes in images can be less perceptible than perturbing pixels of the image individually. For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments. We show that functional threat models can be combined with existing additive ℓ_p threat models to generate stronger threat models that allow both small, individual perturbations and large, uniform changes to an input. Moreover, we prove that such combinations encompass perturbations that would not be allowed in either constituent threat model. In practice, ReColorAdv can significantly reduce the accuracy of a ResNet-32 trained on CIFAR-10. Furthermore, to the best of our knowledge, combining ReColorAdv with other attacks leads to the strongest existing attack, even after adversarial training. An implementation of ReColorAdv is available at https://github.com/cassidylaidlaw/ReColorAdv.
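To illustrate the threat model (not the ReColorAdv attack itself, which optimizes the perturbing function adversarially), the sketch below applies one single function to the red channel of every pixel of every image, in contrast to independent per-pixel perturbations. The brightening function and toy batch are illustrative assumptions.

```python
import numpy as np

def functional_color_perturbation(images, f_red):
    """Apply a single function f_red to the red channel of every pixel of
    every image, instead of perturbing pixels independently as in an
    l_p-ball attack. images: (n, H, W, 3) float array in [0, 1]."""
    perturbed = images.copy()
    perturbed[..., 0] = np.clip(f_red(perturbed[..., 0]), 0.0, 1.0)
    return perturbed

# example of a global, uniform change: brighten red everywhere by the same amount
brighten_red = lambda r: r + 0.08
batch = np.random.default_rng(0).uniform(0, 1, size=(4, 32, 32, 3))
adv = functional_color_perturbation(batch, brighten_red)
# the shift is large in aggregate but identical for all red values (up to clipping)
print(np.abs(adv - batch).max(), np.abs(adv - batch)[..., 0].std())
```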