An Image-Based Fluid Surface Pattern Model ; This work aims at generating a model of the ocean surface and its dynamics from one or more video cameras. The idea is to model wave patterns from video as a first step towards a larger system of photogrammetric monitoring of marine conditions for use on offshore oil drilling platforms. The first part of the proposed approach consists in reducing the dimensionality of sensor data made up of the many pixels of each frame of the input video streams. This enables finding a small number of the most relevant parameters to model the temporal dataset, yielding an efficient data-driven model of the evolution of the observed surface. The second part proposes stochastic modeling to better capture the patterns embedded in the data. One can then draw samples from the final model, which are expected to simulate the behavior of previously observed flow, in order to determine conditions that match new observations. In this paper we focus on proposing and discussing the overall approach and on comparing two different techniques for dimensionality reduction in the first stage: principal component analysis and diffusion maps. Work is underway on the second stage of constructing better stochastic models of fluid surface dynamics as proposed here.
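As an illustration of the first-stage dimensionality reduction discussed in the abstract above, the following minimal Python sketch applies principal component analysis to flattened video frames; the frame dimensions, the number of retained components, and the random frames standing in for camera data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for a video stream: T frames of H x W grayscale pixels.
T, H, W = 200, 60, 80
rng = np.random.default_rng(0)
frames = rng.random((T, H, W))          # replace with real frames from the camera

X = frames.reshape(T, -1)               # one row per frame, one column per pixel
pca = PCA(n_components=10)              # keep a small number of modes
coeffs = pca.fit_transform(X)           # (T, 10) time series of modal coefficients

# The low-dimensional trajectory 'coeffs' is what a stochastic model of the
# surface dynamics would be fitted to in the second stage.
print(pca.explained_variance_ratio_.sum())
```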
Renormalization in supersymmetric models ; There are reasons to believe that the Standard Model is only an effective theory, with new physics lying beyond it. Supersymmetric extensions are one possibility: they address some of the Standard Model's shortcomings, such as the instability of the Higgs boson mass under radiative corrections. In this thesis, some topics related to the renormalization of supersymmetric models are analyzed. One of them is the automatic computation of the Lagrangian and the renormalization group equations of these models, which is a hard and error-prone process if carried out by hand. The generic renormalization group equations themselves are extended so as to include those models which have more than a single abelian gauge factor group. Such situations can occur in grand unified theories, for example. For a wide range of SO(10)-inspired supersymmetric models, we also show that the renormalization group imprints on sparticle masses some information on the high-energy behavior of the models. Finally, in some cases these theories introduce charged lepton flavor violating interactions, which can change the ratio $\Gamma(K \rightarrow e\nu)/\Gamma(K \rightarrow \mu\nu)$. In light of experimental bounds on other observables, our analysis shows that any change over the Standard Model prediction must be smaller than the current experimental sensitivity on this observable.
A latent factor model with a mixture of sparse and dense factors to model gene expression data with confounding effects ; One important problem in genome science is to determine sets of co-regulated genes based on measurements of gene expression levels across samples, where the quantification of expression levels includes substantial technical and biological noise. To address this problem, we developed a Bayesian sparse latent factor model that uses a three-parameter beta prior to flexibly model shrinkage in the loading matrix. By applying three layers of shrinkage to the loading matrix (global, factor-specific, and element-wise), this model has nonparametric properties in that it estimates the appropriate number of factors from the data. We added a two-component mixture to model each factor loading as being generated from either a sparse or a dense mixture component; this allows dense factors that capture confounding noise, and sparse factors that capture local gene interactions. We developed two statistics to quantify the stability of the recovered matrices for both sparse and dense matrices. We tested our model on simulated data and found that we successfully recovered the true latent structure as compared to related models. We applied our model to a large gene expression study and found that we recovered known covariates and small groups of co-regulated genes. We validated these gene subsets by testing for associations between genotype data and these latent factors, and we found a substantial number of biologically important genetic regulators for the recovered gene subsets.
Modeling stochastic phenotype switching and bet-hedging in bacteria: stochastic nonlinear dynamics and critical state identification ; Fluctuating environments pose tremendous challenges to bacterial populations. It is observed in numerous bacterial species that individual cells can stochastically switch among multiple phenotypes for the population to survive in rapidly changing environments. This kind of phenotypic heterogeneity with stochastic phenotype switching is generally understood to be an adaptive bet-hedging strategy. Mathematical models are essential to gain a deeper insight into the principle behind bet-hedging and the pattern behind experimental data. Traditional deterministic models cannot provide a correct description of stochastic phenotype switching and bet-hedging, and traditional Markov chain models at the cellular level fail to explain their underlying molecular mechanisms. In this paper, we propose a nonlinear stochastic model of multistable bacterial systems at the molecular level. It turns out that our model not only provides a clear description of stochastic phenotype switching and bet-hedging within isogenic bacterial populations, but also provides a deeper insight into the analysis of multidimensional experimental data. Moreover, we use some deep mathematical theories to show that our stochastic model and traditional Markov chain models are essentially consistent and reflect the dynamic behavior of the bacterial system at two different time scales. In addition, we provide a quantitative characterization of the critical state of multistable bacterial systems and develop an effective data-driven method to identify the critical state without resorting to specific mathematical models.
First experimental constraints on the disformally coupled Galileon model ; The Galileon model is a modified gravity model that can explain the late-time accelerated expansion of the Universe. In a previous work, we derived experimental constraints on the Galileon model with no explicit coupling to matter and showed that this model agrees with the most recent cosmological data. In the context of braneworld constructions or massive gravity, the Galileon model exhibits a disformal coupling to matter, which we study in this paper. After comparing our constraints on the uncoupled model with recent studies, we extend the analysis framework to the disformally coupled Galileon model and derive the first experimental constraints on that coupling, using precise measurements of cosmological distances and the growth rate of cosmic structures. In the uncoupled case, with updated data, we still observe a low tension between the constraints set by growth data and those from distances. In the disformally coupled Galileon model, we obtain better agreement with data and favour a nonzero disformal coupling to matter at the 2.5$\sigma$ level. This gives an interesting hint of the possible braneworld origin of the Galileon theory.
A Low-order Model of Water Vapor, Clouds, and Thermal Emission for Tidally Locked Terrestrial Planets ; In the spirit of minimal modeling of complex systems, we develop an idealized two-column model to investigate the climate of tidally locked terrestrial planets with Earth-like atmospheres in the habitable zone of M-dwarf stars. The model is able to approximate the fundamental features of the climate obtained from three-dimensional (3D) atmospheric general circulation model (GCM) simulations. One important reason for the two-column model's success is that it reproduces the high cloud albedo of the GCM simulations, which reduces the planet's temperature and delays the onset of a runaway greenhouse state. The two-column model also clearly illustrates a secondary mechanism for determining the climate: the nightside acts as a "radiator fin" through which infrared energy can be lost to space easily. This radiator fin is maintained by a temperature inversion and dry air on the nightside, and plays a similar role to the subtropics on modern Earth. Since 1D radiative-convective models cannot capture the effects of the cloud albedo and radiator fin, they are systematically biased towards a narrower habitable zone. We also show that cloud parameters are most important for determining the day-night thermal emission contrast in the two-column model, which decreases and eventually reverses as the stellar flux increases. This reversal is important because it could be detected by future extrasolar planet characterization missions, which would suggest that the planet has Earth-like water clouds and is potentially habitable.
Grid-based seismic modelling at high and low signal-to-noise ratios: HD 181420 and HD 175272 ; Context: Recently, the CoRoT target HD 175272 (F5V), which shows a weak signal of solar-like oscillations, was modelled through a differential asteroseismic analysis (Ozel et al. 2013) relative to a seismically similar star, HD 181420 (F2V), for which there is a clear signature of solar-like oscillations. The results provided by Ozel et al. (2013) indicate the possibility of HD 175272 having sub-solar mass, while being of the order of 1000 K hotter than the Sun. This seems unphysical (standard stellar evolution theory generally does not predict solar-metallicity stars of sub-solar mass to be hotter than about 6000 K) and calls for a reanalysis of this star. Aims: We aim to compare the performance of differential asteroseismic analysis with that of grid-based modelling. Methods: We use two sets of stellar model grids and two grid-fitting methods to model HD 175272 and HD 181420. Results: We find that we are able to model both stars with parameters that are both mutually compatible and comparable with other modelling efforts. Hence, with modest spectroscopic and asteroseismic inputs, we obtain reasonable estimates of stellar parameters. In the case of HD 175272, the uncertainties of the stellar parameters from our grid-based modelling are smaller, and hence more physical, than those reported in the differential analysis. Conclusions: Grid-based modelling provides more precise and more realistic results than obtained with differential seismology.
On deformations of AdS$_n \times$ S$^n$ supercosets ; We study the deformed AdS$_5 \times$ S$^5$ supercoset model of arXiv:1309.5850 which depends on one parameter $\kappa$ and has classical quantum group symmetry. We confirm the conjecture that in the maximal deformation limit $\kappa \to \infty$ this model is T-dual to a flipped double Wick rotation of the target space AdS$_5 \times$ S$^5$, i.e. dS$_5 \times$ H$^5$ space supported by an imaginary 5-form flux. In the imaginary deformation limit, $\kappa \to i$, the corresponding target space metric is of a pp-wave type and thus the resulting light-cone gauge S-matrix becomes relativistically invariant. Omitting non-unitary contributions of imaginary WZ terms, we find that this tree-level S-matrix is equivalent to that of the generalized sine-Gordon model representing the Pohlmeyer reduction of the undeformed AdS$_5 \times$ S$^5$ superstring model. We also study in some detail similar deformations of the AdS$_3 \times$ S$^3$ and AdS$_2 \times$ S$^2$ supercosets. The bosonic part of the deformed AdS$_3 \times$ S$^3$ model happens to be equivalent to the symmetric case of the sum of the Fateev integrable deformation of the SL(2) and SU(2) principal chiral models, while in the AdS$_2 \times$ S$^2$ case the role of the Fateev model is played by the 2d sausage model. The $\kappa \to i$ limits are again directly related to the Pohlmeyer reductions of the corresponding AdS$_n \times$ S$^n$ supercosets: the (2,2) super sine-Gordon model and its complex sine-Gordon analog. We also discuss possible deformations of AdS$_3 \times$ S$^3$ with more than one parameter.
Monte Carlo Null Models for Genomic Data ; As increasingly complex hypothesis-testing scenarios are considered in many scientific fields, analytic derivation of null distributions is often out of reach. To the rescue comes Monte Carlo testing, which may appear deceptively simple: as long as you can sample test statistics under the null hypothesis, the p-value is just the proportion of sampled test statistics that exceed the observed test statistic. Sampling test statistics is often simple once you have a Monte Carlo null model for your data, and defining some form of randomization procedure is also, in many cases, relatively straightforward. However, there may be several possible choices of a randomization null model for the data and no clear-cut criteria for choosing among them. Obviously, different null models may lead to very different p-values, and a very low p-value may thus occur due to the inadequacy of the chosen null model. It is preferable to use assumptions about the underlying random data generation process to guide selection of a null model. In many cases, we may order the null models by increasing preservation of the data characteristics, and we argue in this paper that this ordering in most cases gives increasing p-values, that is, lower significance. We denote this as the null complexity principle. The principle gives a better understanding of the different null models and may guide the choice between the different models.
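The Monte Carlo testing recipe summarized above can be made concrete with a short sketch; the permutation null used here is only one possible choice of null model (which is exactly the point of the abstract), and the data are toy numbers.

```python
import numpy as np

def monte_carlo_pvalue(observed_stat, sample_null_stat, n_samples=9999, rng=None):
    """p-value = proportion of null-sampled statistics >= the observed one.

    `sample_null_stat` draws one test statistic under the chosen null model;
    different null models will in general give different p-values.
    """
    rng = np.random.default_rng(rng)
    null_stats = np.array([sample_null_stat(rng) for _ in range(n_samples)])
    # +1 in numerator and denominator gives the standard valid Monte Carlo p-value
    return (1 + np.sum(null_stats >= observed_stat)) / (1 + n_samples)

# Example: difference in group means under a permutation null (one possible null model).
x = np.array([2.1, 2.5, 3.0, 2.8]); y = np.array([1.9, 2.0, 2.2, 2.4])
obs = x.mean() - y.mean()
pooled = np.concatenate([x, y])

def null_stat(rng):
    perm = rng.permutation(pooled)
    return perm[:len(x)].mean() - perm[len(x):].mean()

print(monte_carlo_pvalue(obs, null_stat))
```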
Selectron NLSP in Gauge Mediation ; We discuss gauge mediation models in which the smuon and the selectron are mass-degenerate co-NLSPs, which we, for brevity, refer to as selectron NLSP. In these models, the stau, as well as the other superpartners, are parametrically heavier than the NLSP. We start by taking a bottom-up perspective and investigate the conditions under which selectron NLSP spectra can be realized in the MSSM. We then give a complete characterization of gauge mediation models realizing such spectra at low energies. The splitting between the slepton families is induced radiatively by the usual hierarchies in the Standard Model Yukawa couplings and hence, no new sources of flavour misalignment are introduced. We construct explicit weakly coupled messenger models which give rise to selectron NLSP, while accommodating a 126 GeV MSSM Higgs mass, both within the framework of General Gauge Mediation and in extensions where direct couplings between the messengers and the Higgs fields are present. In the latter class of models, large A-terms and relatively light stops can be achieved. The collider signatures of these models typically involve multilepton final states. We discuss the relevant LHC bounds and provide examples of models where the decay of the NLSP selectron is prompt, displaced or long-lived. The prompt case can be viewed as an ultraviolet completion of a simplified model recently considered by the CMS collaboration.
The ROMES method for statistical modeling of reduced-order-model error ; This work presents a technique for statistically modeling errors introduced by reduced-order models. The method employs Gaussian-process regression to construct a mapping from a small number of computationally inexpensive 'error indicators' to a distribution over the true error. The variance of this distribution can be interpreted as the epistemic uncertainty introduced by the reduced-order model. To model normed errors, the method employs existing rigorous error bounds and residual norms as indicators; numerical experiments show that the method leads to a near-optimal expected effectivity, in contrast to typical error bounds. To model errors in general outputs, the method uses dual-weighted residuals, which are amenable to uncertainty control, as indicators. Experiments illustrate that correcting the reduced-order-model output with this surrogate can improve prediction accuracy by an order of magnitude; this contrasts with existing 'multifidelity correction' approaches, which often fail for reduced-order models and suffer from the curse of dimensionality. The proposed error surrogates also lead to a notion of 'probabilistic rigor', i.e., the surrogate bounds the error with specified probability.
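A minimal sketch of the kind of mapping described above, from a cheap error indicator to a distribution over the true reduced-order-model error via Gaussian-process regression; the synthetic indicator/error pairs and the log-log transformation are illustrative assumptions, not details taken from the ROMES paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Synthetic training pairs: (error indicator, true ROM error). In practice the
# indicator would be a residual norm or dual-weighted residual, and the error
# would come from a handful of full-order solves.
indicator = rng.uniform(0.01, 1.0, size=(30, 1))
true_error = 2.0 * indicator[:, 0] * (1 + 0.1 * rng.standard_normal(30))

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(np.log(indicator), np.log(true_error))   # log-log often suits normed errors

# Predictive mean and std give a distribution over the true error: the std plays
# the role of the epistemic uncertainty introduced by the reduced-order model.
mean, std = gp.predict(np.log([[0.3]]), return_std=True)
print(np.exp(mean), std)
```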
Influence Spread in Social Networks: A Study via a Fluid Limit of the Linear Threshold Model ; Threshold-based models have been widely used in characterizing collective behavior on social networks. An individual's threshold indicates the minimum level of influence that must be exerted, by other members of the population engaged in some activity, before the individual will join the activity. In this work, we begin with a homogeneous version of the Linear Threshold model proposed by Kempe et al. in the context of viral marketing, and generalize this model to arbitrary threshold distributions. We show that the evolution can be modeled as a discrete time Markov chain, and, by using a certain scaling, we obtain a fluid limit that provides an ordinary differential equation (o.d.e.) model. We find that the threshold distribution appears in the o.d.e. via its hazard rate function. We demonstrate the accuracy of the o.d.e. approximation and derive explicit expressions for the trajectory of influence under the uniform threshold distribution. Also, for an exponentially distributed threshold, we show that the fluid dynamics are equivalent to the well-known SIR model in epidemiology. We also numerically study how other hazard functions, obtained from the Weibull and log-logistic distributions, provide qualitatively different characteristics of the influence evolution, compared to traditional epidemic models, even in a homogeneous setting. We finally show how the model can be extended to a setting with multiple communities and conclude with possible future directions.
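To illustrate how a threshold distribution can enter a fluid-limit o.d.e. through its hazard rate, the sketch below integrates a deliberately simplified logistic-type equation driven by the hazard function; the right-hand side is a stand-in chosen for illustration only and is not the o.d.e. derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hazard rate h(x) = f(x) / (1 - F(x)) for two threshold distributions.
def hazard_exponential(x, lam=1.0):
    return lam                                          # constant hazard

def hazard_weibull(x, k=2.0, lam=1.0):
    return (k / lam) * (max(x, 0.0) / lam) ** (k - 1)   # increasing hazard for k > 1

# Hypothetical o.d.e. for the fraction of active nodes y(t), driven by the hazard
# rate evaluated at the accumulated influence; a simplified placeholder used only
# to show how the hazard function enters the dynamics.
def rhs(t, y, hazard):
    active = y[0]
    return [hazard(active) * active * (1.0 - active)]

for name, h in [("exponential", hazard_exponential), ("weibull", hazard_weibull)]:
    sol = solve_ivp(rhs, (0, 10), [0.01], args=(h,))
    print(name, sol.y[0, -1])
```

With a constant hazard (exponential threshold) the stand-in equation reduces to logistic, SIR-like growth, which mirrors the equivalence noted in the abstract.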
Interleaved Factorial Non-Homogeneous Hidden Markov Models for Energy Disaggregation ; To reduce energy demand in households it is useful to know which electrical appliances are in use at what times. Monitoring individual appliances is costly and intrusive, whereas data on overall household electricity use is more easily obtained. In this paper, we consider the energy disaggregation problem, where a household's electricity consumption is disaggregated into the component appliances. The factorial hidden Markov model (FHMM) is a natural model to fit this data. We enhance this generic model by introducing two constraints on the state sequence of the FHMM. The first is to use a non-homogeneous Markov chain, modelling how appliance usage varies over the day, and the other is to enforce that at most one chain changes state at each time step. This yields a new model which we call the interleaved factorial non-homogeneous hidden Markov model (IFNHMM). We evaluated the ability of this model to perform disaggregation in an ultra-low frequency setting, over a data set of 251 English households. In this new setting, the IFNHMM outperforms the FHMM in terms of recovering the energy used by the component appliances, owing to the stronger constraints imposed on the states of the hidden Markov chains. Interestingly, we find that the variability in model performance across households is significant, underscoring the importance of using larger scale data in the disaggregation problem.
Hadron Production Model Developments and Benchmarking in the 0.7-12 GeV Energy Region ; Driven by the needs of the intensity frontier projects with their megawatt beams, e.g., ESS, FAIR and Project X, and their experiments, the event generators of the MARS15 code have been recently improved. After thorough analysis and benchmarking against data, including the newest ones by the HARP collaboration, both the exclusive and inclusive particle production models were further developed in the projectile energy region of 0.7 to 12 GeV, which is crucial for the above projects but difficult from a theoretical standpoint. At these energies, modelling of prompt particle production in nucleon-nucleon and pion-nucleon inelastic reactions is now based on a combination of phase-space and isobar models. Other reactions are still modeled in the framework of the Quark-Gluon String Model. Pion, kaon and strange particle production and propagation in nuclear media are improved. For the alternative inclusive mode, experimental data on large-angle (above 20 degrees) pion production in hadron-nucleus interactions are parameterized in a broad energy range using a two-source model. It is mixed and matched with the native MARS model that successfully describes low-angle pion production data. Predictions of both new models are in most cases in good agreement with experimental data obtained at CERN, JINR, LANL, BNL and KEK.
Comprehensive and Macrospin-Based Magnetic Tunnel Junction Spin Torque Oscillator Model, Part I: Analytical Model of the MTJ STO ; Magnetic tunnel junction (MTJ) spin torque oscillators (STOs) have shown the potential to be used in a wide range of microwave and sensing applications. To evaluate potential uses of MTJ STO technology in various applications, an analytical model that can capture the MTJ STO's characteristics, while enabling system- and circuit-level designs, is of great importance. An analytical model based on the macrospin approximation is necessary for these designs since it allows implementation in hardware description languages. This paper presents a new macrospin-based, comprehensive and compact MTJ STO model, which can be used for various MTJ STOs to estimate the performance of MTJ STOs together with their application-specific integrated circuits. To adequately present the complete model, this paper is divided into two parts. In Part I, the analytical model is introduced and verified by comparing it against measured data of three different MTJ STOs, varying the angle and magnitude of the magnetic field, as well as the DC biasing current. The proposed analytical model is suitable for being implemented in Verilog-A and used for efficient simulations at the device, circuit and system levels. In Part II, the full Verilog-A implementation of the analytical model with accurate phase noise generation is presented and verified by simulations.
Bayesian modeling of bacterial growth for multiple populations ; Bacterial growth models are commonly used for the prediction of microbial safety and the shelf life of perishable foods. Growth is affected by several environmental factors such as temperature, acidity level and salt concentration. In this study, we develop two models to describe bacterial growth for multiple populations under both equal and different environmental conditions. First, a semi-parametric model based on the Gompertz equation is proposed. Assuming that the parameters of the Gompertz equation may vary in relation to the running conditions under which the experiment is performed, we use feed-forward neural networks to model the influence of these environmental factors on the growth parameters. Second, we propose a more general model which does not assume any underlying parametric form for the growth function. Thus, we consider a neural network as a primary growth model which includes the influencing environmental factors as inputs to the network. One of the main disadvantages of neural network models is that they are often very difficult to tune, which complicates fitting procedures. Here, we show that a simple Bayesian approach to fitting these models can be implemented via the software package WinBUGS. Our approach is illustrated using real experimental Listeria monocytogenes growth data.
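A minimal sketch of the semi-parametric idea described above: a standard (Zwietering-type) Gompertz growth curve whose parameters are produced by a small feed-forward network taking environmental conditions as input; the network weights are random placeholders rather than the Bayesian estimates obtained in the paper via WinBUGS.

```python
import numpy as np

def gompertz(t, log_n0, log_nmax, mu_max, lag):
    """A common (Zwietering) parameterisation of the Gompertz curve, log counts vs time."""
    A = log_nmax - log_n0
    return log_n0 + A * np.exp(-np.exp(mu_max * np.e * (lag - t) / A + 1.0))

# Toy feed-forward map from environmental conditions (temperature, pH, salt) to
# the Gompertz parameters. Weights are random placeholders; in a Bayesian fit
# they would carry priors and be sampled, e.g. with WinBUGS/MCMC.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def growth_params(env):                      # env = [temperature, pH, salt]
    h = np.tanh(env @ W1 + b1)
    mu_max, lag, log_nmax = h @ W2 + b2
    return abs(mu_max), abs(lag), 6.0 + abs(log_nmax)

mu, lag, log_nmax = growth_params(np.array([10.0, 6.5, 0.5]))
t = np.linspace(0, 48, 100)
print(gompertz(t, 2.0, log_nmax, mu, lag)[:3])
```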
Exponential integrators for a Markov chain model of the fast sodium channel of cardiomyocytes ; The modern Markov chain models of ionic channels in excitable membranes are numerically stiff. The popular numerical methods for these models require very small time steps to ensure stability. Our objective is to formulate and test two methods addressing this issue, so that the time step can be chosen based on accuracy rather than stability. Both proposed methods extend the Rush-Larsen technique, which was originally developed for Hodgkin-Huxley type gate models. One method, Matrix Rush-Larsen (MRL), uses a matrix reformulation of the Rush-Larsen scheme, where the matrix exponentials are calculated using precomputed tables of eigenvalues and eigenvectors. The other, the hybrid operator splitting (HOS) method, exploits asymptotic properties of a particular Markov chain model, allowing explicit analytical expressions for the substeps. We test both methods on the Clancy and Rudy (2002) $I_{\rm Na}$ Markov chain model. With precomputed tables for functions of the transmembrane voltage, both methods are comparable to the forward Euler method in accuracy and computational cost, but allow longer time steps without numerical instability. We conclude that both methods are of practical interest. MRL requires more computations than HOS, but is formulated in general terms which can be readily extended to other Markov chain channel models, whereas the utility of HOS depends on the asymptotic properties of a particular model. The significance of the methods is that they allow a considerable speed-up of large-scale computations of cardiac excitation models by increasing the time step, while maintaining acceptable accuracy and preserving numerical stability.
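The core of the Matrix Rush-Larsen idea described above can be sketched in a few lines: for a (locally constant) transition-rate matrix A, the exact state update over a step dt is the matrix exponential of A*dt applied to the state vector, evaluated here via an eigendecomposition. The 3-state generator below is a toy example, not the Clancy-Rudy sodium channel model.

```python
import numpy as np

def mrl_step(p, A, dt):
    """Exact update p(t+dt) = expm(A*dt) @ p for a Markov chain with generator A,
    computed via eigendecomposition (which could be precomputed and tabulated
    against the transmembrane voltage, as in the MRL method)."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * dt)) @ np.linalg.inv(V) @ p).real

# Toy 3-state channel model (columns sum to zero so total probability is conserved).
A = np.array([[-2.0,  1.0,  0.0],
              [ 2.0, -3.0,  4.0],
              [ 0.0,  2.0, -4.0]])
p = np.array([1.0, 0.0, 0.0])
for _ in range(10):
    p = mrl_step(p, A, dt=0.5)      # remains stable for large dt, unlike forward Euler
print(p, p.sum())
```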
Revisiting Non-Progressive Influence Models: Scalable Influence Maximization ; While influence maximization in social networks has been studied extensively in the computer science community for the last decade, the focus has been on progressive influence models, such as the independent cascade (IC) and linear threshold (LT) models, which cannot capture the reversibility of choices. In this paper, we present the Heat Conduction (HC) model, which is a non-progressive influence model with real-world interpretations. We show that HC unifies, generalizes, and extends the existing non-progressive models, such as the Voter model [1] and non-progressive LT [2]. We then prove that selecting the optimal seed set of influential nodes is NP-hard for HC, but by establishing the submodularity of influence spread, we can tackle the influence maximization problem with a scalable and provably near-optimal greedy algorithm. We are the first to present a scalable solution for influence maximization under the non-progressive LT model, as a special case of the HC model. In sharp contrast to the other greedy influence maximization methods, our fast and efficient C2GREEDY algorithm benefits from two analytically computable steps: closed-form computation for finding the influence spread as well as the greedy seed selection. Through extensive experiments on several large real and synthetic networks, we show that C2GREEDY outperforms the state-of-the-art methods, in terms of both influence spread and scalability.
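A generic sketch of greedy seed selection for a monotone submodular spread function, the structure that the near-optimality guarantee mentioned above relies on; the Monte Carlo spread estimate on a toy graph is a placeholder for, not an implementation of, the closed-form spread computation of C2GREEDY.

```python
import random

def greedy_seed_selection(nodes, spread, k):
    """Generic (1 - 1/e)-approximate greedy selection for a monotone submodular
    spread function; `spread` is any callable returning the expected spread of a seed set."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in nodes:
            if v in seeds:
                continue
            gain = spread(seeds | {v}) - spread(seeds)
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Toy spread on a small graph: expected number of reachable nodes when each edge
# is "live" independently with probability 0.3 (a simple IC-style placeholder).
edges = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

def spread(seeds, trials=200):
    total = 0
    for _ in range(trials):
        live = {u: [v for v in vs if random.random() < 0.3] for u, vs in edges.items()}
        seen, stack = set(seeds), list(seeds)
        while stack:
            u = stack.pop()
            for v in live[u]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        total += len(seen)
    return total / trials

print(greedy_seed_selection(list(edges), spread, k=2))
```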
Enhanced Mixtures of Part Model for Human Pose Estimation ; The mixture of parts model has been successfully applied to the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for the whole human body model. Mixture of parts models usually utilize a tree structure for representing relations between body parts. Tree structures facilitate training and inference of the model but cannot deal with double-counting problems, which hinders their application to 3D pose estimation. While most work targeting these problems tends to modify the tree models or the optimization target, we instead incorporate other cues from the input features. For example, in surveillance environments, human silhouettes can be extracted relatively easily, although not flawlessly. In this condition, we can combine extracted human blobs with the histogram of gradients feature, which is commonly used in mixture of parts models for training body part templates. The method can be easily extended to other candidate features under our generalized framework. We show 2D body part detection results on a publicly available dataset, the HumanEva dataset. Furthermore, a 2D-to-3D pose estimator is trained with a Gaussian process regression model, and 2D body part detections from the proposed method are fed to the estimator, thus 3D poses are predictable given new 2D body part detections. We also show results of 3D pose estimation on the HumanEva dataset.
Forecasting Turbulent Modes with Nonparametric Diffusion Models: Learning from Noisy Data ; In this paper, we apply a recently developed nonparametric modeling approach, the diffusion forecast, to predict the time evolution of Fourier modes of turbulent dynamical systems. While the diffusion forecasting method assumes the availability of a noise-free training data set observing the full state space of the dynamics, in real applications we often have only partial observations which are corrupted by noise. To alleviate these practical issues, following the theory of embedology, the diffusion model is built using the delay-embedding coordinates of the data. We show that this delay embedding biases the geometry of the data in a way which extracts the most stable component of the dynamics and reduces the influence of independent additive observation noise. The resulting diffusion forecast model approximates the semigroup solutions of the generator of the underlying dynamics in the limit of large data and when the observation noise vanishes. As in any standard forecasting problem, the forecasting skill depends crucially on the accuracy of the initial conditions. We introduce a novel Bayesian method for filtering the discrete-time noisy observations, which works with the diffusion forecast to determine the forecast initial densities. Numerically, we compare this nonparametric approach with standard stochastic parametric models on a wide range of well-studied turbulent modes, including the Lorenz-96 model in weakly chaotic to fully turbulent regimes and the barotropic modes of a quasi-geostrophic model with baroclinic instabilities. We show that when the only available data is the low-dimensional set of noisy modes that are being modeled, the diffusion forecast is indeed competitive with the perfect model.
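A minimal sketch of the delay-embedding coordinates on which the diffusion model above is built; the lag, the embedding dimension, and the noisy sinusoid standing in for an observed Fourier mode are illustrative assumptions.

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Stack lagged copies of a scalar time series into delay coordinates:
    row i is (x[i], x[i+lag], ..., x[i+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Noisy observations of a single mode (toy signal standing in for real data).
t = np.linspace(0, 40, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
Z = delay_embed(x, dim=8, lag=5)     # (n_samples, 8) embedded points
print(Z.shape)
```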
Baryon chemical potential and in-medium properties of BPS skyrmions ; We continue the investigation of thermodynamical properties of the BPS Skyrme model. In particular, we analytically compute the baryon chemical potential both in the full field theory and in a mean-field approximation. In the full field theory case, we find that the baryon chemical potential is always exactly proportional to the baryon density, for arbitrary solutions. We further find that, in the mean-field approximation, the BPS Skyrme model approaches the Walecka model in the limit of high density: their thermodynamical functions as well as the equation of state agree in this limit. This fact allows one to read off some properties of the omega meson from the BPS Skyrme action, even though the latter model is entirely based on the pionic SU(2) Skyrme field. On the other hand, at low densities, of the order of the usual nuclear matter density, the equations of state of the two models are no longer universal, such that a comparison depends on some model details. Still, the BPS Skyrme model also gives rise to nuclear saturation in this regime, leading, in fact, to an exact balance between repulsive and attractive forces. The perfect fluid aspects of the BPS Skyrme model, which, together with its BPS properties, form the basis of our results, are shown to be in close formal analogy with the Eulerian formulation of relativistic fluid dynamics. Within this analogy, the BPS Skyrme model, in general, corresponds to a non-barotropic perfect fluid.
Comparative Analysis of Dayside Reconnection Models in Global Magnetosphere Simulations ; We test and compare a number of existing models predicting the location of magnetic reconnection at Earth's dayside magnetopause for various solar wind conditions. We employ robust image processing techniques to determine the locations where each model predicts reconnection to occur. The predictions are then compared to the magnetic separators, the magnetic field lines separating different magnetic topologies. The predictions are tested in distinct high-resolution simulations with interplanetary magnetic field (IMF) clock angles ranging from 30 to 165 degrees, in global magnetohydrodynamic simulations using the three-dimensional Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code with a uniform resistivity, although the described techniques can be generally applied to any self-consistent magnetosphere code. Additional simulations are carried out to test location model dependence on IMF strength and dipole tilt. We find that most of the models match large portions of the magnetic separators when the IMF has a southward component, with the models stating that reconnection occurs where the local reconnection rate and reconnection outflow speed are maximized performing best. When the IMF has a northward component, none of the models tested faithfully map the entire magnetic separator, but the maximum magnetic shear model is the best at mapping the separator in the cusp region where reconnection has been observed. Predictions for some models with northward IMF orientations improve after accounting for plasma flow shear parallel to the reconnecting components of the magnetic fields. Implications for observations are discussed.
Summary of the searches for squarks and gluinos using $\sqrt{s} = 8$ TeV pp collisions with the ATLAS experiment at the LHC ; A summary is presented of ATLAS searches for gluinos and first- and second-generation squarks in final states containing jets and missing transverse momentum, with or without leptons or b-jets, in the $\sqrt{s} = 8$ TeV data set collected at the Large Hadron Collider in 2012. This paper reports the results of new interpretations and statistical combinations of previously published analyses, as well as a new analysis. Since no significant excess of events over the Standard Model expectation is observed, the data are used to set limits in a variety of models. In all the considered simplified models that assume R-parity conservation, the limit on the gluino mass exceeds 1150 GeV at 95% confidence level, for an LSP mass smaller than 100 GeV. Furthermore, exclusion limits are set for left-handed squarks in a phenomenological MSSM model, a minimal Supergravity-constrained MSSM model, R-parity-violation scenarios, a minimal gauge-mediated supersymmetry breaking model, a natural gauge mediation model, a non-universal Higgs mass model with gaugino mediation and a minimal model of universal extra dimensions.
Entropy and Graph-Based Modelling of Document Coherence using Discourse Entities: An Application ; We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work on the integration of document cohesiveness or comprehensibility into ranking [5, 56].
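A minimal sketch of the first model's core quantity, the entropy of entity n-grams; the entity sequences below are hand-made toy examples and the bigram order is an arbitrary choice, since the abstract does not fix these details.

```python
import math
from collections import Counter

def ngram_entropy(entities, n=2):
    """Shannon entropy of the empirical distribution of entity n-grams.
    Higher entropy ~ new entities keep appearing ~ lower coherence (first model)."""
    grams = [tuple(entities[i:i + n]) for i in range(len(entities) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Discourse entities per sentence, assumed already extracted (e.g. subjects/objects).
coherent   = ["model", "model", "coherence", "model", "coherence", "model"]
incoherent = ["model", "graph", "entropy", "retrieval", "ranking", "entity"]
print(ngram_entropy(coherent), ngram_entropy(incoherent))
```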
3D cut-cell modelling for high-resolution atmospheric simulations ; Owing to the recent, rapid development of computer technology, the resolution of atmospheric numerical models has increased substantially. With the use of next-generation supercomputers, atmospheric simulations using horizontal grid intervals of O(100 m) or less will gain popularity. At such high resolution, more of the steep gradients in mountainous terrain will be resolved, which may result in large truncation errors in those models using terrain-following coordinates. In this study, a new 3D Cartesian coordinate non-hydrostatic atmospheric model is developed. A cut-cell representation of topography based on finite-volume discretization is combined with a cell-merging approach, in which small cut-cells are merged with neighboring cells either vertically or horizontally. In addition, a block-structured mesh-refinement technique is introduced to achieve a variable resolution on the model grid, with the finest resolution occurring close to the terrain surface. The model successfully reproduces a flow over a 3D bell-shaped hill that shows a good agreement with the flow predicted by the linear theory. The ability of the model to simulate flows over steep terrain is demonstrated using a hemisphere-shaped hill where the maximum slope angle is resolved at 71 degrees. The advantage of a locally refined grid around a 3D hill, with cut-cells at the terrain surface, is also demonstrated using the hemisphere-shaped hill. The model reproduces smooth mountain waves propagating over varying grid resolution without introducing large errors associated with the change of mesh resolution. At the same time, the model shows good scalability on a locally refined grid with the use of OpenMP.
Asymptotic Expansion Homogenization of Discrete Fine-Scale Models with Rotational Degrees of Freedom for the Simulation of Quasi-Brittle Materials ; Discrete fine-scale models, in the form of either particle or lattice models, have been formulated successfully to simulate the behavior of quasi-brittle materials whose mechanical behavior is inherently connected to fracture processes occurring in the internal heterogeneous structure. These models tend to be intensive from the computational point of view as they adopt an a priori discretization anchored to the major material heterogeneities (e.g. grains in particulate materials and aggregate pieces in cementitious composites), and this hampers their use in the numerical simulations of large systems. In this work, this problem is addressed by formulating a general multiple-scale computational framework based on classical asymptotic analysis that (1) is applicable to any discrete model with rotational degrees of freedom; and (2) gives rise to an equivalent Cosserat continuum. The developed theory is applied to the upscaling of the Lattice Discrete Particle Model (LDPM), a recently formulated discrete model for concrete and other quasi-brittle materials, and the properties of the homogenized model are analyzed thoroughly in both the elastic and the inelastic regime. The analysis shows that the homogenized micropolar elastic properties are size-dependent, and they are functions of the RVE size and the size of the material heterogeneity. Furthermore, the analysis of the homogenized inelastic behavior highlights issues associated with the homogenization of fine-scale models featuring strain-softening and the related damage localization. Finally, nonlinear simulations of the RVE behavior subject to curvature components causing bending and torsional effects demonstrate, contrary to typical Cosserat formulations, a significant coupling between the homogenized stress-strain and couple-curvature constitutive equations.
Numerical models for AC loss calculation in large-scale applications of HTS coated conductors ; Numerical models are powerful tools to predict the electromagnetic behavior of superconductors. In recent years, a variety of models have been successfully developed to simulate high-temperature-superconducting (HTS) coated conductor tapes. While the models work well for the simulation of individual tapes or relatively small assemblies, their direct applicability to devices involving hundreds or thousands of tapes, as for example coils used in electrical machines, is questionable. Indeed, the simulation time and memory requirement can quickly become prohibitive. In this article, we develop and compare two different models for simulating realistic HTS devices composed of a large number of tapes: (1) the homogenized model simulates the coil using an equivalent anisotropic homogeneous bulk with specifically developed current constraints to account for the fact that each turn carries the same current; (2) the multi-scale model parallelizes and reduces the computational problem by simulating only several individual tapes at significant positions of the coil's cross-section, using appropriate boundary conditions to account for the field generated by the neighboring turns. Both methods are used to simulate a coil made of 2000 tapes, and compared against the widely used H-formulation finite element model that includes all the tapes. Both approaches allow speeding up simulations of a large number of HTS tapes by 1-3 orders of magnitude, while keeping good accuracy of the results. Such models could be used to design and optimize large-scale HTS devices.
Approximate Fisher Kernels of Non-iid Image Models for Image Categorization ; The bag-of-words (BoW) model treats images as sets of local descriptors and represents them by visual word histograms. The Fisher vector (FV) representation extends BoW by considering the first- and second-order statistics of local descriptors. In both representations, local descriptors are assumed to be identically and independently distributed (iid), which is a poor assumption from a modeling perspective. It has been experimentally observed that the performance of BoW and FV representations can be improved by employing discounting transformations such as power normalization. In this paper, we introduce non-iid models by treating the model parameters as latent variables which are integrated out, rendering all local regions dependent. Using the Fisher kernel principle we encode an image by the gradient of the data log-likelihood w.r.t. the model hyper-parameters. Our models naturally generate discounting effects in the representations, suggesting that such transformations have proven successful because they closely correspond to the representations obtained for non-iid models. To enable tractable computation, we rely on variational free-energy bounds to learn the hyper-parameters and to compute approximate Fisher kernels. Our experimental evaluation results validate that our models lead to performance improvements comparable to using power normalization, as employed in state-of-the-art feature aggregation methods.
Texture Modelling with Nested High-order Markov-Gibbs Random Fields ; Currently, Markov-Gibbs random field (MGRF) image models which include high-order interactions are almost always built by modelling responses of a stack of local linear filters. The actual interaction structure is specified implicitly by the filter coefficients. In contrast, we learn an explicit high-order MGRF structure by considering the learning process in terms of general exponential family distributions nested over base models, so that potentials added later can build on previous ones. We relatively rapidly add new features by skipping over the costly optimisation of parameters. We introduce the use of local binary patterns as features in MGRF texture models, and generalise them by learning offsets to the surrounding pixels. These prove effective as high-order features, and are fast to compute. Several schemes for selecting high-order features by composition or search of a small subclass are compared. Additionally, we present a simple modification of the maximum likelihood as a texture-modelling-specific objective function which aims to improve generalisation by local windowing of statistics. The proposed method was experimentally evaluated by learning high-order MGRF models for a broad selection of complex textures and then performing texture synthesis, and succeeded on much of the continuum from stochastic through irregularly structured to near-regular textures. Learning interaction structure is very beneficial for textures with large-scale structure, although those with complex irregular structure still provide difficulties. The texture models were also quantitatively evaluated on two tasks and found to be competitive with other works: grading of synthesised textures by a panel of observers, and comparison against several recent MGRF models by evaluation on a constrained inpainting task.
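For reference, the classic 8-neighbour local binary pattern feature mentioned above can be computed as follows; this sketch shows only the standard fixed-offset LBP, not the generalisation with learned offsets proposed in the paper.

```python
import numpy as np

def lbp8(image):
    """Classic 8-neighbour local binary pattern code for each interior pixel:
    each neighbour contributes one bit, set when it is >= the centre pixel."""
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy : image.shape[0] - 1 + dy,
                          1 + dx : image.shape[1] - 1 + dx]
        code |= ((neighbour >= c).astype(np.uint8) << bit)
    return code

texture = np.random.default_rng(0).integers(0, 256, size=(64, 64))
hist = np.bincount(lbp8(texture).ravel(), minlength=256)   # 256-bin LBP histogram
print(hist[:8])
```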
Impacts of dark energy on weighing neutrinos after Planck 2015 ; We investigate how dark energy properties impact the cosmological limits on the total mass of active neutrinos. We consider two typical, simple dark energy models that have only one additional parameter relative to $\Lambda$CDM, i.e., the wCDM model and the holographic dark energy (HDE) model, as examples, to make an analysis. In the cosmological fits, we use the Planck 2015 temperature and polarization data, in combination with other low-redshift observations, including the baryon acoustic oscillations, type Ia supernovae, and Hubble constant measurement, as well as the Planck lensing measurements. We find that, once dynamical dark energy is considered, the degeneracy between $\sum m_\nu$ and $H_0$ changes, i.e., in the $\Lambda$CDM model, $\sum m_\nu$ is anti-correlated with $H_0$, but in the wCDM and HDE models, $\sum m_\nu$ becomes positively correlated with $H_0$. Compared to $\Lambda$CDM, in the wCDM model the limit on $\sum m_\nu$ becomes much looser, but in the HDE model the limit becomes much tighter. In the HDE model, we obtain $\sum m_\nu < 0.113$ eV (95% CL) with the combined data sets, which is perhaps the most stringent upper limit by far on the neutrino mass. Thus, our result in the HDE model is nearly ready to diagnose the neutrino mass hierarchy with the current cosmological observations.
Time Scaling Relations for Step Bunches from Models with Step-Step Attractions (B1-Type Models) ; The step bunching instability is studied in three models of step motion defined in terms of ordinary differential equations (ODEs). The source of instability in these models is step-step attraction; it is opposed by step-step repulsion, and the developing surface patterns reflect the balance between the two. The first model, TE2, is a generalization of the seminal model of Tersoff et al. (1995). The second one, LW2, is obtained from the model of Liu and Weeks (1998) using the repulsion term to construct the attraction one, with retained possibility to change the parameters in the two independently. The third model, MM2, is a minimal one constructed ad hoc, and in this article it plays a central role. A new scheme for scaling the ODEs in vicinal studies is applied towards deciphering the prefactors in the time-scaling relations. In all these models the patterned surface is self-similar: only one length scale is necessary to describe its evolution (hence B1-type). The bunches form finite angles with the terraces. Integrating numerically the equations for step motion and changing the parameters systematically, we obtain the overall dependence of the time-scaling exponent $\beta$ on the power of the step-step attractions $p$ as $\beta = 1/(3p)$ for MM2, and hypothesize, based on a restricted set of data, that it is $\beta = 1/(5p)$ for LW2 and TE2.
Clumpy tori illuminated by the anisotropic radiation of accretion discs in active galactic nuclei ; In this paper, we try to explain the observed correlation between the covering factor (CF) of hot dust and the properties of active galactic nuclei (AGNs), e.g., the bolometric luminosity $L_{\rm bol}$ and black hole mass $M_{\rm BH}$. Combining the possible dust distribution in the torus, the angular dependence of the radiation of the accretion disc, and the relation between the critical angle of the torus and the Eddington ratio, eight possible models are investigated in our work. We fit the observed CF with these models to determine their parameters. As a result, clumpy torus models can generally explain the observed correlations of tori, while the smooth models fail to produce the required CFs. However, there is still significant scatter even for the best-fitting model, which is the combination of a clumpy torus illuminated by the anisotropic radiation of the accretion disc in an AGN. Although some of the observed scatter is due to the uncertainties in measuring $L_{\rm bol}$ and $M_{\rm BH}$, other factors are required in a more realistic model. The models examined in this paper are not necessarily the physical model of tori. However, the reasonable assumptions selected during this process should be helpful in constructing physical models of tori.
Mode jumping MCMC for Bayesian variable selection in GLMM ; Generalized linear mixed models (GLMM) are used for inference and prediction in a wide range of different applications, providing a powerful scientific tool. An increasing number of data sources are becoming available, introducing a variety of candidate explanatory variables for these models. Selection of an optimal combination of variables is thus becoming crucial. In a Bayesian setting, the posterior distribution of the models, based on the observed data, can be viewed as a relevant measure for the model evidence. The number of possible models increases exponentially in the number of candidate variables. Moreover, the space of models has numerous local extrema in terms of posterior model probabilities. To resolve these issues, a novel MCMC algorithm for the search through the model space via efficient mode jumping for GLMMs is introduced. The algorithm is based on the fact that marginal likelihoods can be efficiently calculated within each model. It is recommended that either exact expressions or precise approximations of marginal likelihoods are applied. The suggested algorithm is applied to simulated data, the famous U.S. crime data, protein activity data and epigenetic data, and is compared to several existing approaches.
Modelling discrete-valued cross-sectional time series with observation-driven models ; This paper develops computationally feasible methods for estimating random effects models in the context of regression modelling of multiple independent time series of discrete-valued counts in which there is serial dependence. Given covariates, random effects and process history, the observed responses at each time in each series are independent and have an exponential family distribution. We develop maximum likelihood estimation of the mixed effects model using an observation-driven generalized linear autoregressive moving average specification for the serial dependence in each series. The paper presents an easily implementable approach which uses existing single time series methods to handle the serial dependence structure, in combination with adaptive Gaussian quadrature to approximate the integrals over the regression random effects required for the likelihood and its derivatives. The models and methods presented allow extension of existing mixed model procedures for count data by incorporating serial dependence which can differ in form and strength across the individual series. The structure of the model has some similarities to longitudinal data transition models with random effects. However, in contrast to that setting, where there are many cases and few to moderate observations per case, the time series setting has many observations per series and a few to moderate number of cross-sectional time series. The method is illustrated on time series of binary responses to musical features obtained from a panel of listeners.
A self-consistent model for the evolution of the gas produced in the debris disc of $\beta$ Pictoris ; This paper presents a self-consistent model for the evolution of gas produced in the debris disc of $\beta$ Pictoris. Our model proposes that atomic carbon and oxygen are created from the photodissociation of CO, which is itself released from volatile-rich bodies in the debris disc due to grain-grain collisions or photodesorption. While the CO lasts less than one orbit, the atomic gas evolves by viscous spreading, resulting in an accretion disc inside the parent belt and a decretion disc outside. The temperature, ionisation fraction and population levels of carbon and oxygen are followed with the photodissociation region model Cloudy, which is coupled to a dynamical viscous $\alpha$ model. We present new gas observations of $\beta$ Pic, of C I observed with APEX and O I observed with Herschel, and show that these, along with published C II and CO observations, can all be explained with this new model. Our model requires a viscosity $\alpha > 0.1$, similar to that found in sufficiently ionised discs of other astronomical objects; we propose that the magnetorotational instability is at play in this highly ionised and dilute medium. This new model can be tested through its predictions for high-resolution ALMA observations of C I. We also constrain the water content of the planetesimals in $\beta$ Pic. The scenario proposed here might be at play in all debris discs, and this model could be used more generally on all discs with C, O or CO detections.
On-board monitoring of 2D spatially resolved temperatures in cylindrical lithium-ion batteries: Part I. Low-order thermal modelling ; Estimating the temperature distribution within Li-ion batteries during operation is critical for safety and control purposes. Although existing control-oriented thermal models, such as thermal equivalent circuits (TEC), are computationally efficient, they only predict average temperatures and are unable to predict the spatially resolved temperature distribution throughout the cell. We present a low-order 2D thermal model of a cylindrical battery based on a Chebyshev spectral-Galerkin (SG) method, capable of predicting the full temperature distribution with a similar efficiency to a TEC. The model accounts for transient heat generation, anisotropic heat conduction, and non-homogeneous convection boundary conditions. The accuracy of the model is validated through comparison with finite element simulations, which show that the 2D temperature field (r, z) of a large-format (64 mm diameter) cell can be accurately modelled with as few as 4 states. Furthermore, the performance of the model for a range of Biot numbers is investigated via frequency analysis. For larger cells or highly transient thermal dynamics, the model order can be increased for improved accuracy. The incorporation of this model in a state estimation scheme, with experimental validation against thermocouple measurements, is presented in the companion contribution (Part II).
Bayesian Estimation and Comparison of Moment Condition Models ; In this paper we consider the problem of inference in statistical models characterized by moment restrictions, by casting the problem within the Exponentially Tilted Empirical Likelihood (ETEL) framework. Because the ETEL function has a well-defined probabilistic interpretation and plays the role of a nonparametric likelihood, a fully Bayesian semiparametric framework can be developed. We establish a number of powerful results surrounding the Bayesian ETEL framework in such models. One major concern driving our work is the possibility of misspecification. To accommodate this possibility, we show how the moment conditions can be re-expressed in terms of additional nuisance parameters and that, even under misspecification, the Bayesian ETEL posterior distribution satisfies a Bernstein-von Mises result. A second key contribution of the paper is the development of a framework based on marginal likelihoods and Bayes factors to compare models defined by different moment conditions. Computation of the marginal likelihoods is by the method of Chib (1995) as extended to Metropolis-Hastings samplers in Chib and Jeliazkov (2001). We establish the model selection consistency of the marginal likelihood and show that the marginal likelihood favors the model with the minimum number of parameters and the maximum number of valid moment restrictions. When the models are misspecified, the marginal likelihood model selection procedure selects the model that is closer to the unknown true data generating process in terms of the Kullback-Leibler divergence. The ideas and results in this paper provide a further broadening of the theoretical underpinning and value of the Bayesian ETEL framework, with likely far-reaching practical consequences. The discussion is illuminated through several examples.
Discovery of Latent Factors in High-dimensional Data Using Tensor Methods ; Unsupervised learning aims at the discovery of hidden structure that drives the observations in the real world. It is essential for success in modern machine learning. Latent variable models are versatile in unsupervised learning and have applications in almost every domain. Training latent variable models is challenging due to the non-convexity of the likelihood objective. An alternative method is based on the spectral decomposition of low-order moment tensors. This versatile framework is guaranteed to estimate the correct model consistently. My thesis spans both theoretical analysis of the tensor decomposition framework and practical implementation of various applications. This thesis presents theoretical results on convergence to the globally optimal solution of tensor decomposition using stochastic gradient descent, despite the non-convexity of the objective. This is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. This thesis also presents large-scale deployment of spectral methods carried out on various platforms. Dimensionality reduction techniques such as random projection are incorporated for a highly parallel and scalable tensor decomposition algorithm. We obtain gains in both accuracy and running time by several orders of magnitude compared to the state-of-the-art variational methods. To solve real-world problems, more advanced models and learning algorithms are proposed. This thesis discusses generalization of the LDA model to the mixed membership stochastic block model for learning user communities in social networks, a convolutional dictionary model for learning word-sequence embeddings, hierarchical tensor decomposition and a latent tree structure model for learning disease hierarchies, and a spatial point process mixture model for detecting cell types in neuroscience.
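A minimal sketch of the spectral idea underlying the thesis: components of an orthogonally decomposable third-order moment tensor recovered by tensor power iteration; the synthetic tensor and the plain (non-stochastic) iteration are simplifying assumptions, not the thesis's large-scale algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic orthogonally decomposable 3rd-order tensor T = sum_i w_i * a_i (x) a_i (x) a_i.
d, k = 8, 3
A, _ = np.linalg.qr(rng.standard_normal((d, k)))      # orthonormal components
w = np.array([3.0, 2.0, 1.0])
T = sum(w[i] * np.einsum("a,b,c->abc", A[:, i], A[:, i], A[:, i]) for i in range(k))

def tensor_power_iteration(T, n_iter=100):
    """Recover one component: v <- T(I, v, v) / ||.||, the tensor analogue of the
    matrix power method; converges to a robust eigenvector for odeco tensors."""
    v = rng.standard_normal(T.shape[0]); v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum("abc,b,c->a", T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum("abc,a,b,c->", T, v, v, v)
    return lam, v

lam, v = tensor_power_iteration(T)
print(lam, np.abs(A.T @ v))   # recovered weight and overlap with the true components
```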
A Fused Latent and Graphical Model for Multivariate Binary Data ; We consider modeling, inference, and computation for analyzing multivariate binary data. We propose a new model that consists of a low-dimensional latent variable component and a sparse graphical component. Our study is motivated by analysis of item response data in cognitive assessment and has applications to many disciplines where item response data are collected. Standard approaches to item response data in cognitive assessment adopt the multidimensional item response theory (IRT) models. However, human cognition is typically a complicated process and thus may not be adequately described by just a few factors. Consequently, a low-dimensional latent factor model, such as the multidimensional IRT models, is often insufficient to capture the structure of the data. The proposed model adds a sparse graphical component that captures the remaining ad hoc dependence. It reduces to a multidimensional IRT model when the graphical component becomes degenerate. Model selection and parameter estimation are carried out simultaneously through construction of a pseudo-likelihood function and properly chosen penalty terms. The convexity of the pseudo-likelihood function allows us to develop an efficient algorithm, while the penalty terms generate a low-dimensional latent component and a sparse graphical structure. Desirable theoretical properties are established under suitable regularity conditions. The method is applied to the revised Eysenck's personality questionnaire, revealing its usefulness in item analysis. Simulation results are reported that show the new method works well in practical situations.
Higgs mass from neutrino-messenger mixing ; The discovery of the Higgs particle at 125 GeV has put strong constraints on minimal messenger models of gauge mediation, pushing the stop masses into the multi-TeV regime. Extensions of these models with matter-messenger mixing terms have been proposed to generate a large trilinear parameter, $A_t$, relaxing these constraints. The detailed survey of these models (Byakti 2013; Evans 2013) so far considered messenger mixings with only MSSM superfields. In the present work, we extend the survey to the MSSM with an inverse-seesaw mechanism. The neutrino-sneutrino corrections to the Higgs mass in the inverse seesaw model are not significant in the minimal gauge mediation model, unless one considers messenger-matter interaction terms. We classify all possible models with messenger-matter interactions and perform a thorough numerical analysis to find out the promising models. We found that out of the 17 possible models, 9 of them can lead to a Higgs mass within the observed value without raising the sfermion masses significantly. The successful models have stop masses $\sim 1.5$ TeV with small or negligible mixing and yet a light CP-even Higgs at 125 GeV.
When the map is better than the territory ; The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher-scale macro descriptions of causal structure may be useful to observers, they are at best a compressed description and at worst leave out critical information. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus (be more informative) at a macroscale (Hoel et al. 2013). That is, a macro model of a system (a map) can be more informative than a fully detailed model of the system (the territory). This has been called causal emergence. While causal emergence may at first glance seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon's discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different causal models of those systems take advantage of that capacity to various degrees. For some systems, only macroscale causal models use the full causal capacity. Such macroscale causal models can either be coarse-grains, or may leave variables and states out of the model (treating them as exogenous) in various ways, which can improve the model's efficacy and its informativeness via the same mathematical principles by which error-correcting codes take advantage of an information channel's capacity. As model choice increases, the causal capacity of a system approaches the channel capacity. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscopic model.
Spectra of Black Hole Accretion Models of Ultra-Luminous X-ray Sources ; We present general relativistic radiation MHD simulations of super-Eddington accretion on a $10\,M_\odot$ black hole. We consider a range of mass accretion rates, black hole spins, and magnetic field configurations. We compute the spectra and images of the models as a function of viewing angle, and compare them with the observed properties of ultra-luminous X-ray sources (ULXs). The models easily produce apparent luminosities in excess of $10^{40}\,\mathrm{erg\,s^{-1}}$ for pole-on observers. However, the angle-integrated radiative luminosities rarely exceed $2.5\times10^{39}\,\mathrm{erg\,s^{-1}}$ even for mass accretion rates of tens of Eddington. The systems are thus radiatively inefficient, though they are energetically efficient when the energy output in winds and jets is also counted. The simulated models reproduce the main empirical types of spectra (disk-like, supersoft, soft, hard) observed in ULXs. The magnetic field configuration, whether MAD (magnetically arrested disk) or SANE (standard and normal evolution), has a strong effect on the results. In SANE models, the X-ray spectral hardness is almost independent of accretion rate, but decreases steeply with increasing inclination. MAD models with non-spinning black holes produce significantly softer spectra at higher values of $\dot{M}$, even at low inclinations. MAD models with rapidly spinning black holes are quite different from all other models. They are radiatively efficient (efficiency factor $\sim$10-20%), super-efficient when the mechanical energy output is also included (70%), and produce hard blazar-like spectra. In all models, the emission shows strong geometrical beaming, which disagrees with the more isotropic illumination favored by observations of ULX bubbles.
Computational Model for Predicting Visual Fixations from Childhood to Adulthood ; How people look at visual information reveals fundamental information about themselves, their interests and their state of mind. While previous visual attention models output static 2-dimensional saliency maps, saccadic models aim to predict not only where observers look but also how they move their eyes to explore the scene. Here we demonstrate that saccadic models are a flexible framework that can be tailored to emulate observers' viewing tendencies. More specifically, we use the eye data from 101 observers split into 5 age groups (adults, 8-10 y.o., 6-8 y.o., 4-6 y.o. and 2 y.o.) to train our saccadic model for different stages of the development of the human visual system. We show that the joint distribution of saccade amplitude and orientation is a visual signature specific to each age group, and can be used to generate age-dependent scanpaths. Our age-dependent saccadic model not only outputs human-like, age-specific visual scanpaths, but also significantly outperforms other state-of-the-art saliency models. In this paper, we demonstrate that the computational modelling of visual attention, through the use of saccadic models, can be efficiently adapted to emulate the gaze behavior of a specific group of observers.
Unifying approach to selective inference with applications to cross-validation ; We develop tools to do valid post-selective inference for a family of model selection procedures, including choosing a model via cross-validated Lasso. The tools apply universally when the following random vectors are jointly asymptotically multivariate Gaussian: (1) the vector composed of each model's quality value evaluated under certain model selection criteria (e.g., cross-validation errors across folds, AIC, prediction errors, etc.); (2) the test statistics from which we make inference on the parameters; it is worth noting that the parameters here are chosen after the model selection methods are performed. Under these assumptions, we derive a pivotal quantity that has an asymptotically Unif(0,1) distribution, which can be used to perform tests and construct confidence intervals. Both the tests and confidence intervals are selectively valid for the chosen parameter. While the above assumptions may not be satisfied in some applications, we propose a novel variation of these model selection procedures by adding Gaussian randomizations to either one of the two vectors. As a result, the joint distribution of the above random vectors is multivariate Gaussian and our general tools apply. We illustrate our method by applying it to four important procedures for which very few selective inference results have been developed: cross-validated Lasso, cross-validated randomized Lasso, AIC-based model selection among a fixed set of models, and inference for a newly introduced marginal LOCO parameter, inspired by the LOCO parameter of Rinaldo et al. (2016); and we provide complete results for these cases. For randomized model selection procedures, we develop a Markov chain Monte Carlo sampling scheme to construct valid post-selective confidence intervals empirically.
An Ensemble Approach to Predicting the Impact of Vaccination on Rotavirus Disease in Niger ; Recently developed vaccines provide a new way of controlling rotavirus in sub-Saharan Africa. Models for the transmission dynamics of rotavirus are critical both for estimating current burden from imperfect surveillance and for assessing potential effects of vaccine intervention strategies. We examine rotavirus infection in the Maradi area in southern Niger using hospital surveillance data provided by Epicentre collected over two years. Additionally, a cluster survey of households in the region allows us to estimate the proportion of children with diarrhea who consulted at a health structure. Model fit and future projections are necessarily particular to a given model; thus, where there are competing models for the underlying epidemiology an ensemble approach can account for that uncertainty. We compare our results across several variants of Susceptible-Infectious-Recovered (SIR) compartmental models to quantify the impact of modeling assumptions on our estimates. Model-specific parameters are estimated by Bayesian inference using Markov chain Monte Carlo. We then use Bayesian model averaging to generate ensemble estimates of the current dynamics, including estimates of $R_0$, the burden of infection in the region, as well as the impact of vaccination on both the short-term dynamics and the long-term reduction of rotavirus incidence under varying levels of coverage. The ensemble of models predicts that the current burden of severe rotavirus disease is 2.9% to 4.1% of the population each year and that a 2-dose vaccine schedule achieving 70% coverage could reduce burden by 37-43%.
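As a concrete sketch of the Bayesian model averaging step, the ensemble weight of each candidate model can be taken proportional to its marginal likelihood (with equal prior model probabilities), and the ensemble estimate is the weighted average of the model-specific estimates. The numbers below are purely hypothetical placeholders, not values from the study.

```python
import numpy as np

def bma_weights(log_marginal_likelihoods):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior model probabilities (log-sum-exp for stability)."""
    lml = np.asarray(log_marginal_likelihoods, dtype=float)
    w = np.exp(lml - lml.max())
    return w / w.sum()

# Hypothetical example: three SIR variants and their predicted burden (% per year)
log_ml = [-1502.3, -1500.1, -1505.8]
burden = np.array([2.9, 3.6, 4.1])
w = bma_weights(log_ml)
print(w, float(w @ burden))   # model weights and the model-averaged burden estimate
```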
Ensemble Adversarial Training: Attacks and Defenses ; Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks. However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
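A minimal sketch of the kind of single-step attack described above, which prepends a small random step to the usual gradient-sign step so the gradient is evaluated away from the non-smooth vicinity of the data point. `grad_fn`, `eps`, and `alpha` are placeholders supplied by the user; this illustrates the idea rather than reproducing the authors' exact attack.

```python
import numpy as np

def rand_plus_fgsm(x, grad_fn, eps=0.3, alpha=0.15, clip=(0.0, 1.0)):
    """Single-step attack: random sign step, then a gradient-sign step.

    grad_fn(x) must return the gradient of the model's loss w.r.t. the input x.
    """
    x_rand = x + alpha * np.sign(np.random.randn(*x.shape))    # escape the non-smooth vicinity
    x_adv = x_rand + (eps - alpha) * np.sign(grad_fn(x_rand))   # linearized (FGSM-style) step
    return np.clip(x_adv, *clip)                                # keep the input in a valid range
```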
Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks ; Event sequences, asynchronously generated with random timestamps, are ubiquitous across applications. The precise and arbitrary timestamps can carry important clues about the underlying dynamics, and make event data fundamentally different from time series, in which the series is indexed with fixed and equal time intervals. One expressive mathematical tool for modeling events is the point process. The intensity functions of many point processes involve two components: the background and the effect of the history. Due to its inherent spontaneousness, the background can be treated as a time series, while the other component needs to handle the history of events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end. Our approach takes an RNN perspective on the point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity, which is often a predefined parametric form in point processes. Meanwhile, end-to-end training opens the venue for reusing existing rich techniques in deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1,000 ATMs of a global bank headquartered in North America.
A Concurrency-Agnostic Protocol for Multi-Paradigm Concurrent Debugging Tools ; Today's complex software systems combine high-level concurrency models. Each model is used to solve a specific set of problems. Unfortunately, debuggers support only the low-level notions of threads and shared memory, forcing developers to reason about these notions instead of the high-level concurrency models they chose. This paper proposes a concurrency-agnostic debugger protocol that decouples the debugger from the concurrency models employed by the target application. As a result, the underlying language runtime can define custom breakpoints, stepping operations, and execution events for each concurrency model it supports, and a debugger can expose them without having to be specifically adapted. We evaluated the generality of the protocol by applying it to SOMns, a Newspeak implementation, which supports a diversity of concurrency models including communicating sequential processes, communicating event loops, threads and locks, fork/join parallelism, and software transactional memory. We implemented 21 breakpoints and 20 stepping operations for these concurrency models. For none of these did the debugger need to be changed. Furthermore, we visualize all concurrent interactions independently of a specific concurrency model. To show that tooling for a specific concurrency model is possible, we visualize actor turns and message sends separately.
Predictive modelling of training loads and injury in Australian football ; To investigate whether training load monitoring data could be used to predict injuries in elite Australian football players, data were collected from elite athletes over 3 seasons at an Australian football club. Loads were quantified using GPS devices, accelerometers and player perceived exertion ratings. Absolute and relative training load metrics were calculated for each player each day (rolling average, exponentially weighted moving average, acute:chronic workload ratio, monotony and strain). Injury prediction models (regularised logistic regression, generalised estimating equations, random forests and support vector machines) were built for non-contact, non-contact time-loss and hamstring-specific injuries using the first two seasons of data. Injury predictions were generated for the third season and evaluated using the area under the receiver operator characteristic curve (AUC). Predictive performance was only marginally better than chance for models of non-contact and non-contact time-loss injuries (AUC < 0.65). The best performing model was a multivariate logistic regression for hamstring injuries (best AUC = 0.76). Learning curves suggested logistic regression was underfitting the load-injury relationship and that using a more complex model or increasing the amount of model building data may lead to future improvements. Injury prediction models built using training load data from a single club showed poor ability to predict injuries when tested on previously unseen data, suggesting they are limited as a daily decision tool for practitioners. Focusing the modelling approach on specific injury types and increasing the amount of training data may lead to the development of improved predictive models for injury prevention.
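As an illustration of the load metrics listed above, the sketch below computes rolling-average and exponentially weighted moving average (EWMA) loads and the acute:chronic workload ratio with pandas. The 7-day/28-day windows are common choices in this literature and are assumptions here, not necessarily the exact settings used in the study.

```python
import pandas as pd

def workload_features(daily_load):
    """Rolling and EWMA load metrics plus the acute:chronic workload ratio (ACWR).

    daily_load: iterable of one training-load value per day for one player.
    """
    s = pd.Series(daily_load, dtype=float)
    acute = s.rolling(7, min_periods=1).mean()      # 7-day acute load (assumed window)
    chronic = s.rolling(28, min_periods=1).mean()   # 28-day chronic load (assumed window)
    ewma_acute = s.ewm(span=7, adjust=False).mean()
    ewma_chronic = s.ewm(span=28, adjust=False).mean()
    return pd.DataFrame({
        "acute": acute,
        "chronic": chronic,
        "acwr": acute / chronic,
        "ewma_acwr": ewma_acute / ewma_chronic,
    })
```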
On f(R) gravity in scalar-tensor theories ; We study f(R) gravity models in the language of scalar-tensor theories. The correspondence between f(R) gravity and scalar-tensor theories is revisited since f(R) gravity is a subclass of Brans-Dicke models, with a vanishing coupling constant ($\omega=0$). In this treatment, four f(R) toy models are used to analyze the early-universe cosmology, when the scalar field $\phi$ dominates over standard matter. We have obtained solutions to the Klein-Gordon equation for those models. It is found that for the first model $\left(f(R)=\beta R^n\right)$, as time increases the scalar field decreases and decays asymptotically. For the second model $\left(f(R)=\alpha R+\beta R^n\right)$ it was found that the function $\phi(t)$ crosses the $t$-axis at different values for different values of $\beta$. For the third model $\left(f(R)=R-\frac{\nu^4}{R}\right)$, when the value of $\nu$ is small the potential $V(\phi)$ behaves like the standard inflationary potential. For the fourth model $\left(f(R)=R+(1-m)\nu^{2}\left(\frac{R}{\nu^{2}}\right)^{m}-2\Lambda\right)$, we show that there is a transition between $1.5<m<1.55$. The behavior of the potentials with $m<1.5$ is totally different from those with $m>1.55$. The slow-roll approximation is applied to each of the four f(R) models and we obtain the respective expressions for the spectral index $n_s$ and the tensor-to-scalar ratio $r$.
Interference Modeling in Cognitive Radio Networks: A Survey ; One of the fundamental elements impacting the performance of a wireless system is interference, which has been a long-term issue in wireless networks. In the case of cognitive radio (CR) networks, the problem of interference is tremendously crucial. In other words, CR keeps the important promise of not producing any harmful interference to the primary user (PU) system. Thus, it is essential to investigate the impact of the interference caused to the PUs so that its detrimental effect on the performance of the PU system is reduced. The study of cognitive interference generally includes developing a model to statistically describe the power of the cognitive interference at the PUs, which can then be utilized to examine different performance measures. Having inspected the different models for channel interference present in the literature, it can be clearly seen that interference models have gradually evolved in terms of complication and sophistication. Although numerous papers can be found in the literature that have investigated different models for interference, to the best of our knowledge, very few publications are available that provide a review of all models and their comparisons. This paper is a collection of the state of the art in interference modeling, which overviews and compares different models in the literature to provide valuable insights for researchers when modeling the interference in a specific scenario.
Physiological Gaussian Process Priors for the Hemodynamics in fMRI Analysis ; Background: Inference from fMRI data faces the challenge that the hemodynamic system that relates neural activity to the observed BOLD fMRI signal is unknown. New Method: We propose a new Bayesian model for task fMRI data with the following features: (i) joint estimation of brain activity and the underlying hemodynamics, (ii) the hemodynamics is modeled nonparametrically with a Gaussian process (GP) prior guided by physiological information and (iii) the predicted BOLD is not necessarily generated by a linear time-invariant (LTI) system. We place a GP prior directly on the predicted BOLD response, rather than on the hemodynamic response function as in previous literature. This allows us to incorporate physiological information via the GP prior mean in a flexible way, and simultaneously gives us the nonparametric flexibility of the GP. Results: Results on simulated data show that the proposed model is able to discriminate between active and non-active voxels also when the GP prior deviates from the true hemodynamics. Our model finds time-varying dynamics when applied to real fMRI data. Comparison with Existing Methods: The proposed model is better at detecting activity in simulated data than standard models, without inflating the false positive rate. When applied to real fMRI data, our GP model in several cases finds brain activity where previously proposed LTI models do not. Conclusions: We have proposed a new nonlinear model for the hemodynamics in task fMRI that is able to detect active voxels and gives the opportunity to ask new kinds of questions related to hemodynamics.
Level set Cox processes ; The log-Gaussian Cox process (LGCP) is a popular point process for modeling non-interacting spatial point patterns. This paper extends the LGCP model to handle data exhibiting fundamentally different behaviors in different subregions of the spatial domain. The aim of the analyst might be either to identify and classify these regions, to perform kriging, or to derive some properties of the parameters driving the random field in one or several of the subregions. The extension is based on replacing the latent Gaussian random field in the LGCP by a latent spatial mixture model. The mixture model is specified using a latent, categorically valued, random field induced by level set operations on a Gaussian random field. Conditional on the classification, the intensity surface for each class is modeled by a set of independent Gaussian random fields. This allows for standard stationary covariance structures, such as the Matérn family, to be used to model Gaussian random fields with some degree of general smoothness but also occasional and structured sharp discontinuities. A computationally efficient MCMC method is proposed for Bayesian inference and we show consistency of finite dimensional approximations of the model. Finally, the model is fitted to point pattern data derived from a tropical rainforest on Barro Colorado Island, Panama. We show that the proposed model is able to capture behavior for which inference based on the standard LGCP is biased.
Leveraging Deep Neural Network Activation Entropy to cope with Unseen Data in Speech Recognition ; Unseen data conditions can inflict serious performance degradation on systems relying on supervised machine learning algorithms. Because data can often be unseen, and because traditional machine learning algorithms are trained in a supervised manner, unsupervised adaptation techniques must be used to adapt the model to the unseen data conditions. However, unsupervised adaptation is often challenging, as one must generate some hypothesis given a model and then use that hypothesis to bootstrap the model to the unseen data conditions. Unfortunately, the reliability of such hypotheses is often poor, given the mismatch between the training and testing datasets. In such cases, a model hypothesis confidence measure enables performing data selection for the model adaptation. Underlying this approach is the fact that for unseen data conditions, data variability is introduced to the model, which the model propagates to its output decision, impacting decision reliability. In a fully connected network, this data variability is propagated as distortions from one layer to the next. This work aims to estimate the propagation of such distortion in the form of network activation entropy, which is measured over a short running time window on the activations from each neuron of a given hidden layer, and these measurements are then used to compute a summary entropy. This work demonstrates that such an entropy measure can help to select data for unsupervised model adaptation, resulting in performance gains in speech recognition tasks. Results from standard benchmark speech recognition tasks show that the proposed approach can alleviate the performance degradation experienced under unseen data conditions by iteratively adapting the model to the unseen data's acoustic conditions.
Perturbativity Constraints in BSM Models ; Phenomenological studies performed for non-supersymmetric extensions of the Standard Model usually use tree-level parameters as input to define the scalar sector of the model. This implicitly assumes that a full on-shell calculation of the scalar sector is possible and meaningful. However, this doesn't have to be the case, as we show explicitly using the example of the Georgi-Machacek model. This model comes with an appealing custodial symmetry to explain the smallness of the $\rho$ parameter. However, the model cannot be renormalised on-shell without breaking the custodial symmetry. Moreover, we find that it can often happen that the radiative corrections are so large that any consideration based on a perturbative expansion appears to be meaningless: counterterms to quartic couplings can become much larger than $4\pi$ and/or two-loop mass corrections can become larger than the one-loop ones. Therefore, conditions are necessary to single out parameter regions which cannot be treated perturbatively. We propose and discuss different sets of such perturbativity conditions and show their impact on the parameter space of the Georgi-Machacek model. Moreover, the proposed conditions are general enough that they can be applied to other models as well. We also point out that the vacuum stability constraints in the Georgi-Machacek model, which have so far only been applied at the tree level, receive crucial radiative corrections. We show that large regions of the parameter space which feature a stable electroweak vacuum at the loop level would have been wrongly ruled out by the tree-level conditions.
Finslerian Universe May Reconcile Tensions between High and Low Redshift Probes ; To reconcile the current tensions between high and low redshift observations, we perform the first constraints on the Finslerian cosmological models including the effective dark matter and dark energy components. We find that all the four Finslerian models could alleviate effectively the Hubble constant ($H_0$) tension and the amplitude of the root-mean-square density fluctuations ($\sigma_8$) tension between the Planck measurements and the local Universe observations at the 68% confidence level. The addition of a massless sterile neutrino and a varying total mass of active neutrinos to the base Finslerian two-parameter model, respectively, reduces the $H_0$ tension from $3.4\sigma$ to $1.9\sigma$ and alleviates the $\sigma_8$ tension better than the other three Finslerian models. Computing the Bayesian evidence, with respect to the $\Lambda$CDM model, our analysis shows a weak preference for the base Finslerian model and moderate preferences for its three one-parameter extensions. Based on the model-independent Gaussian Processes, we propose a new linear relation which can describe the current redshift space distortions data very well. Using the most stringent constraints we can provide, we have also obtained the limits of typical model parameters for the three one-parameter extensional models.
Data-Driven Filtered Reduced Order Modeling Of Fluid Flows ; We propose a data-driven filtered reduced order model (DDF-ROM) framework for the numerical simulation of fluid flows. The novel DDF-ROM framework consists of two steps: (i) In the first step, we use explicit ROM spatial filtering of the nonlinear PDE to construct a filtered ROM. This filtered ROM is low-dimensional, but is not closed because of the nonlinearity in the given PDE. (ii) In the second step, we use data-driven modeling to close the filtered ROM, i.e., to model the interaction between the resolved and unresolved modes. To this end, we use a quadratic ansatz to model this interaction and close the filtered ROM. To find the new coefficients in the closed filtered ROM, we solve an optimization problem that minimizes the difference between the full order model data and our ansatz. We emphasize that the new DDF-ROM is built on general ideas of spatial filtering and optimization and is independent of restrictive phenomenological arguments. We investigate the DDF-ROM in the numerical simulation of a 2D channel flow past a circular cylinder at Reynolds number $Re=100$. The DDF-ROM is significantly more accurate than the standard projection ROM. Furthermore, the computational costs of the DDF-ROM and the standard projection ROM are similar, both costs being orders of magnitude lower than the computational cost of the full order model. We also compare the new DDF-ROM with modern ROM closure models in the numerical simulation of the 1D Burgers equation. The DDF-ROM is more accurate and significantly more efficient than these ROM closure models.
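The data-driven closure step can be illustrated with a small least-squares sketch: given snapshots of the resolved ROM coefficients and of the corresponding closure (sub-filter) term computed from full-order data, fit a linear-plus-quadratic ansatz by ordinary least squares. This is a simplified stand-in for the optimization problem described above; the actual formulation may involve additional constraints or regularization.

```python
import numpy as np

def fit_quadratic_closure(a_snapshots, tau_snapshots):
    """Fit tau ≈ A_tilde @ a + (a ⊗ a) contracted with B_tilde, by least squares.

    a_snapshots:   (n_snapshots, r) resolved ROM coefficients
    tau_snapshots: (n_snapshots, r) closure term evaluated from full-order data
    Returns (A_tilde, B_tilde) with shapes (r, r) and (r, r, r).
    """
    a = np.asarray(a_snapshots, dtype=float)
    tau = np.asarray(tau_snapshots, dtype=float)
    n, r = a.shape
    quad = np.einsum('ni,nj->nij', a, a).reshape(n, r * r)   # quadratic features a_i * a_j
    X = np.hstack([a, quad])                                  # (n, r + r^2) design matrix
    coef, *_ = np.linalg.lstsq(X, tau, rcond=None)            # (r + r^2, r) coefficients
    A_tilde = coef[:r].T
    B_tilde = coef[r:].T.reshape(r, r, r)
    return A_tilde, B_tilde
```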
On the Red Giant Branch: Ambiguity in the Surface Boundary Condition Leads to ~100 K Uncertainty in Model Effective Temperatures ; The effective temperature ($T_{\rm eff}$) distribution of stellar evolution models along the red giant branch (RGB) is sensitive to a number of parameters including the overall metallicity, elemental abundance patterns, the efficiency of convection, and the treatment of the surface boundary condition. Recently there has been interest in using observational estimates of the RGB $T_{\rm eff}$ to place constraints on the mixing length parameter, $\alpha_{\rm MLT}$, and possible variation with metallicity. Here we use 1D MESA stellar evolution models to explore the sensitivity of the RGB $T_{\rm eff}$ to the treatment of the surface boundary condition. We find that different surface boundary conditions can lead to ~100 K metallicity-dependent offsets on the RGB relative to one another in spite of the fact that all models can reproduce the properties of the Sun. Moreover, for a given atmosphere $T(\tau)$ relation, we find that the RGB $T_{\rm eff}$ is also sensitive to the optical depth at which the surface boundary condition is applied in the stellar model. Nearly all models adopt the photosphere as the location of the surface boundary condition but this choice is somewhat arbitrary. We compare our models to stellar parameters derived from the APOGEE-Kepler sample of first-ascent red giants and find that systematic uncertainties in the models due to treatment of the surface boundary condition place a limit of ~100 K below which it is not possible to make firm conclusions regarding the fidelity of the current generation of stellar models.
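For concreteness, one standard atmosphere $T(\tau)$ relation used to set the surface boundary condition is the Eddington grey relation, $T^4 = \tfrac{3}{4}\,T_{\rm eff}^4\,(\tau + \tfrac{2}{3})$; the snippet below evaluates it at two optical depths to show how the choice of attachment point enters. This is a textbook relation offered as illustration, not a statement about which relation or attachment depth the paper adopts.

```python
import numpy as np

def eddington_grey_T(tau, Teff):
    """Temperature from the Eddington grey T(tau) relation:
    T^4 = (3/4) * Teff^4 * (tau + 2/3)."""
    return (0.75 * Teff**4 * (tau + 2.0 / 3.0)) ** 0.25

# e.g. boundary placed at the photosphere (tau = 2/3) vs. deeper in (tau = 10);
# Teff = 4800 K is an arbitrary RGB-like value used only for illustration.
print(eddington_grey_T(np.array([2.0 / 3.0, 10.0]), 4800.0))
```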
An energy-based analysis of reduced-order models of networked synchronous machines ; Stability of power networks is an increasingly important topic because of the high penetration of renewable distributed generation units. This requires the development of advanced (typically model-based) techniques for the analysis and controller design of power networks. Although there are widely accepted reduced-order models to describe the dynamic behavior of power networks, they are commonly presented without details about the reduction procedure, hampering the understanding of the physical phenomena behind them. The present paper aims to provide a modular model derivation of multi-machine power networks. Starting from first-principles fundamental physics, we present detailed dynamical models of synchronous machines and clearly state the underlying assumptions which lead to some of the standard reduced-order multi-machine models, including the classical second-order swing equations. In addition, the energy functions for the reduced-order multi-machine models are derived, which allows the multi-machine systems to be represented as port-Hamiltonian systems. Moreover, the systems are proven to be passive with respect to their steady states, which permits a power-preserving interconnection with other passive components, including passive controllers. As a result, the corresponding energy function or Hamiltonian can be used to provide a rigorous stability analysis of advanced models for the power network without having to linearize the system.
Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection ; Multi-label image classification is a fundamental but challenging task towards general visual understanding. Existing methods found that region-level cues (e.g., features from RoIs) can facilitate multi-label classification. Nevertheless, such methods usually require laborious object-level annotations (i.e., object labels and bounding boxes) for effective learning of the object-level visual features. In this paper, we propose a novel and efficient deep framework to boost multi-label classification by distilling knowledge from a weakly-supervised detection task without bounding box annotations. Specifically, given the image-level annotations, (1) we first develop a weakly-supervised detection (WSD) model, and then (2) construct an end-to-end multi-label image classification framework augmented by a knowledge distillation module that guides the classification model by the WSD model according to the class-level predictions for the whole image and the object-level visual features for object RoIs. The WSD model is the teacher model and the classification model is the student model. After this cross-task knowledge distillation, the performance of the classification model is significantly improved and the efficiency is maintained, since the WSD model can be safely discarded in the test phase. Extensive experiments on two large-scale datasets (MS-COCO and NUS-WIDE) show that our framework achieves superior performance over the state-of-the-art methods in terms of both accuracy and efficiency.
FMIT: Feature Model Integration Techniques ; Although feature models are widely used in practice, for example, representing variability in software product lines, their integration is still a challenge. Many integration techniques have been proposed, although none of these have proven to be fully effective. Integrating feature models becomes a difficult, costly, error-prone task. Because the composition occurs in a generalized and automated way, the techniques applied to compose the models end up giving rise to a final model that is in many cases undesired, without taking into account the specific needs arising from the requirements determined by the analysts and developers. Therefore, this work proposes FMIT, a technique for integrating feature models. FMIT is based on contemporary model integration strategies to increase the accuracy and quality of the integrated feature model. In this way, it will be possible to identify the degree of similarity between composite feature diagrams, to verify their accuracy, as well as to identify conflicts. In addition, this work proposes the development of a prototype based on the set of strategies, used to take decisions according to the requirements established during the integration of feature models, whether this is semi-automatic or automatic. To evaluate FMIT, experimental studies were conducted with 10 participants, including students and professionals. Participants performed 12 integration scenarios, 6 using FMIT and 6 manually. The results suggest that FMIT improved accuracy in 43% of the cases and reduced the effort to perform the integrations by 70%.
Position Representation of Effective Electron-Electron Interactions in Solids ; An essential ingredient in many model Hamiltonians, such as the Hubbard model, is the effective electron-electron interaction U, which enters as matrix elements in some localized basis. These matrix elements provide the necessary information in the model, but the localized basis is incomplete for describing U. We present a systematic scheme for computing the manifestly basis-independent dynamical interaction in position representation, $U(\mathbf{r},\mathbf{r}';\omega)$, and its Fourier transform to the time domain, $U(\mathbf{r},\mathbf{r}';\tau)$. These functions can serve as an unbiased tool for the construction of model Hamiltonians. For illustration we apply the scheme within the constrained random-phase approximation to the cuprate parent compounds La$_2$CuO$_4$ and HgBa$_2$CuO$_4$ within the commonly used 1- and 3-band models, and to non-superconducting SrVO$_3$ within the $t_{2g}$ model. Our method is used to investigate the shape and strength of screening channels in the compounds. We show that the O $2p_{x,y}$-Cu $3d_{x^2-y^2}$ screening gives rise to regions with strong attractive static interaction in the minimal 1-band model in both cuprates. On the other hand, in the minimal $t_{2g}$ model of SrVO$_3$ only regions with a minute attractive interaction are found. The temporal interaction exhibits generic damped oscillations in all compounds, and its time integral is shown to be the potential caused by inserting a frozen point charge at $\tau=0$.
Cognitive Model Priors for Predicting Human Decisions ; Human decision-making underlies all economic behavior. For the past four decades, human decision-making under uncertainty has continued to be explained by theoretical models based on prospect theory, a framework that was awarded the Nobel Prize in Economic Sciences. However, theoretical models of this kind have developed slowly, and robust, high-precision predictive models of human decisions remain a challenge. While machine learning is a natural candidate for solving these problems, it is currently unclear to what extent it can improve predictions obtained by current theories. We argue that this is mainly due to data scarcity, since noisy human behavior requires massive sample sizes to be accurately captured by off-the-shelf machine learning methods. To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets. We offer two contributions towards this end: first, we construct cognitive model priors by pre-training neural networks with synthetic data generated by cognitive models (i.e., theoretical models developed by cognitive psychologists). We find that fine-tuning these networks on small datasets of real human decisions results in unprecedented state-of-the-art improvements on two benchmark datasets. Second, we present the first large-scale dataset for human decision-making, containing over 240,000 human judgments across over 13,000 decision problems. This dataset reveals the circumstances where cognitive model priors are useful, and provides a new standard for benchmarking prediction of human decisions under uncertainty.
Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over Simplex ; Estimating the predictive uncertainty of a Bayesian learning model is critical in various decision-making problems, e.g., reinforcement learning, detecting adversarial attacks, self-driving cars. As the model posterior is almost always intractable, most efforts have been made on finding an accurate approximation to the true posterior. Even when a decent estimation of the model posterior is obtained, another approximation is required to compute the predictive distribution over the desired output. A common accurate solution is to use Monte Carlo (MC) integration. However, it needs to maintain a large number of samples, evaluate the model repeatedly and average multiple model outputs. In many real-world cases, this is computationally prohibitive. In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework to approximate the output probability distribution induced by the model posterior with a parameterized model and in an amortized fashion. The aim is to approximate the true uncertainty of a specific Bayesian model, meanwhile alleviating the heavy workload of MC integration at testing time. The proposed method is universally applicable to Bayesian classification models that allow for posterior sampling. Theoretically, we show that the idea of amortization incurs no additional costs on approximation performance. Empirical results validate the strong practical performance of our approach.
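The Monte Carlo baseline that the proposed amortized approximation is meant to replace can be sketched in a few lines: average the per-sample class-probability vectors over posterior draws. `predict_fn` and the posterior samples are placeholders; in the amortized setting, a student model would be trained to output this simplex-valued distribution directly, avoiding the repeated model evaluations at test time.

```python
import numpy as np

def mc_predictive(x, posterior_samples, predict_fn):
    """Monte Carlo estimate of the predictive distribution over classes:
    p(y | x, D) ≈ (1/S) * sum_s p(y | x, theta_s), theta_s ~ (approximate) posterior.

    predict_fn(x, theta) must return a probability vector on the simplex.
    """
    probs = np.stack([predict_fn(x, theta) for theta in posterior_samples])
    return probs.mean(axis=0)   # one averaged probability vector per input
```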
Enhancing Simple Models by Exploiting What They Already Know ; There has been recent interest in improving the performance of simple models for multiple reasons, such as interpretability, robust learning from small data, deployment in memory-constrained settings, as well as environmental considerations. In this paper, we propose a novel method, SRatio, that can utilize information from high-performing complex models (viz. deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low-performing simple model of much lower complexity, such as a decision tree or a shallow network, enhancing its performance. Our method also leverages the per-sample hardness estimate of the simple model, which is not the case with prior works that primarily consider the complex model's confidences/predictions, and is thus conceptually novel. Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network to other commonly used classifiers and incorporate this into our method. The benefit of these contributions is witnessed in the experiments, where on 6 UCI datasets and CIFAR-10 we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases. In fact, in a couple of cases, we even approach the complex model's performance. We also conduct further experiments to validate assertions and intuitively understand why our method works. Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper bounds the loss of the complex model.
Joint Contextual Modeling for ASR Correction and Language Understanding ; The quality of automatic speech recognition (ASR) is critical to Dialogue Systems as ASR errors propagate to and directly impact downstream tasks such as language understanding (LU). In this paper, we propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with LU to improve the performance of both tasks simultaneously. To measure the effectiveness of this approach we used a public benchmark, the 2nd Dialogue State Tracking (DSTC2) corpus. As a baseline approach, we trained task-specific Statistical Language Models (SLMs) and fine-tuned a state-of-the-art Generalized Pre-training (GPT) Language Model to re-rank the n-best ASR hypotheses, followed by a model to identify the dialog act and slots. (i) We further trained ranker models using GPT and Hierarchical CNN-RNN models with discriminatory losses to detect the best output given n-best hypotheses. We extended these ranker models to first select the best ASR output and then identify the dialogue act and slots in an end-to-end fashion. (ii) We also proposed a novel joint ASR error correction and LU model, a word confusion pointer network (WCN-Ptr) with multi-head self-attention on top, which consumes the word confusions populated from the n-best hypotheses. We show that the error rates of off-the-shelf ASR and the following LU systems can be reduced significantly (by 14% relative) with joint models trained using small amounts of in-domain data.
Optimal liquidation trajectories for the Almgren-Chriss model with Lévy processes ; We consider an optimal liquidation problem with infinite horizon in the Almgren-Chriss framework, where the unaffected asset price follows a Lévy process. The temporary price impact is described by a general function which satisfies some reasonable conditions. We consider an investor with constant absolute risk aversion, who wants to maximise the expected utility of the cash received from the sale of his assets, and show that this problem can be reduced to a deterministic optimisation problem which we are able to solve explicitly. In order to compare our results with exponential Lévy models, which provide a very good statistical fit with observed asset price data for short time horizons, we derive the linear Lévy process approximation of such models. In particular we derive expressions for the Lévy process approximation of the exponential Variance-Gamma Lévy process, and study properties of the corresponding optimal liquidation strategy. We then provide a comparison of the liquidation trajectories for reasonable parameters between the Lévy process model and the classical Almgren-Chriss model. In particular, we obtain an explicit expression for the connection between the temporary impact function for the Lévy model and the temporary impact function for the Brownian motion model (the classical Almgren-Chriss model), for which the optimal liquidation trajectories for the two models coincide.
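For reference, the classical (Brownian, finite-horizon, linear temporary impact) Almgren-Chriss benchmark admits a well-known closed-form trajectory, sketched below; this is the standard textbook formula with generic parameter names, not the paper's infinite-horizon Lévy-model solution, and it is included only to make the comparison concrete.

```python
import numpy as np

def almgren_chriss_trajectory(X, T, sigma, eta, lam, n_steps=100):
    """Classical Almgren-Chriss optimal liquidation schedule (continuous-time form):
        x(t) = X * sinh(kappa * (T - t)) / sinh(kappa * T),
        kappa = sqrt(lam * sigma^2 / eta),
    with initial position X, horizon T, volatility sigma, linear temporary impact
    coefficient eta, and risk aversion lam."""
    kappa = np.sqrt(lam * sigma**2 / eta)
    t = np.linspace(0.0, T, n_steps + 1)
    return t, X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

# Example with arbitrary illustrative parameters: liquidate 1e6 shares over one day.
t, x = almgren_chriss_trajectory(X=1e6, T=1.0, sigma=0.02, eta=1e-6, lam=1e-6)
```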
Cosmological models, observational data and tension in Hubble constant ; We analyze how predictions of cosmological models depend on the choice of described observational data, on restrictions on flatness, and how this choice can alleviate the $H_0$ tension. These effects are demonstrated in the wCDM model in comparison with the standard $\Lambda$CDM model. We describe the Pantheon sample observations of Type Ia supernovae, 31 Hubble parameter data points $H(z)$ from cosmic chronometers, the extended sample with 57 $H(z)$ data points and observational manifestations of the cosmic microwave background radiation (CMB). For the wCDM and $\Lambda$CDM models in the flat case and with spatial curvature we calculate $\chi^2$ functions for all observed data in different combinations, and estimate optimal values of model parameters and their expected intervals. For both considered models the results essentially depend on the choice of data sets. In particular, for the wCDM model with $H(z)$ data, supernovae and CMB the $1\sigma$ estimations may vary from $H_0=67.52^{+0.96}_{-0.95}$ km s$^{-1}$ Mpc$^{-1}$ for all $N_H=57$ Hubble parameter data points up to $H_0=70.87^{+1.63}_{-1.62}$ km s$^{-1}$ Mpc$^{-1}$ for the flat case ($k=0$) and $N_H=31$. These results might be a hint of how to alleviate the problem of the $H_0$ tension: different estimates of the Hubble constant may be connected with filters and the choice of observational data.
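A minimal sketch of how the $\chi^2$ function for the $H(z)$ data enters: for a given expansion history, sum the squared, error-weighted residuals over the data points. The flat wCDM $H(z)$ below is the standard expression (radiation neglected); the dataset arrays and parameter values are placeholders to be supplied by the user.

```python
import numpy as np

def chi2_hz(model_H, z, H_obs, sigma_H, params):
    """chi^2 = sum_i (H_model(z_i; params) - H_obs_i)^2 / sigma_i^2."""
    H_model = model_H(np.asarray(z, dtype=float), *params)
    resid = (H_model - np.asarray(H_obs, dtype=float)) / np.asarray(sigma_H, dtype=float)
    return float(np.sum(resid ** 2))

def H_wcdm_flat(z, H0, Om, w):
    """Flat wCDM expansion history: H(z) = H0 * sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w)))."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * (1 + z) ** (3 * (1 + w)))

# Example with made-up data points, only to show the calling convention.
print(chi2_hz(H_wcdm_flat, [0.1, 0.5, 1.0], [72.0, 90.0, 120.0], [5.0, 8.0, 15.0],
              params=(70.0, 0.3, -1.0)))
```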
Performance Modeling of Epidemic Routing in Mobile Social Networks with Emphasis on Scalability ; This paper investigates the performance of epidemic routing in mobile social networks. It first analyzes the time taken for a node to meet the first node of a set of nodes restricted to move in a specific subarea. Afterwards, a monolithic Stochastic Reward Net (SRN) is proposed to evaluate the delivery delay and the average number of transmissions under epidemic routing by considering skewed location visiting preferences. This model is not scalable enough, in terms of the number of nodes and frequently visited locations. In order to achieve higher scalability, the folding technique is applied to the monolithic model, and an approximate folded SRN is proposed to evaluate the performance of epidemic routing. Discrete-event simulation is used to validate the proposed models. Both SRN models show high accuracy in predicting the performance of epidemic routing. We also propose an Ordinary Differential Equation (ODE) model for epidemic routing and compare it with the folded model. Obtained results show that the folded model is more accurate than the ODE model. Moreover, it is proved that the number of transmissions by the time of delivery follows a uniform distribution, in a general class of networks where the positions of nodes are always independent and identically distributed.
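The ODE model referred to above is, in its generic mean-field form, a pair of equations for the number of message carriers I(t) and the delivery probability P(t) under homogeneous mixing with pairwise meeting rate lambda; the sketch below integrates it numerically. This is the standard formulation from the epidemic-routing literature, offered as an assumption-laden illustration rather than the exact model of the paper (which also accounts for skewed location-visiting preferences).

```python
import numpy as np
from scipy.integrate import odeint

def epidemic_routing_ode(y, t, lam, N):
    """Mean-field epidemic routing: I = number of nodes carrying the message,
    P = probability the destination has received it; lam = pairwise meeting rate."""
    I, P = y
    dI = lam * I * (N - I)      # an infected node meets a susceptible node
    dP = lam * I * (1.0 - P)    # some infected node meets the destination
    return [dI, dP]

t = np.linspace(0.0, 2.0, 200)                       # time grid (arbitrary units)
sol = odeint(epidemic_routing_ode, [1.0, 0.0], t,    # start: one carrier, undelivered
             args=(0.05, 50))                        # illustrative lam and N
print(sol[-1])                                       # final (I, P)
```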
Directed Acyclic Graphs and causal thinking in clinical risk prediction modeling ; Background: In epidemiology, causal inference and prediction modeling methodologies have been historically distinct. Directed Acyclic Graphs (DAGs) are used to model a priori causal assumptions and inform variable selection strategies for causal questions. Although tools originally designed for prediction are finding applications in causal inference, the counterpart has remained largely unexplored. The aim of this theoretical and simulation-based study is to assess the potential benefit of using DAGs in clinical risk prediction modeling. Methods and Findings: We explore how incorporating knowledge about the underlying causal structure can provide insights about the transportability of diagnostic clinical risk prediction models to different settings. A single-predictor model in the causal direction is likely to have better transportability than one in the anticausal direction. We further probe whether causal knowledge can be used to improve predictor selection. We empirically show that the Markov Blanket, the set of variables including the parents, children, and parents of the children of the outcome node in a DAG, is the optimal set of predictors for that outcome. Conclusions: Our findings challenge the generally accepted notion that a change in the distribution of the predictors does not affect diagnostic clinical risk prediction model calibration if the predictors are properly included in the model. Furthermore, using DAGs to identify Markov Blanket variables may be a useful, efficient strategy to select predictors in clinical risk prediction models if strong knowledge of the underlying causal structure exists or can be learned.
An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation ; Compressing deep neural network (DNN) models becomes a very important and necessary technique for real-world applications, such as deploying those models on mobile devices. Knowledge distillation is one of the most popular methods for model compression, and many studies have been made on developing this technique. However, those studies have mainly focused on classification problems, and very few attempts have been made on regression problems, although there are many applications of DNNs to regression problems. In this paper, we propose a new formalism of knowledge distillation for regression problems. First, we propose a new loss function, teacher outlier rejection loss, which rejects outliers in training samples using teacher model predictions. Second, we consider a multi-task network with two outputs: one estimates training labels, which are in general contaminated by noisy labels; and the other estimates the teacher model's output, which is expected to modify the noisy labels following the memorization effect. By considering the multi-task network, training of the feature extraction of student models becomes more effective, and it allows us to obtain a better student model than one trained from scratch. We performed comprehensive evaluation with one simple toy model (a sinusoidal function) and two open datasets (MPIIGaze and Multi-PIE). Our results show consistent improvement in accuracy regardless of the annotation error level in the datasets.
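Since the abstract does not give the exact form of the losses, the following is only a rough sketch of the two ingredients it describes: samples whose labels disagree strongly with the teacher are dropped from the label-fitting term, and a second output head regresses the teacher's prediction. The threshold, scaling, and combination weights are placeholders, not the paper's settings.

```python
import numpy as np

def distillation_regression_loss(y_label, y_student, y_student_aux, y_teacher, thresh=3.0):
    """Sketch: (1) teacher-based outlier rejection for the label term;
    (2) an auxiliary head that matches the teacher's output."""
    resid = np.abs(y_label - y_teacher)
    scale = np.median(resid) + 1e-8
    keep = resid / scale < thresh                    # drop samples the teacher flags as outliers
    if keep.any():
        label_loss = np.mean((y_student[keep] - y_label[keep]) ** 2)
    else:
        label_loss = 0.0
    teacher_loss = np.mean((y_student_aux - y_teacher) ** 2)
    return label_loss + teacher_loss
```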
On multichannel film dosimetry with channel-independent perturbations ; Purpose: Different multichannel methods for film dosimetry have been proposed in the literature. Two of them are the weighted mean method and the method put forth by Micke et al. and Mayer et al. The purpose of this work was to compare their results and to develop a generalized channel-independent perturbations framework in which both methods enter as special cases. Methods: Four models of channel-independent perturbations were compared: weighted mean, Micke-Mayer method, uniform distribution and truncated normal distribution. A closed-form formula to calculate film doses and the associated Type B uncertainty for all four models was deduced. To evaluate the models, film dose distributions were compared with planned and measured dose distributions. At the same time, several elements of the dosimetry process were compared: film type (EBT2 versus EBT3), different waiting-time windows, reflection mode versus transmission mode scanning, and planned versus measured dose distribution for film calibration and for gamma-index analysis. The methods and the models described in this study are publicly accessible through IRISEU.Alpha 1.1 (http://www.iriseu.com). IRISEU.Alpha is a cloud computing web application for calibration and dosimetry of radiochromic films. Results: The truncated normal distribution model provided the best agreement between film and reference doses, both for calibration and gamma-index verification, and proved itself superior to both the weighted mean model, which neglects correlations between the channels, and the Micke-Mayer model, whose accuracy depends on the properties of the sensitometric curves. Conclusions: The truncated normal distribution model of channel-independent perturbations was found superior to the other three models under comparison and we propose its use for multichannel dosimetry.
Stochastic Weighted Graphs: Flexible Model Specification and Simulation ; In most domains of network analysis researchers consider networks that arise in nature with weighted edges. Such networks are routinely dichotomized in the interest of using available methods for statistical inference with networks. The generalized exponential random graph model (GERGM) is a recently proposed method used to simulate and model the edges of a weighted graph. The GERGM specifies a joint distribution for an exponential family of graphs with continuous-valued edge weights. However, current estimation algorithms for the GERGM only allow inference on a restricted family of model specifications. To address this issue, we develop a Metropolis-Hastings method that can be used to estimate any GERGM specification, thereby significantly extending the family of weighted graphs that can be modeled with the GERGM. We show that new flexible model specifications are capable of avoiding likelihood degeneracy and efficiently capturing network structure in applications where such models were not previously available. We demonstrate the utility of this new class of GERGMs through application to two real network data sets, and we further assess the effectiveness of our proposed methodology by simulating nondegenerate model specifications from the well-studied two-stars model. A working R version of the GERGM code is available in the supplement and will be incorporated in the gergm CRAN package.
Testing models of vacuum energy interacting with cold dark matter ; We test the models of vacuum energy interacting with cold dark matter and try to probe the possible deviation from the $\Lambda$CDM model using current observations. We focus on two specific models, $Q=3\beta H\rho_\Lambda$ and $Q=3\beta H\rho_c$. The data combinations come from the Planck 2013 data, the baryon acoustic oscillations measurements, the Type Ia supernovae data, the Hubble constant measurement, the redshift space distortions data and the galaxy weak lensing data. For the $Q=3\beta H\rho_c$ model, we find that it can be tightly constrained by all the data combinations, while for the $Q=3\beta H\rho_\Lambda$ model, there still exist significant degeneracies between parameters. The tightest constraints for the coupling constant are $\beta=0.026^{+0.036}_{-0.053}$ for $Q=3\beta H\rho_\Lambda$ and $\beta=0.00045\pm0.00069$ for $Q=3\beta H\rho_c$ at the $1\sigma$ level. For all the fit results, we find that the null interaction ($\beta=0$) is always consistent with data. Our work completes the discussion on the interacting dark energy model in the recent Planck 2015 papers. Considering this work together with the Planck 2015 results, it is believed that there is no evidence for the models beyond the standard $\Lambda$CDM model from the point of view of possible interaction.
A microscopic nuclear collective rotation-vibration model: 2D submodel ; We develop in this article a microscopic version of the successful phenomenological hydrodynamic Bohr-Davydov-Faessler-Greiner (BDFG) model for the collective rotation-vibration motion of a deformed nucleus. The model derivation is not limited to small oscillation amplitudes. The model generalizes the author's previous model to include interaction between collective oscillations in each pair of spatial directions, and to remove many of the previous-model approximations. To derive the model, the nuclear Schrödinger equation is canonically transformed to collective coordinates and then linearized using a constrained variational method. The associated transformation constraints are imposed on the wavefunction and not on the particle coordinates. This approach yields four self-consistent, time-reversal invariant, cranking-type Schrödinger equations for the rotation-vibration and intrinsic motions, and a self-consistency equation. To facilitate comparison with the BDFG model, simplify the solution of the equations, and gain physical insight, we restrict in this article the collective oscillations to only two space dimensions. For harmonic oscillator mean-field potentials, the equations are then solved in closed forms and applied to the ground-state rotational bands in some even-even light and rare-earth nuclei. The computed ground-state rotational band excitation energy, quadrupole moment and electric quadrupole transition probabilities are found to agree favourably with measured data and the results from mean-field, Sp(3,R), and SU(3) models.
Effects of low anisotropy on interacting holographic and new agegraphic scalar field models of dark energy ; A spatially homogeneous and anisotropic Bianchi type I universe is studied with the interacting holographic and new agegraphic scalar field models of dark energy. Within the framework of the anisotropic model, both the dynamics and the potential of these scalar field models are reconstructed according to the evolutionary behavior of both dark energy models. We also investigate the cosmological evolution of the interacting dark energy models, and compare it with observational data and schematic diagrams. In order to do so, we focus on observational determinations of the expansion history $H(z)$. Next, we evaluate the effects of anisotropy on various topics, such as the evolution of the growth of perturbations in the linear regime, the statefinder diagnostic, the Sandage-Loeb (SL) test and the distance modulus from the holographic and new agegraphic dark energy models, and compare the results with the standard FRW, $\Lambda$CDM and wCDM models. Our numerical results show the effects of the interaction and anisotropy on the evolutionary behavior of the new agegraphic scalar field models.
A modeling and simulation language for biological cells with coupled mechanical and chemical processes ; Biological cells are the prototypical example of active matter. Cells sense and respond to mechanical, chemical and electrical environmental stimuli with a range of behaviors, including dynamic changes in morphology and mechanical properties, chemical uptake and secretion, cell differentiation, proliferation, death, or migration. Modeling and simulation of such dynamic phenomena poses a number of computational challenges. A modeling language to describe cellular dynamics must be able to naturally represent complex intra and extracellular spatial structures, and coupled mechanical, chemical and electrical processes. In order to be useful to domain experts, a modeling language should be based on concepts, terms and principles native to the problem domain. A compiler must then be able to generate an executable model from this physically motivated description. Finally, an executable model must efficiently calculate the time evolution of such dynamic and inhomogeneous phenomena. We present a spatial hybrid systems modeling language, compiler and meshfree Lagrangian based simulation engine which will enable domain experts to define models using natural, biologically motivated constructs and to simulate time evolution of coupled cellular, mechanical and chemical processes acting on a time varying number of cells and their environment.
Discrete Boltzmann method for nonequilibrium flows based on Shakhov model ; A general framework for constructing a discrete Boltzmann model for nonequilibrium flows based on the Shakhov model is presented. The Hermite polynomial expansion and a set of discrete velocities with isotropy are adopted to solve the kinetic moments of the discrete equilibrium distribution function. Such a model possesses both an adjustable specific heat ratio and Prandtl number, and can be applied to a wide range of flow regimes including continuum, slip, and transition flows. To recover results for actual situations, the nondimensionalization process is demonstrated. To verify and validate the new model, several typical nonequilibrium flows including the Couette flow, Fourier flow, unsteady boundary heating problem, cavity flow, and Kelvin-Helmholtz instability are simulated. Comparisons are made between the results of the discrete Boltzmann model and those of previous models, including the analytic solution in slip flow, lattice ES-BGK, and DSMC based on both BGK and hard-sphere models. The results show that the new model can accurately capture the velocity slip and temperature jump near the wall, and shows excellent performance in predicting the nonequilibrium flow even in the transition flow regime. In addition, the measurement of nonequilibrium effects is further developed and the nonequilibrium strength $D_n$ in the $n$th-order moment space is defined. The nonequilibrium characteristics and the advantage of using $D_n$ in the Kelvin-Helmholtz instability are discussed. We conclude that the nonequilibrium strength $D_n$ is more appropriate for describing the interfaces than the individual components of $\Delta_n$. Besides, $D_3$ and $D_{3,1}$ can provide higher-resolution interfaces in the simulation of the Kelvin-Helmholtz instability.
Uniqueness for the 3-State Antiferromagnetic Potts Model on the Tree ; The antiferromagnetic $q$-state Potts model is perhaps the most canonical model for which the uniqueness threshold on the tree is not yet understood, largely because of the absence of monotonicities. Jonasson established the uniqueness threshold in the zero-temperature case, which corresponds to the $q$-colourings model. In the permissive case where the temperature is positive, the Potts model has an extra parameter $\beta\in(0,1)$, which makes the task of analysing the uniqueness threshold even harder and much less is known. In this paper, we focus on the case $q=3$ and give a detailed analysis of the Potts model on the tree by refining Jonasson's approach. In particular, we establish the uniqueness threshold on the $d$-ary tree for all values of $d\geq 2$. When $d\geq 3$, we show that the 3-state antiferromagnetic Potts model has uniqueness for all $\beta\geq 1-3/(d+1)$. The case $d=2$ is critical since it relates to the 3-colourings model on the binary tree ($\beta=0$), which has non-uniqueness. Nevertheless, we show that the Potts model has uniqueness for all $\beta\in(0,1)$ on the binary tree. Both of these results are tight since it is known that uniqueness does not hold in the complementary regime. Our proof technique gives, for general $q\geq 3$, an analytical condition for proving uniqueness based on the two-step recursion on the tree, which we conjecture to be sufficient to establish the uniqueness threshold for all non-critical cases $q\neq d+1$.
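The flavour of the underlying tree recursion can be seen in the following numerical probe (a heuristic check, not a proof, and not the paper's analytical condition): each child with marginal p sends the message m(c) proportional to 1 - (1-β)p(c) to its parent, and the root marginal on the d-ary tree is the normalized product of d such messages; iterating from several starting distributions and testing whether the iterates merge gives a quick indication of uniqueness versus non-uniqueness.

```python
import numpy as np

def step(p, beta, d):
    # message a parent receives when each of its d children has marginal p
    m = (1.0 - (1.0 - beta) * p) ** d
    return m / m.sum()

def probe_uniqueness(beta, d, iters=5000, tol=1e-8):
    starts = [np.array([1.0, 0.0, 0.0]),        # fully biased start
              np.array([1/3, 1/3, 1/3]),        # symmetric start
              np.array([0.0, 0.5, 0.5])]
    limits = []
    for p in starts:
        for _ in range(iters):
            p = step(p, beta, d)
        limits.append(p)
    spread = max(np.max(np.abs(q - limits[0])) for q in limits)
    return spread < tol                          # True suggests a unique Gibbs measure

d = 4                                            # threshold from the paper: 1 - 3/(d+1) = 0.4
print(probe_uniqueness(beta=0.45, d=d))          # above the threshold: expect True
print(probe_uniqueness(beta=0.30, d=d))          # below the threshold: expect False
```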
Dynamical Systems Analysis in Post-Friedmann Parametrizations of Modified Theories of Gravity ; We carry out a dynamical analysis of first order perturbations for Cold Dark Matter, Lambda Cold Dark Matter, and a couple of Modified Gravity models using the Parametrized Post-Friedmann formalism. We use normalized variables to set up the proper dynamical system of equations through which we make the analysis, in order to shed some light on the dynamics of such perturbations inside these models. For the Modified Gravity models, we use the scale-independent and scale-dependent parametrizations; in particular, two $f(R)$ and two Chameleon-like models are considered within the quasi-static approximation. Given the employed formalism, we find that the critical points and stability features of the dynamical systems for the Modified Gravity models are the same as those found in the standard Lambda Cold Dark Matter model. However, the behavior around the critical points suffers important modifications in some specific cases. We explicitly find that signatures of these Modified Gravity models mainly arise in the velocity perturbations, while the density contrast and the curvature potentials turn out to be less sensitive to the parametrization taken into consideration. We also provide a percentage estimation of the extent of modification in the perturbations in the Modified Gravity models considered, in comparison to the standard Lambda Cold Dark Matter model, along the expansion history and for a couple of wavenumbers.
A physically based compact I-V model for monolayer TMDC channel MOSFET and DMFET biosensor ; In this work, a compact transport model has been developed for the monolayer transition metal dichalcogenide (TMDC) channel MOSFET. The analytical model solves Poisson's equation for the inversion charge density to get the electrostatic potential in the channel. Current is then calculated by solving the drift-diffusion equation. The model makes the gradual channel approximation to simplify the solution procedure. The appropriate density of states obtained from first-principles density functional theory simulation has been considered to keep the model physically accurate for the monolayer TMDC channel FET. The outcome of the model has been benchmarked against both experimental and numerical quantum simulation results with the help of a few fitting parameters. Using the compact model, detailed output and transfer characteristics of the monolayer WSe2 FET have been studied, and various performance parameters have been determined. The study confirms excellent ON- and OFF-state performance of the monolayer WSe2 FET, which could be viable for next generation high-speed, low power applications. Also, the proposed model has been extended to study the operation of a biosensor. A monolayer MoS2 channel based dielectric-modulated FET (DMFET) is investigated using the compact model for detection of a biomolecule in a dry environment.
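For orientation only, a generic square-law-style compact I-V sketch of the sort such models reduce to in strong inversion is given below; the parameter values and the smooth weak/strong-inversion interpolation are assumptions for illustration and do not reproduce the paper's TMDC-specific charge model.

```python
import numpy as np

def id_compact(vgs, vds, vth=0.4, k=2e-4, n=1.2, vt=0.0259):
    """Drain current (A) from a generic square-law model with a smooth
    weak/strong-inversion interpolation; all parameter values are illustrative."""
    vov = n * vt * np.log1p(np.exp((vgs - vth) / (n * vt)))   # effective overdrive voltage
    vds_eff = np.minimum(vds, vov)                            # crude saturation clamp
    return k * (vov * vds_eff - 0.5 * vds_eff ** 2)

vds = np.linspace(0.0, 1.2, 7)
for vgs in (0.6, 0.9, 1.2):
    print(f"Vgs = {vgs:.1f} V:", np.round(id_compact(vgs, vds) * 1e6, 2), "uA")
```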
From Social to Individuals: a Parsimonious Path of Multilevel Models for Crowdsourced Preference Aggregation ; In crowdsourced preference aggregation, it is often assumed that all the annotators are subject to a common preference or social utility function which generates their comparison behaviors in experiments. However, in reality annotators are subject to variations due to multi-criteria, abnormal, or a mixture of such behaviors. In this paper, we propose a parsimonious mixed-effects model, which takes into account both the fixed effect that the majority of annotators follows a common linear utility model, and the random effect that some annotators might deviate significantly from the common model and exhibit strongly personalized preferences. The key algorithm in this paper establishes a dynamic path from the social utility to individual variations, with different levels of sparsity on personalization. The algorithm is based on the Linearized Bregman Iterations, which lead to easy parallel implementations to meet the need of large-scale data analysis. In this unified framework, three kinds of random utility models are presented, including the basic linear model with $L_2$ loss, the Bradley-Terry model, and the Thurstone-Mosteller model. The validity of these multilevel models is supported by experiments with both simulated and real-world datasets, which show that the parsimonious multilevel models exhibit improvements in both interpretability and predictive precision compared with traditional HodgeRank.
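A minimal sketch of the Linearized Bregman idea in this setting is given below, using a squared loss and a toy data format of (annotator, item i, item j, score) tuples; the step sizes, the kappa parameter and the data layout are assumptions for illustration, not the paper's algorithm or datasets.

```python
import numpy as np

def lbi_preference(comparisons, n_items, n_users, kappa=5.0, alpha=1e-3, n_iter=3000):
    # comparisons: list of (user u, item i, item j, score y ~ utility_i - utility_j)
    s = np.zeros(n_items)                       # common (dense) utility
    delta = np.zeros((n_users, n_items))        # personalized (sparse) deviations
    z = np.zeros_like(delta)                    # Bregman auxiliary variable
    for _ in range(n_iter):
        grad_s = np.zeros_like(s)
        grad_d = np.zeros_like(delta)
        for u, i, j, y in comparisons:
            r = (s[i] + delta[u, i]) - (s[j] + delta[u, j]) - y
            grad_s[i] += r; grad_s[j] -= r
            grad_d[u, i] += r; grad_d[u, j] -= r
        s -= alpha * grad_s                     # plain gradient step for the common part
        z -= alpha * grad_d                     # Bregman step for the personalized part
        delta = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)   # soft thresholding
    return s, delta

# toy data: 3 annotators, 4 items; annotator 2 strongly over-rates item 3
rng = np.random.default_rng(0)
true_s = np.array([0.0, 1.0, 2.0, 3.0])
true_d = np.zeros((3, 4)); true_d[2, 3] = 2.0
comps = [(u, i, j, (true_s[i] + true_d[u, i]) - (true_s[j] + true_d[u, j])
          + 0.1 * rng.normal())
         for u in range(3) for i in range(4) for j in range(4) if i != j]
s_hat, d_hat = lbi_preference(comps, 4, 3)
print(np.round(s_hat, 2)); print(np.round(d_hat, 2))
```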
Context Aware Machine Learning ; We propose a principle for exploring context in machine learning models. Starting with a simple assumption that each observation may or may not depend on its context, a conditional probability distribution is decomposed into two parts: context-free and context-sensitive. Then, by employing the log-linear word production model for relating random variables to their embedding space representation and making use of the convexity of the natural exponential function, we show that the embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts, respectively. This simple treatment of context provides a unified view of many existing deep learning models, leading to revisions of these models able to achieve significant performance boosts. Specifically, our upgraded version of a recent sentence embedding model not only outperforms the original one by a large margin, but also leads to a new, principled approach for compositing the embeddings of bag-of-words features, as well as a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short-term memory (LSTM) model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multilayer extension provides a new principle for deep neural networks which subsumes the residual network (ResNet) as its special case, and its extension to the convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.
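The decomposition itself can be sketched very simply: an observation's embedding is a gated sum of a context-free vector and a context-sensitive vector. The toy code below shows only that structural idea; the shapes, the context summary and the sigmoid gate are assumptions and do not follow the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 100
E_free = rng.normal(size=(vocab, d))     # context-free embeddings
W_ctx = rng.normal(size=(d, d))          # maps a context summary into the word's space

def embed(word_id, context_ids):
    ctx = E_free[context_ids].mean(axis=0)                  # simple context summary
    e_ctx = W_ctx @ ctx                                     # context-sensitive component
    gate = 1.0 / (1.0 + np.exp(-E_free[word_id] @ ctx))     # scalar mixing weight
    return gate * E_free[word_id] + (1.0 - gate) * e_ctx    # weighted sum of the two parts

print(embed(3, [5, 7, 9]).shape)   # (16,)
```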
A mixed model approach to drought prediction using artificial neural networks: Case of an operational drought monitoring environment ; Droughts, with their increasing frequency of occurrence, continue to negatively affect livelihoods and elements at risk. For example, the 2011 drought in East Africa caused massive losses, documented to have cost the Kenyan economy over $12 bn. With the foregoing, the demand for ex-ante drought monitoring systems is ever-increasing. The study uses 10 precipitation and vegetation variables that are lagged over 1-, 2- and 3-month time steps to predict drought situations. In the model space search for the most predictive artificial neural network (ANN) model, as opposed to the traditional greedy search for the most predictive variables, we use the generalized additive model (GAM) approach. Together with a set of assumptions, we thereby reduce the cardinality of the space of models. Even though we build a total of 102 GAM models, only 21 have $R^2$ greater than 0.7 and are thus subjected to the ANN process. The ANN process itself uses a brute-force approach that automatically partitions the training data into 10 sub-samples, builds the ANN models on these samples and evaluates their performance using multiple metrics. The results show the superiority of the 1-month lag of the variables as compared to the longer time lags of 2 and 3 months. The champion ANN model recorded an $R^2$ of 0.78 in model testing using the out-of-sample data. This illustrates its ability to be a good predictor of drought situations 1 month ahead. Investigated as a classifier, the champion model has a modest accuracy of 66% and a multi-class area under the ROC curve (AUROC) of 89.99%.
Fairwashing the risk of rationalization ; Blackbox explanation is the problem of explaining how a machine learning model whose internal logic is hidden to the auditor and generally complex produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair blackbox model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair blackbox model. We empirically evaluate our rationalization technique on blackbox models trained on realworld datasets and show that one can obtain rule lists with high fidelity to the blackbox model while being considerably less unfair at the same time.
A hierarchical Bayesian approach for estimating the origin of a mixed population ; We propose a hierarchical Bayesian model to estimate the proportional contribution of source populations to a newly founded colony. Samples are derived from the first generation offspring in the colony, but mating may occur preferentially among migrants from the same source population. Genotypes of the newly founded colony and source populations are used to estimate the mixture proportions, and the mixture proportions are related to environmental and demographic factors that might affect the colonizing process. We estimate an assortative mating coefficient, mixture proportions, and regression relationships between environmental factors and the mixture proportions in a single hierarchical model. The firststage likelihood for genotypes in the newly founded colony is a mixture multinomial distribution reflecting the colonizing process. The environmental and demographic data are incorporated into the model through a hierarchical prior structure. A simulation study is conducted to investigate the performance of the model by using different levels of population divergence and number of genetic markers included in the analysis. We use Markov chain Monte Carlo MCMC simulation to conduct inference for the posterior distributions of model parameters. We apply the model to a data set derived from grey seals in the Orkney Islands, Scotland. We compare our model with a similar model previously used to analyze these data. The results from both the simulation and application to real data indicate that our model provides better estimates for the covariate effects.
Two-Stage Non-penalized Corrected Least Squares for High Dimensional Linear Models with Measurement Error or Missing Covariates ; This paper provides an alternative to penalized estimators for estimation and variable selection in high dimensional linear regression models with measurement error or missing covariates. We propose estimation via bias-corrected least squares after model selection. We show that by separating model selection and estimation, it is possible to achieve an improved rate of convergence of the $L_2$ estimation error compared to the rate $\sqrt{s\log p/n}$ achieved by simultaneous estimation and variable selection methods such as $L_1$-penalized corrected least squares. If the correct model is selected with high probability, then the $L_2$ rate of convergence for the proposed method is indeed the oracle rate of $\sqrt{s/n}$. Here $s$ and $p$ are the number of non-zero parameters and the model dimension, respectively, and $n$ is the sample size. Under very general model selection criteria, the proposed method is computationally simpler and statistically at least as efficient as the $L_1$-penalized corrected least squares method, performs model selection without the availability of the bias correction matrix, and is able to provide estimates with only a small sub-block of the bias correction covariance matrix, of order $s\times s$, in comparison to the $p\times p$ correction matrix required for computation of the $L_1$-penalized version. Furthermore, we show that the model selection requirements are met by a correlation screening type method and by the $L_1$-penalized corrected least squares method. Also, the proposed methodology, when applied to the estimation of precision matrices with missing observations, is seen to perform at least as well as existing $L_1$-penalty based methods. All results are supported empirically by a simulation study.
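A stripped-down sketch of the two-stage idea for additive measurement error is shown below: select a support first (here by plain correlation screening, with the sparsity level assumed known), then solve bias-corrected normal equations on that support, which only needs the s-by-s block of the error covariance. All simulation settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 200, 500, 5
beta = np.zeros(p); beta[:s] = [2.0, -1.5, 1.0, -1.0, 0.8]
X = rng.normal(size=(n, p))
y = X @ beta + 0.5 * rng.normal(size=n)
sigma_u = 0.3
W = X + sigma_u * rng.normal(size=(n, p))          # observed, error-contaminated design

# Stage 1: pick a support by correlation screening (a stand-in for the selection step,
# with the sparsity level s assumed known)
score = np.abs(W.T @ y) / n
S = np.sort(np.argsort(score)[-s:])

# Stage 2: bias-corrected least squares on the selected support; only the s-by-s block
# of the measurement-error covariance is needed here
WS = W[:, S]
gram_corrected = WS.T @ WS / n - (sigma_u ** 2) * np.eye(len(S))
beta_hat = np.zeros(p)
beta_hat[S] = np.linalg.solve(gram_corrected, WS.T @ y / n)
print(S, np.round(beta_hat[S], 2))
```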
Inverse Symmetric Inflationary Attractors ; We present a class of inflationary potentials which are invariant under a special symmetry, which depends on the parameters of the models. As we show, in certain limiting cases, the inverse symmetric potentials are qualitatively similar to the $\alpha$-attractor models, since the resulting observational indices are identical. However, there are some quantitative differences which we discuss in some detail. As we show, some inverse symmetric models always yield results compatible with observations, but this strongly depends on the asymptotic form of the potential at large e-folding numbers. In fact, when the limiting functional form is identical to the one corresponding to the $\alpha$-attractor models, the compatibility with the observations is guaranteed. Also, we find the relation of the inverse symmetric models to the Starobinsky model and we highlight the differences. In addition, an alternative inverse symmetric model is studied and, as we show, not all the inverse symmetric models are viable. Moreover, we study the corresponding $F(R)$ gravity theory and we show that the Jordan frame theory belongs to the $R^2$ attractor class of models. Finally, we discuss a non-minimally coupled theory and we show that the attractor behavior occurs in this case too.
High-Fidelity Model Order Reduction for Microgrids Stability Assessment ; Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. While detailed models are available, they are both computationally expensive and cannot provide insight into the instability mechanisms and factors. In this paper, a computationally efficient and accurate reduced-order model is proposed for modeling inverter-based microgrids. The main factors affecting microgrid stability are analyzed using the developed reduced-order model and are shown to be unique for the microgrid-based network, which has no direct analogy to large-scale power systems. Particularly, it has been discovered that the stability limits for the conventional droop-based system ($\omega$-$P$/$V$-$Q$) are determined by the ratio of inverter rating to network capacity, leading to a smaller stability region for microgrids with shorter lines. The theoretical derivation has been provided to verify the above investigation based on both the simplified and generalized network configurations. More importantly, the proposed reduced-order model not only maintains the modeling accuracy but also enhances the computation efficiency. Finally, the results are verified with the detailed model via both frequency and time domain analyses.
Constrained Sparse Galerkin Regression ; Although major advances have been achieved over the past decades for the reduction and identification of linear systems, deriving nonlinear low-order models still is a challenging task. In this work, we develop a new data-driven framework to identify nonlinear reduced-order models of a fluid by combining dimensionality reduction techniques (e.g., proper orthogonal decomposition) and sparse regression techniques from machine learning. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, namely energy-preserving quadratic nonlinearities. The resulting models, hereafter referred to as Galerkin regression models, incorporate many beneficial aspects of Galerkin projection, but without the need for a full-order or high-fidelity solver to project the Navier-Stokes equations. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. Galerkin regression models also readily generalize to include higher-order nonlinear terms that model the effect of truncated modes. The effectiveness of Galerkin regression is demonstrated on two different flow configurations: the two-dimensional flow past a circular cylinder and the shear-driven cavity flow. For both cases, the accuracy of the identified models compares favorably against reduced-order models obtained from a standard Galerkin projection procedure. Present results highlight the importance of cubic nonlinearities in the construction of accurate nonlinear low-dimensional approximations of the flow systems, something which cannot be readily obtained using a standard Galerkin projection of the Navier-Stokes equations. Finally, the entire code base for our constrained sparse Galerkin regression algorithm is freely available online.
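To give a flavour of the sparse-regression step, the sketch below runs plain sequentially thresholded least squares (the unconstrained core of SINDy) on simulated Lorenz data; the energy-preserving constraint on the quadratic terms that defines Galerkin regression is omitted here, and the system, library and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, b=8.0 / 3.0):
    return [sigma * (x[1] - x[0]), x[0] * (rho - x[2]) - x[1], x[0] * x[1] - b * x[2]]

t = np.linspace(0.0, 20.0, 8000)
X = solve_ivp(lorenz, (0.0, 20.0), [-8.0, 7.0, 27.0], t_eval=t).y.T
dX = np.gradient(X, t, axis=0)                     # numerical time derivatives

def library(X):
    x, y, z = X.T                                  # constant, linear and quadratic terms
    return np.column_stack([np.ones_like(x), x, y, z, x*x, x*y, x*z, y*y, y*z, z*z])

Theta = library(X)
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):                                # sequentially thresholded least squares
    Xi[np.abs(Xi) < 0.1] = 0.0
    for k in range(3):
        active = np.abs(Xi[:, k]) > 0
        Xi[active, k] = np.linalg.lstsq(Theta[:, active], dX[:, k], rcond=None)[0]
print(np.round(Xi.T, 2))                           # recovered coefficients per state equation
```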
The Quamoco Product Quality Modelling and Assessment Approach ; Published software quality models either provide abstract quality attributes or concrete quality assessments. There are no models that seamlessly integrate both aspects. In the project Quamoco, we built a comprehensive approach with the aim to close this gap. For this, we developed in several iterations a meta quality model specifying general concepts, a quality base model covering the most important quality factors, and a quality assessment approach. The meta model introduces the new concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects. Product factors have measures and instruments to operationalise quality by measurements from manual inspection and tool analysis. The base model uses the ISO 25010 quality attributes, which we refine by 200 factors and 600 measures for Java and C# systems. We found in several empirical validations that the assessment results fit the expectations of experts for the corresponding systems. The empirical analyses also showed that several of the correlations are statistically significant and that the maintainability part of the base model has the highest correlation, which fits the fact that this part is the most comprehensive. Although we still see room for extending and improving the base model, it shows a high correspondence with expert opinions and hence is able to form the basis for repeatable and understandable quality assessments in practice.
A Review of Mathematical Modeling, Simulation and Analysis of Membrane Channel Charge Transport ; The molecular mechanism of ion channel gating and substrate modulation is elusive for many voltage-gated ion channels, such as eukaryotic sodium ones. The understanding of channel functions is a pressing issue in molecular biophysics and biology. Mathematical modeling, computation and analysis of membrane channel charge transport have become an emergent field and give rise to significant contributions to our understanding of ion channel gating and function. This review summarizes recent progress and outlines remaining challenges in mathematical modeling, simulation and analysis of ion channel charge transport. One of our focuses is the Poisson-Nernst-Planck (PNP) model and its generalizations. Specifically, we discuss the basic framework of the PNP system and some of its extensions, including size effects, ion-water interactions, coupling with density functional theory, and the relation to fluid flow models. A reduced theory, the Poisson-Boltzmann-Nernst-Planck (PBNP) model, and a differential geometry based ion transport model are also discussed. For proton channels, a multiscale and multiphysics Poisson-Boltzmann-Kohn-Sham (PBKS) model is presented. We show that all of these ion channel models can be cast into a unified variational multiscale framework with a macroscopic continuum domain of the solvent and a microscopic discrete domain of the solute. The main strategy is to construct a total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation and chemical potential related energies. Current computational algorithms and tools for numerical simulations, as well as results from mathematical analysis of ion channel systems, are also surveyed.
$(\mathfrak{gl}_M, \mathfrak{gl}_N)$-Dualities in Gaudin Models with Irregular Singularities ; We establish $(\mathfrak{gl}_M, \mathfrak{gl}_N)$-dualities between quantum Gaudin models with irregular singularities. Specifically, for any $M, N \in \mathbb{Z}_{\geq 1}$ we consider two Gaudin models: the one associated with the Lie algebra $\mathfrak{gl}_M$, which has a double pole at infinity and $N$ poles, counting multiplicities, in the complex plane, and the same model but with the roles of $M$ and $N$ interchanged. Both models can be realized in terms of Weyl algebras, i.e., free bosons; we establish that, in this realization, the algebras of integrals of motion of the two models coincide. At the classical level we establish two further generalizations of the duality. First, we show that there is also a duality for realizations in terms of free fermions. Second, in the bosonic realization we consider the classical cyclotomic Gaudin model associated with the Lie algebra $\mathfrak{gl}_M$ and its diagram automorphism, with a double pole at infinity and $2N$ poles, counting multiplicities, in the complex plane. We prove that it is dual to a non-cyclotomic Gaudin model associated with the Lie algebra $\mathfrak{sp}_{2N}$, with a double pole at infinity and $M$ simple poles in the complex plane. In the special case $N=1$ we recover the well-known self-duality in the Neumann model.
$\mathbf{Z}_n$ clock models and chains of $so(n)_2$ non-Abelian anyons: symmetries, integrable points and low energy properties ; We study two families of quantum models which have been used previously to investigate the effect of topological symmetries in one-dimensional correlated matter. Various striking similarities are observed between certain $\mathbf{Z}_n$ quantum clock models, spin chains generalizing the Ising model, and chains of non-Abelian anyons constructed from the $so(n)_2$ fusion category for odd $n$, both subject to periodic boundary conditions. In spite of the differences between these two types of quantum chains, e.g. their Hilbert spaces being spanned by tensor products of local spin states or fusion paths of anyons, the symmetries of the lattice models are shown to be closely related. Furthermore, under a suitable mapping between the parameters describing the interaction between spins and anyons, the respective Hamiltonians share part of their energy spectrum although their degeneracies may differ. This spin-anyon correspondence can be extended by fine-tuning of the coupling constants, leading to exactly solvable models. We show that the algebraic structures underlying the integrability of the clock models and the anyon chain are the same. For $n=3,5,7$ we perform an extensive finite size study, both numerical and based on the exact solution of these models, to map out their ground state phase diagram and to identify the effective field theories describing their low energy behaviour. We observe that the continuum limit at the integrable points can be described by rational conformal field theories with extended symmetry algebras which can be related to the discrete ones of the lattice models.
Train, Diagnose and Fix: Interpretable Approach for Fine-grained Action Recognition ; Despite the growing discriminative capabilities of modern deep learning methods for recognition tasks, the inner workings of the state-of-the-art models still remain mostly black boxes. In this paper, we propose a systematic interpretation of model parameters and hidden representations of Residual Temporal Convolutional Networks (Res-TCN) for action recognition in time-series data. We also propose a Feature Map Decoder as part of the interpretation analysis, which outputs a representation of the model's hidden variables in the same domain as the input. Such analysis empowers us to expose the model's characteristic learning patterns in an interpretable way. For example, through the diagnosis analysis, we discovered that our model has learned to achieve viewpoint invariance by implicitly learning to perform rotational normalization of the input to a more discriminative view. Based on the findings from the model interpretation analysis, we propose a targeted refinement technique, which can generalize to various other recognition models. The proposed work introduces a three-stage paradigm for model learning: training, interpretable diagnosis and targeted refinement. We validate our approach on the skeleton-based 3D human action recognition benchmark NTU RGB+D. We show that the proposed workflow is an effective model learning strategy and the resulting Multi-stream Residual Temporal Convolutional Network (MS-Res-TCN) achieves state-of-the-art performance on NTU RGB+D.
A Novel Model for Arbitration between Planning and Habitual Control Systems ; It is well established that human decision making and instrumental control use multiple systems, some of which use habitual action selection and some of which require deliberate planning. Deliberate planning systems use predictions of action outcomes based on an internal model of the agent's environment, while habitual action selection systems learn to automate by repeating previously rewarded actions. Habitual control is computationally efficient but may be inflexible in changing environments. Conversely, deliberate planning may be computationally expensive, but flexible in dynamic environments. This paper proposes a general architecture comprising both control paradigms by introducing an arbitrator that controls which subsystem is used at any time. This system is implemented for a target-reaching task with a simulated two-joint robotic arm that comprises a supervised internal model and deep reinforcement learning. Through permutation of target-reaching conditions, we demonstrate that the proposed system is capable of rapidly learning the kinematics of the system without a priori knowledge, and is robust to (A) changing environmental reward and kinematics, and (B) occluded vision. The arbitrator model is compared to exclusive deliberate planning with the internal model and exclusive habitual control instances of the model. The results show how such a model can harness the benefits of both systems, using fast decisions in reliable circumstances while optimizing performance in changing environments. In addition, the proposed model learns very fast. Finally, the system which includes internal models is able to reach the target under visual occlusion, while the pure habitual system is unable to operate sufficiently under such conditions.
Iterated filtering methods for Markov process epidemic models ; Dynamic epidemic models have proven valuable for public health decision makers as they provide useful insights into the understanding and prevention of infectious diseases. However, inference for these types of models can be difficult because the disease spread is typically only partially observed, e.g. in the form of reported incidences in given time periods. This chapter discusses how to perform likelihood-based inference for partially observed Markov epidemic models when it is relatively easy to generate samples from the Markov transmission model while the likelihood function is intractable. The first part of the chapter reviews the theoretical background of inference for partially observed Markov processes (POMP) via iterated filtering. In the second part of the chapter, the performance of the method and associated practical difficulties are illustrated on two examples. In the first example, a simulated outbreak data set consisting of the number of newly reported cases aggregated by week is fitted to a POMP where the underlying disease transmission model is assumed to be a simple Markovian SIR model. The second example illustrates possible model extensions such as seasonal forcing and overdispersion in both the transmission and the observation model, which can be used, e.g., when analysing routinely collected rotavirus surveillance data. Both examples are implemented using the R package pomp (King et al., 2016) and the code is made available online.
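The workhorse inside iterated filtering is a particle filter that approximates the intractable likelihood. Below is a stripped-down bootstrap particle filter for weekly case counts from a discrete-time stochastic SIR model, written in Python rather than with the pomp package; the transmission step, reporting model, parameter values and the outbreak data are all assumptions for illustration, and the outer iterated-filtering loop that perturbs parameters between passes is omitted.

```python
import numpy as np
from scipy.stats import binom

def sir_step(S, I, beta, gamma, N, rng):
    # one week of a discrete-time stochastic SIR model (chain-binomial style)
    p_inf = 1.0 - np.exp(-beta * I / N)
    p_rec = 1.0 - np.exp(-gamma)
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    return S - new_inf, I + new_inf - new_rec, new_inf

def pf_loglik(cases, beta, gamma, N=10_000, I0=20, n_part=2000, report=0.5, seed=0):
    rng = np.random.default_rng(seed)
    S = np.full(n_part, N - I0)
    I = np.full(n_part, I0)
    loglik = 0.0
    for y in cases:
        S, I, new_inf = sir_step(S, I, beta, gamma, N, rng)
        w = binom.pmf(y, new_inf, report)          # reported cases ~ Binomial(new_inf, report)
        if w.mean() == 0.0:                        # every particle inconsistent with the data
            return -np.inf
        loglik += np.log(w.mean())
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())   # multinomial resampling
        S, I = S[idx], I[idx]
    return loglik

weekly_cases = [15, 28, 50, 90, 120, 130, 100, 60, 30, 15]     # made-up outbreak data
print(pf_loglik(weekly_cases, beta=1.6, gamma=1.0))
```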
Analytical Modeling of Metal Gate Granularity based Threshold Voltage Variability in NWFET ; Estimation of threshold voltage ($V_T$) variability for NWFETs has been computationally expensive due to the lack of analytical models. Variability estimation of NWFETs is essential to design the next generation of logic circuits. Compared to other process-induced variabilities, Metal Gate Granularity (MGG) is of paramount importance due to its large impact on $V_T$ variability. Here, an analytical model is proposed to estimate $V_T$ variability caused by MGG. We extend our earlier FinFET-based MGG model to a cylindrical NWFET by satisfying three additional requirements. First, the gate dielectric layer is replaced by silicon of electrostatically equivalent thickness using a long-cylinder approximation; second, metal grains in NWFETs satisfy a periodic boundary condition in the azimuthal direction; third, the electrostatics is analytically solved in cylindrical polar coordinates with the gate boundary condition defined by MGG. We show that quantum effects only shift the mean of the $V_T$ distribution without significant impact on the variability estimated by our electrostatics-based model. The $V_T$ distribution estimated by our model matches TCAD simulations. The model quantitatively captures the grain size dependence of $\sigma_{V_T}$ with excellent accuracy (6% error) compared to stochastic 3D TCAD simulations, which is a significant improvement over the state-of-the-art model, which fails to produce even a qualitative agreement. The proposed model is 63 times faster compared to commercial TCAD simulations.
Dynamic Island Model based on Spectral Clustering in Genetic Algorithm ; How to maintain relatively high diversity is important to avoid premature convergence in population-based optimization methods. The island model is widely considered as a major approach to achieve this because of its flexibility and high efficiency. The model maintains a group of subpopulations on different islands and allows subpopulations to interact with each other via predefined migration policies. However, the current island model has some drawbacks. One is that, after a certain number of generations, different islands may retain quite similar, converged subpopulations, thereby losing diversity and decreasing efficiency. Another drawback is that determining the number of islands to maintain is also very challenging. Meanwhile, initializing many subpopulations increases the randomness of the island model. To address these issues, we propose a dynamic island model (DIM-SP) which can force each island to maintain a different subpopulation, control the number of islands dynamically and start with one subpopulation. The proposed island model outperforms three other state-of-the-art island models on three baseline optimization problems, including the job-shop scheduling problem, the travelling salesman problem and the quadratic multiple knapsack problem.
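A toy sketch of the central idea, re-partitioning a GA population into islands with spectral clustering so that each island holds a distinct cluster of solutions, is given below; the test function, genetic operators, fixed cluster count and re-clustering schedule are all assumptions for illustration and do not follow DIM-SP's actual policies.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def evolve_island(sub, rng):
    # truncation selection followed by blend crossover and Gaussian mutation
    parents = sub[np.argsort(rastrigin(sub))[: max(2, len(sub) // 2)]]
    kids = [0.5 * (parents[rng.integers(len(parents))] + parents[rng.integers(len(parents))])
            + 0.1 * rng.normal(size=sub.shape[1]) for _ in range(len(sub))]
    return np.array(kids)

rng = np.random.default_rng(0)
pop = rng.uniform(-5.12, 5.12, size=(60, 5))       # start from a single population
labels = np.zeros(len(pop), dtype=int)
for gen in range(1, 201):
    if gen % 20 == 0:                              # periodically re-form the islands
        labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                                    n_neighbors=8, random_state=0).fit_predict(pop)
    parts = [evolve_island(pop[labels == isl], rng) for isl in np.unique(labels)]
    labels = np.concatenate([np.full(len(part), isl)
                             for isl, part in zip(np.unique(labels), parts)])
    pop = np.vstack(parts)
print(round(float(rastrigin(pop).min()), 3))       # best fitness found
```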
A Lagrangian probabilitydensityfunction model for collisional turbulent fluidparticle flows. II. Application to homogeneous flows ; The Lagrangian probabilitydensityfunction model, proposed in Part I for dense particleladen turbulent flows, is validated here against EulerianLagrangian direct numerical simulation EL data for different homogeneous flows, namely statistically steady and decaying homogeneous isotropic turbulence, homogeneousshear flow and clusterinduced turbulence CIT. We consider the general model developed in Part I adapted to the homogeneous case together with a simplified version in which the decomposition of the phaseaveraged PA particlephase fluctuating energy into the spatially correlated and uncorrelated components is not used, and only total exchange of kinetic energy between phases is allowed. The simplified model employs the standard twoway coupling approach. The comparison between EL simulations and the two stochastic models in homogeneous and isotropic turbulence and in homogeneousshear flow shows that in all cases both models are capable to reproduce rather well the flow behaviour, notably for dilute flows. The analysis of the CIT gives more insights on the physical nature of such systems and about the quality of the models. Results elucidate the fact that simple twoway coupling is sufficient to induce turbulence, even though the granular energy is not considered. Furthermore, firstorder moments including velocity of the fluid seen by particles can be fairly well represented with such a simplified stochastic model. However, the decomposition into spatially correlated and uncorrelated components is found to be necessary to account for anisotropic energy exchanges. When these factors are properly accounted for as in the complete model, the agreement with the EL statistics is satisfactory up to second order.