Bi-galileon theory II: phenomenology ; We continue to introduce bi-galileon theory, the generalisation of the single galileon model introduced by Nicolis et al. The theory contains two coupled scalar fields and is described by a Lagrangian that is invariant under Galilean shifts in those fields. This paper is the second of two, and focuses on the phenomenology of the theory. We are particularly interested in models that admit solutions that are asymptotically self-accelerating or asymptotically self-tuning. In contrast to the single galileon theories, we find examples of self-accelerating models that are simultaneously free from ghosts, tachyons and tadpoles, able to pass solar system constraints through Vainshtein screening, and do not suffer from problems with superluminality, Cherenkov emission or strong coupling. We also find self-tuning models and discuss how Weinberg's no-go theorem is evaded by breaking Poincaré invariance in the scalar sector. Whereas the galileon description is valid all the way down to solar system scales for the self-accelerating models, the same cannot unfortunately be said for the self-tuning models, owing to the scalars backreacting strongly onto the geometry.
How to Falsify the GR+$\Lambda$CDM Model with Galaxy Redshift Surveys ; A wide range of models describing modifications to General Relativity have been proposed, but no fundamental parameter set exists to describe them. Similarly, no fundamental theory exists for dark energy to parameterize its potential deviation from a cosmological constant. This motivates a model-independent search for deviations from the concordance GR+$\Lambda$CDM cosmological model in large galaxy redshift surveys. We describe two model-independent tests of the growth of cosmological structure, in the form of quantities that must equal one if GR+$\Lambda$CDM is correct. The first, $\epsilon$, was introduced previously as a scale-independent consistency check between the expansion history and structure growth. The second, $\upsilon$, is introduced here as a test of scale dependence in the linear evolution of matter density perturbations. We show that the ongoing and near-future galaxy redshift surveys WiggleZ, BOSS, and HETDEX will constrain these quantities at the 5-10% level, representing a stringent test of concordance cosmology at different redshifts. When redshift space distortions are used to probe the growth of cosmological structure, galaxies at higher redshift with lower bias are found to be most powerful in detecting deviations from the GR+$\Lambda$CDM model.
A Note on the Inverse Problem with LTB Universes ; The inverse problem with Lemaitre-Tolman-Bondi (LTB) universe models is discussed. The LTB solution of the Einstein equations describes a spherically symmetric dust-filled spacetime, and has two physical functional degrees of freedom of the radial coordinate. The inverse problem consists in constructing an LTB model by requiring that it be consistent with selected important observational data. In this paper, we assume that the observer is at the center and consider the distance-redshift relation $d_A$ and the redshift-space mass density $\mu$ as the selected observational data, given as functions of the redshift $z$. We then explicitly show that, for general functional forms of $d_A(z)$ and $\mu(z)$, a regular solution does not necessarily exist in the whole redshift domain. We also show that the condition for the existence of a regular solution in terms of $d_A(z)$ and $\mu(z)$ is satisfied by the distance-redshift relation and the redshift-space mass density of $\Lambda$CDM models. Deriving regular differential equations for the inverse problem with the distance-redshift relation and the redshift-space mass density of $\Lambda$CDM models, we numerically solve them for the case $(\Omega_{\rm M0}, \Omega_{\Lambda 0}) = (0.3, 0.7)$. A set of analytic fitting functions for the resultant LTB universe model is given. How to solve the inverse problem with a simultaneous big bang and a given function $d_A(z)$ for the distance-redshift relation is described in the Appendix.
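A minimal sketch of the $\Lambda$CDM input assumed above: computing the flat-$\Lambda$CDM angular diameter distance $d_A(z)$ for $(\Omega_{\rm M0}, \Omega_{\Lambda 0}) = (0.3, 0.7)$ by quadrature. The function names and the $H_0$ value are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

OMEGA_M0, OMEGA_L0 = 0.3, 0.7  # flat LambdaCDM, as in the paper's numerical example

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(OMEGA_M0 * (1.0 + z) ** 3 + OMEGA_L0)

def angular_diameter_distance(z, H0=70.0, c=299792.458):
    """d_A(z) in Mpc: comoving distance integral divided by (1+z)."""
    chi, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * chi / (1.0 + z)

for z in (0.5, 1.0, 2.0):
    print(f"z={z}: d_A = {angular_diameter_distance(z):.1f} Mpc")
```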
Cosmography of f(R) brane cosmology ; Cosmography is a useful tool to constrain cosmological models, in particular dark energy models. In the case of modified theories of gravity, where the equations of motion are generally quite complicated, cosmography can help to select realistic models without imposing arbitrary choices a priori. Indeed, its reliability is based on the assumptions that the universe is homogeneous and isotropic on large scales and that the luminosity distance can be tracked by the derivative series of the scale factor $a(t)$. We apply this approach to induced gravity braneworld models where an $f(R)$ term is present in the brane effective action. The virtue of the model is to self-accelerate the normal and healthy DGP branch once the $f(R)$ term deviates from the Hilbert-Einstein action. We show that the model, coming from a fundamental theory, is consistent with the $\Lambda$CDM scenario at low redshift. We finally estimate the cosmographic parameters by fitting the Union2 Type Ia Supernovae (SNeIa) dataset and the distance priors from Baryon Acoustic Oscillations (BAO), and then provide constraints on the present-day values of $f(R)$ and its second and third derivatives.
General Nonstructure Theory ; The theme of the first two sections is to prepare the framework for how, from a complicated family of index models $I$ in $K_1$, we build many and/or complicated structures in a class $K_2$. The index models are characteristically linear orders, trees with $\kappa+1$ levels (possibly with a linear order on the set of successors of a member) and linearly ordered graphs; for these we phrase relevant complicatedness properties, called bigness. We say when $M \in K_2$ is represented in $I \in K_1$. We give sufficient conditions for when $\{M_I : I \in K^1_\lambda\}$ is complicated, where for each $I \in K^1_\lambda$ we build $M_I \in K_2$ (usually in $K^2_\lambda$) represented in it and reflecting to some degree its structure; e.g. for $I$ a linear order we can build a model of an unstable first order class reflecting the order. If we understand enough we can even build e.g. rigid members of $K^2_\lambda$. Note that we mention stable and superstable, but in a self-contained way, using an equivalent definition which is useful here and explicitly given. We also frame the use of generalizations of the Ramsey and Erdos-Rado theorems to get models in which any $I$ from the relevant $K_1$ is reflected. We give in some detail how this may apply to the class of separable reduced Abelian $p$-groups and how we get relevant models for ordered graphs via forcing. In the third section we show stronger results concerning linear orders. If for each linear order $I$ of cardinality $\lambda \geq \aleph_0$ we can attach a model $M_I \in K_\lambda$ in which the linear order can be embedded, so that for enough cuts of $I$ their being omitted is reflected in $M_I$, then there are $2^\lambda$ non-isomorphic cases.
Background independent condensed matter models for quantum gravity ; A number of recent proposals for a quantum theory of gravity are based on the idea that spacetime geometry and gravity are derivative concepts and only apply at an approximate level. There are two fundamental challenges to any such approach. At the conceptual level, there is a clash between the timelessness of general relativity and emergence. Second, the lack of a fundamental spacetime makes it difficult to apply well-known methods of statistical physics to the problem in a straightforward way. We recently initiated a study of such problems using spin systems based on the evolution of quantum networks, with no a priori geometric notions, as models for emergent geometry and gravity. In this article we review two such models. The first is a model of emergent flat space and matter, and we show how to use methods from quantum information theory to derive features such as the speed of light from a non-geometric quantum system. The second model exhibits interacting matter and geometry, with the geometry defined by the behavior of matter. This model has primitive notions of gravitational attraction, which we illustrate with a toy black hole, and exhibits entanglement between matter and geometry and thermalization of the quantum geometry.
Like-sign dimuon charge asymmetry in the Randall-Sundrum model ; We confirm that in order to account for the recent D0 result of a large like-sign dimuon charge asymmetry, a considerably large new physics effect in $\Gamma^s_{12}$ is required in addition to a large CP-violating phase in $B_s$-$\bar{B}_s$ mixing. In the Randall-Sundrum model of warped geometry, where the fermion fields reside in the bulk, new sources of flavor and CP violation are obtained. We analyze the like-sign dimuon asymmetry in this class of models, as an example of the desired new physics. We show that the wrong-charge asymmetry, $a^s_{sl}$, which is related to the dimuon asymmetry, is significantly altered compared to the Standard Model value. However, experimental limits from $\Delta M_s$ and $\Delta\Gamma_s$, as well as from $K$ mixing and electroweak corrections, constrain it to remain more than a sigma away from its experimental average value. This model cannot fully account for the D0 anomaly due to its inability to generate a sufficient new contribution to the width difference $\Gamma^s_{12}$, even though it can generate a large contribution to the mass difference $M^s_{12}$.
Constraints on scalar-tensor theories of gravity from observations ; In spite of their different origins, both dark energy and modified theories of gravity can be parameterized by the effective equation of state (EOS) $\omega$ for the expansion history of the Universe. A useful model-independent approach to the EOS is given by the so-called Chevallier-Polarski-Linder (CPL) parametrization, whose two parameters $\omega_0$ and $\omega_a$ can be constrained by geometrical observations, which suffer from degeneracies between models. The linear growth of large scale structure is usually used to remove these degeneracies. This growth can be described by the growth index parameter $\gamma$, which can in general be parameterized by $\gamma = \gamma_0 + \gamma_a (1 - a)$. We use scalar-tensor theories of gravity (STG) and show that discerning between models is possible only when $\gamma_a$ is not negligible. We show that the linear density perturbation of the matter component as a function of redshift severely constrains the viable subclasses of STG in terms of $\omega$ and $\gamma$. With this method, we can rule out or confirm viable STG models in future observations. When we use $Z(\phi) = 1$, $F$ shows a convex shape of evolution in a viable STG model. The viable STG models with $Z(\phi) = 1$ are not distinguishable from dark energy models when we strongly impose the solar system constraint.
A nonlinear scalar model of extreme mass ratio inspirals in effective field theory I: Self force through third order ; The motion of a small compact object in a background spacetime is investigated in the context of a model nonlinear scalar field theory. This model is constructed to have a perturbative structure analogous to the General Relativistic description of extreme mass ratio inspirals (EMRIs). We apply the effective field theory approach to this model and calculate the finite part of the self force on the small compact object through third order in the ratio of the size of the compact object to the curvature scale of the background (e.g., black hole) spacetime. We use well-known renormalization methods and demonstrate the consistency of the formalism in rendering the self force finite at higher orders within a point particle prescription for the small compact object. This nonlinear scalar model should be useful for studying various aspects of higher-order self force effects in EMRIs, but within a comparatively simpler context than the full gravitational case. These aspects include developing practical schemes for higher-order self force numerical computations, quantifying the effects of transient resonances on EMRI waveforms, and accurately modeling the small compact object's motion for precise determinations of the parameters of detected EMRI sources.
Optimal Trispectrum Estimators and WMAP Constraints ; We present an implementation of an optimal CMB trispectrum estimator which accounts for anisotropic noise and incomplete sky coverage. We use a general separable mode expansion which can be (and has been) applied to constrain both primordial and late-time models. We validate our methods on large angular scales using known analytic results in the Sachs-Wolfe limit. We present the first near-optimal trispectrum constraints from WMAP data on the cubic term of local model inflation, $g_{\rm NL} = (1.6 \pm 7.0) \times 10^{5}$, for the equilateral model $t_{\rm NL}^{\rm equil} = (3.11 \pm 7.5) \times 10^{6}$ and for the constant model $t_{\rm NL}^{\rm const} = 1.33 \pm 3.62$. These results, particularly the equilateral constraint, are relevant to a number of well-motivated models such as DBI and K-inflation with closely correlated trispectrum shapes. We also use the trispectrum signal predicted for cosmic strings to provide a conservative upper limit on the string tension, $G\mu \le 1.1 \times 10^{-6}$ at 95% confidence, which is largely background and model independent. All these new trispectrum results are consistent with a Gaussian Universe. We discuss the importance of constraining general classes of trispectra using these methods and the prospects for higher precision with the Planck satellite.
Accurate numerical simulations of inspiralling binary neutron stars and their comparison with effective-one-body analytical models ; Binary neutron-star systems represent one of the most promising sources of gravitational waves. In order to be able to extract important information, notably about the equation of state of matter at nuclear density, it is necessary to have at hand an accurate analytical model of the expected waveforms. Following our recent work, we here analyze in more detail two general-relativistic simulations spanning about 20 gravitational-wave cycles of the inspiral of equal-mass binary neutron stars with different compactnesses, and compare them with a tidal extension of the effective-one-body (EOB) analytical model. The latter tidally extended EOB model is analytically complete up to the 1.5 post-Newtonian level, and contains an analytically undetermined parameter representing a higher-order amplification of tidal effects. We find that, by calibrating this single parameter, the EOB model can reproduce, within the numerical error, the two numerical waveforms essentially up to the merger. By contrast, analytical models (either EOB or TaylorT4) that do not incorporate such a higher-order amplification of tidal effects build up a dephasing with respect to the numerical waveforms of several radians.
Constraining Galileon gravity from observational data with growth rate ; We study the cosmological constraints on Galileon gravity obtained from observational data of the growth rate of matter density perturbations, Type Ia supernovae (SN Ia), the cosmic microwave background (CMB), and baryon acoustic oscillations (BAO). For the same value of the energy density parameter of matter $\Omega_{m,0}$, the growth rate $f$ in Galileon models is enhanced, relative to the $\Lambda$CDM case, because of an increase in Newton's constant. The smaller $\Omega_{m,0}$ is, the more the growth rate is suppressed. Therefore, the best-fit value of $\Omega_{m,0}$ in the Galileon model, based only on the growth rate data, is quite small. This is incompatible with the value of $\Omega_{m,0}$ obtained from the combination of SN Ia, CMB, and BAO data. On the other hand, in the $\Lambda$CDM model, the values of $\Omega_{m,0}$ obtained from different observational data sets are consistent. In the analysis of this paper, we find that the Galileon model is less compatible with observations than the $\Lambda$CDM model. This result seems to be qualitatively the same in most of the generalized Galileon models in which Newton's constant is enhanced.
Modeling Gaussian Random Fields by Anchored Inversion and Monte Carlo Sampling ; It is common and convenient to treat distributed physical parameters as Gaussian random fields and model them in an inverse procedure using measurements of various properties of the fields. This article presents a general method for this problem based on a flexible parameterization device called anchors, which captures local or global features of the fields. A classification of all relevant data into two categories closely cooperates with the anchor concept to enable systematic use of datasets of different sources and disciplinary natures. In particular, nonlinearity in the forward models is handled automatically. Treatment of measurement and model errors is systematic and integral in the method; however, the method is also suitable in the usual setting where one does not have reliable information about these errors. Compared to a state-space approach, the anchor parameterization renders the task in a parameter space of radically reduced dimension; consequently, easier and more rigorous statistical inference, interpretation, and sampling are possible. A procedure for deriving the posterior distribution of model parameters is presented. Based on Monte Carlo sampling and normal mixture approximation to high-dimensional densities, the procedure has generality and efficiency features that provide a basis for practical implementations of this computationally demanding inverse procedure. We emphasize distinguishing features of the method compared to state-space approaches and optimization-based ideas. Connections with existing methods in stochastic hydrogeology are discussed. The work is illustrated by a one-dimensional synthetic problem. Key words: anchored inversion, Gaussian process, ill-posedness, model error, state space, pilot point method, stochastic hydrogeology.
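Not the authors' algorithm, but a minimal sketch of the general setting: conditioning a 1-D Gaussian random field on a few point values (anchors, in spirit) and drawing Monte Carlo realizations from the conditional law. The covariance model, locations, and values are all illustrative assumptions.

```python
import numpy as np

def sq_exp_cov(x1, x2, sigma2=1.0, ell=0.3):
    """Squared-exponential covariance for a 1-D Gaussian random field."""
    return sigma2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)          # prediction locations
anchors = np.array([0.1, 0.45, 0.8])       # anchor locations (illustrative)
values = np.array([0.5, -1.0, 0.3])        # anchor values, e.g. inferred from data

K_aa = sq_exp_cov(anchors, anchors) + 1e-6 * np.eye(len(anchors))  # jitter
K_ga = sq_exp_cov(grid, anchors)
K_gg = sq_exp_cov(grid, grid)

# Conditional (posterior) mean and covariance of the field given the anchors
mean = K_ga @ np.linalg.solve(K_aa, values)
cov = K_gg - K_ga @ np.linalg.solve(K_aa, K_ga.T)

# Monte Carlo sampling of conditional realizations
samples = rng.multivariate_normal(
    mean, cov + 1e-8 * np.eye(len(grid)), size=100, check_valid="ignore"
)
print(samples.shape)  # (100, 200)
```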
The noisy edge of traveling waves ; Traveling waves are ubiquitous in nature and control the speed of many important dynamical processes, including chemical reactions, epidemic outbreaks, and biological evolution. Despite their fundamental role in complex systems, traveling waves remain elusive because they are often dominated by rare fluctuations in the wave tip, which have defied any rigorous analysis so far. Here, we show that by adjusting nonlinear model details, noisy traveling waves can be solved exactly. The moment equations of these tuned models are closed and have a simple analytical structure resembling the deterministic approximation supplemented by a nonlocal cutoff term. The peculiar form of the cutoff shapes the noisy edge of traveling waves and is critical for the correct prediction of the wave speed and its fluctuations. Our approach is illustrated and benchmarked using the example of fitness waves arising in simple models of microbial evolution, which are highly sensitive to number fluctuations. We demonstrate explicitly how these models can be tuned to account for finite population sizes and determine how quickly populations adapt as a function of population size and mutation rates. More generally, our method is shown to apply to a broad class of models, in which number fluctuations are generated by branching processes. Because of this versatility, the method of model tuning may serve as a promising route toward unraveling universal properties of complex discrete particle systems.
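As a hedged illustration of the kind of system discussed here (not the exactly solvable tuned models of the paper), the following simulates a crude fitness wave: a branching process with mutation whose population is culled back to a fixed size N, so the wave tip is dominated by number fluctuations, and the measured speed of adaptation grows only slowly with N.

```python
import numpy as np

def adaptation_speed(N=1000, mu=0.01, s=0.05, generations=3000, burn=1000, seed=1):
    """Mean fitness gain per generation in a capped branching model.

    Individuals carry an integer number of beneficial mutations x; each
    generation every individual leaves a Poisson number of offspring with
    mean proportional to exp(s*(x - mean)), offspring acquire one extra
    mutation with probability mu, and the population is resampled back to N.
    (Toy model; parameters and tuning are illustrative, not the paper's.)
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(N, dtype=int)
    start_mean = 0.0
    for t in range(generations):
        w = np.exp(s * (x - x.mean()))             # relative fitness
        kids = np.repeat(x, rng.poisson(1.1 * w))  # branching with slight growth
        kids += rng.random(kids.size) < mu         # mutation: x -> x + 1
        x = rng.choice(kids, size=N)               # cull back to fixed size N
        if t == burn:
            start_mean = x.mean()
    return (x.mean() - start_mean) / (generations - burn)

# Wave speed (rate of adaptation) vs. population size: sensitive to number
# fluctuations in the tip, increasing only slowly with N.
for N in (100, 1000, 10000):
    print(N, adaptation_speed(N=N))
```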
Dynamic Large Spatial Covariance Matrix Estimation in Application to Semiparametric Model Construction via Variable Clustering: the SCE approach ; To better understand the spatial structure of large panels of economic and financial time series and provide a guideline for constructing semiparametric models, this paper first considers estimating a large spatial covariance matrix of the generalized $m$-dependent and $\beta$-mixing time series (with $J$ variables and $T$ observations) by hard thresholding regularization, as long as $\log J \, c_x(c_t)/T = O(1)$ (the former scheme, with some time dependence measure $c_x(c_t)$) or $\log J / T = O(1)$ (the latter scheme, with some upper-bounded mixing coefficient). We quantify the interplay between the estimators' consistency rate and the time dependence level, discuss an intuitive resampling scheme for threshold selection, and also prove a general cross-validation result justifying this. Given a consistently estimated covariance (correlation) matrix, by utilizing its natural links with graphical models and semiparametrics, after screening the explanatory variables, we implement a novel forward and backward label permutation procedure to cluster the relevant variables and construct the corresponding semiparametric model, which is further estimated by the groupwise dimension reduction method with sign constraints. We call this the SCE (screen - cluster - estimate) approach for modeling high dimensional data with complex spatial structure. Finally we apply this method to study the spatial structure of large panels of economic and financial time series and find the proper semiparametric structure for estimating the consumer price index (CPI), illustrating its superiority over linear models.
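A minimal sketch (assumptions mine) of hard thresholding of a sample covariance matrix together with a random-split resampling rule for picking the threshold, in the spirit of the scheme described above; for dependent time series one would split in blocks rather than at random.

```python
import numpy as np

def hard_threshold_cov(X, lam):
    """Hard-threshold the sample covariance: keep entries with |s_ij| > lam,
    always retaining the diagonal."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) > lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def select_threshold(X, lams, n_splits=20, seed=0):
    """Pick lambda by random-split resampling: threshold one half of the
    sample, compare to the raw covariance of the other half (Frobenius)."""
    rng = np.random.default_rng(seed)
    T_obs, risks = X.shape[0], np.zeros(len(lams))
    for _ in range(n_splits):
        idx = rng.permutation(T_obs)
        a, b = idx[: T_obs // 2], idx[T_obs // 2:]
        S_b = np.cov(X[b], rowvar=False)
        for i, lam in enumerate(lams):
            risks[i] += np.linalg.norm(hard_threshold_cov(X[a], lam) - S_b, "fro")
    return lams[np.argmin(risks)]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))  # T=200 observations of J=50 variables
lam = select_threshold(X, np.linspace(0.0, 0.5, 11))
print("chosen threshold:", lam)
```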
Gravitational Waves in Viable f(R) Models ; We study gravitational waves in viable $f(R)$ theories under a non-zero background curvature. In general, an $f(R)$ theory contains an extra scalar degree of freedom, corresponding to a massive scalar mode of the gravitational wave. For viable $f(R)$ models, since there always exists a de Sitter point where the background curvature in vacuum is non-zero, the squared mass of the scalar mode is of order the de Sitter point curvature $R_d \sim 10^{-66}\,{\rm eV}^2$. We illustrate our results in two types of viable $f(R)$ models: the exponential gravity and Starobinsky models. In both cases, the mass is of order $10^{-33}$ eV when the mode propagates in vacuum. However, in the presence of the matter density in a galaxy, the scalar mode can be heavy. Explicitly, in the exponential gravity model the mass becomes almost infinite, implying the disappearance of the scalar mode of the gravitational wave, while the Starobinsky model gives a lowest mass around $10^{-24}$ eV, corresponding to a lowest frequency of $10^{-9}$ Hz, which may be detected by current and future gravitational wave probes, such as LISA and ASTROD-GW.
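For orientation, the standard scalaron-mass estimate from the $f(R)$ literature (a textbook result, not derived in this abstract) connects the quoted numbers:

```latex
% Standard f(R) scalaron mass estimate (not from this paper):
m_\phi^2 \;\simeq\; \frac{1}{3}\left(\frac{f_R}{f_{RR}} - R\right),
\qquad f_R \equiv \frac{df}{dR}, \quad f_{RR} \equiv \frac{d^2 f}{dR^2}.
```

In vacuum $R$ sits at the de Sitter curvature, so $m_\phi \sim \sqrt{R_d} \sim 10^{-33}$ eV, while in a dense environment $f_{RR}$ is driven small and the scalaron becomes heavy, which is the chameleon-like behavior described above.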
The minimal 331 model with only two Higgs triplets ; The simplest non-abelian gauge extension of the electroweak standard model, the $SU(3)_c \otimes SU(3)_L \otimes U(1)_N$, known as the 331 model, has a minimal version which demands the least possible fermionic content to account for the whole established phenomenology of the well-known particles and interactions. Nevertheless, in its original form the minimal 331 model was proposed with a set of three scalar triplets and one sextet in order to yield the spontaneous breaking of the gauge symmetry and generate the observed fermion masses. Such a huge scalar sector turns the task of clearly identifying the physical scalar spectrum into a clumsy labor. It not only adds an obstacle to the development of its phenomenology but also implies a scalar potential plagued with new free coupling constants. In this work we show that the framework of the minimal 331 model can be built with only two scalar triplets, while still triggering the desired pattern of spontaneous symmetry breaking and generating the correct fermion masses. We present the exact physical spectrum and also show all the interactions involving the scalars, obtaining a neat minimal 331 model far more suited for phenomenological studies at the current Large Hadron Collider.
Towards a unified model of stellar rotation ; The effects of rapid rotation on stellar evolution can be profound. We are now beginning to gather enough data to allow a realistic comparison between different physical models. Two key tests for any theory of stellar rotation are, first, whether it can match observations of the enrichment of nitrogen, and potentially other elements, in clusters containing rapid rotators and, secondly, whether it can reproduce the observed broadening of the main sequence in the Hertzsprung-Russell diagram. Models of stellar rotation have been steadily increasing in number and complexity over the past two decades but the lack of data makes it difficult to determine whether such additions actually give a closer reflection of reality. One of the most poorly explored features of stellar rotation models is the treatment of angular momentum transport within convective zones. If we treat the core as having uniform specific angular momentum, the angular momentum distribution in the star, for a given surface rotation, is dramatically different from what it is when we assume the star rotates as a solid body. The uniform specific angular momentum also generates strong shears which can drive additional transport of chemical elements close to the boundary of a convection zone. A comparison of different models, and of how they reproduce observable properties with otherwise identical input physics, is essential to properly distinguish between them. We compare detailed grids of stellar evolution tracks of intermediate- and high-mass stars produced using several models for rotation generated with our new stellar rotation code.
Modeling Techniques for Measuring Galaxy Properties in Multi-Epoch Surveys ; Data analysis methods have always been of critical importance for quantitative sciences. In astronomy, the increasing scale of current and future surveys is driving a trend towards a separation of the processes of low-level data reduction and higher-level scientific analysis. Algorithms and software responsible for the former are becoming increasingly complex, and at the same time more general measurements will be used for a wide variety of scientific studies, many of which cannot be anticipated in advance. On the other hand, increased sample sizes and the corresponding decrease in stochastic uncertainty put greater importance on controlling systematic errors, which must happen for the most part at the lowest levels of data analysis. Astronomical measurement algorithms must improve in their handling of uncertainties as well, and hence must be designed with detailed knowledge of the requirements of different science goals. In this thesis, we advocate a Bayesian approach to survey data reduction as a whole, and focus specifically on the problem of modeling individual galaxies and stars. We present a Monte Carlo algorithm that can efficiently sample from the posterior probability for a flexible class of galaxy models, and propose a method for constructing and convolving these models using Gauss-Hermite (shapelet) functions. These methods are designed to be efficient in a multi-epoch modeling ("multifit") sense, in which we compare a generative model to each exposure rather than combining the data from multiple exposures in advance. We also discuss how these methods are important for specific higher-level analyses, particularly weak gravitational lensing, as well as their interaction with the many other aspects of a survey reduction pipeline.
Equilibrium avalanches in spin glasses ; We study the distribution of equilibrium avalanches (shocks) in Ising spin glasses which occur at zero temperature upon small changes in the magnetic field. For the infinite-range Sherrington-Kirkpatrick (SK) model we present a detailed derivation of the density $\rho(\Delta M)$ of the magnetization jumps $\Delta M$. It is obtained by introducing a multi-component generalization of the Parisi-Duplantier equation, which allows us to compute all cumulants of the magnetization. We find that $\rho(\Delta M) \sim \Delta M^{-\tau}$ with an avalanche exponent $\tau = 1$ for the SK model, originating from the marginal stability (criticality) of the model. It holds for jumps of size $1 \ll \Delta M \ll N^{1/2}$ provoked by changes of the external field by $\delta H = O(N^{-1/2})$, where $N$ is the total number of spins. Our general formula also suggests that the density of the overlap $q$ between initial and final state in an avalanche is $\rho(q) \sim 1/(1-q)$. These results show interesting similarities with numerical simulations for the out-of-equilibrium dynamics of the SK model. For finite-range models, using droplet arguments, we obtain the prediction $\tau = (d_f + \theta)/d_m$, where $d_f$, $d_m$ and $\theta$ are the fractal dimension, magnetization exponent and energy exponent of a droplet, respectively. This formula is expected to apply to other glassy disordered systems, such as the random-field model and pinned interfaces. We make suggestions for further numerical investigations, as well as experimental studies of the Barkhausen noise in spin glasses.
Population physiology: leveraging population-scale EHR data to understand human endocrine dynamics ; Studying physiology over a broad population for long periods of time is difficult primarily because collecting human physiologic data is intrusive, dangerous, and expensive. Electronic health record (EHR) data promise to support the development and testing of mechanistic physiologic models on diverse populations, but limitations in the data have thus far thwarted such use. For instance, using uncontrolled population-scale EHR data to verify the outcome of time-dependent behavior of mechanistic, constructive models can be difficult because (i) aggregation of the population can obscure or generate a signal, (ii) there is often no control population, and (iii) diversity in how the population is measured can make the data difficult to fit into conventional analysis techniques. This paper shows that it is possible to use EHR data to test a physiological model for a population and over long time scales. Specifically, a methodology is developed and demonstrated for testing a mechanistic, time-dependent, physiological model of serum glucose dynamics with uncontrolled, population-scale, physiological patient data extracted from an EHR repository. It is shown that there is no observable daily variation in the normalized mean glucose for any EHR subpopulation. In contrast, a derived quantity, the daily variation in nonlinear correlation quantified by the time-delayed mutual information (TDMI), did reveal the intuitively expected diurnal variation in glucose levels amongst a wild population of humans. Moreover, in a population of intravenously fed patients, there was no observable TDMI-based diurnal signal. These TDMI-based signals were then connected, via a glucose-insulin model, with human feeding patterns. In particular, a constructive physiological model was shown to correctly predict the difference between the general uncontrolled population and a subpopulation whose feeding was controlled.
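A minimal sketch of a TDMI estimate (the binning scheme and the toy signal are illustrative assumptions, not the paper's pipeline): mutual information between a series and its lagged copy, computed from a 2-D histogram; a noisy 24-hour rhythm shows the expected bump at a one-day lag.

```python
import numpy as np

def tdmi(x, lag, bins=16):
    """Time-delayed mutual information I(x_t ; x_{t+lag}) from a 2-D histogram."""
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])))

# Illustration: a noisy diurnal rhythm sampled hourly has higher TDMI at a
# 24-hour lag than at a 12-hour lag.
rng = np.random.default_rng(0)
t = np.arange(5000)
glucose = np.sin(2 * np.pi * t / 24) + rng.standard_normal(t.size)
print(tdmi(glucose, lag=24), tdmi(glucose, lag=12))
```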
Path integral measure and triangulation independence in discrete gravity ; A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in linearized Regge calculus, which is a discrete model for quantum gravity appearing in the semiclassical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D linearized Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano-Regge model, a 3D spin foam model for gravity. We furthermore discuss to what extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.
Inverse limits and statistical properties for chaotic implicitly defined economic models ; In this paper we study the dynamics and ergodic theory of certain economic models which are implicitly defined. We consider one-dimensional and two-dimensional overlapping generations models, a cash-in-advance model, heterogeneous markets, and a cobweb model with adaptive adjustment. We consider the inverse limit spaces of certain chaotic invariant fractal sets and their metric, ergodic and stability properties. The inverse limits give the set of intertemporal perfect-foresight equilibria for the economic problem considered. First we show that the inverse limits of these models are stable under perturbations. We prove that the inverse limits are expansive and have the specification property. We then employ utility functions on these inverse limits. We give two ways to rank such utility functions. First, when perturbing certain dynamical systems, we rank utility functions in terms of their average values with respect to invariant probability measures on inverse limits, especially with respect to measures of maximal entropy. For families of certain unimodal maps we can adjust both the discount factor and the system parameters in order to obtain the maximal average value of the utility. The second way to rank utility functions, for more general maps on hyperbolic sets, is to use equilibrium measures of these utility functions on inverse limits; they optimize average values of utility functions while at the same time keeping the disorder in the system as low as possible in the long run.
A realistic model of neutrino masses with a large neutrinoless double beta decay rate ; The minimal Standard Model extension with the Weinberg operator does accommodate the observed neutrino masses and mixing, but predicts a neutrinoless double beta ($0\nu\beta\beta$) decay rate proportional to the effective electron neutrino mass, which can then be arbitrarily small within present experimental limits. However, in general $0\nu\beta\beta$ decay can have an independent origin and be near its present experimental bound, whereas neutrino masses are generated radiatively, contributing negligibly to $0\nu\beta\beta$ decay. We provide a realization of this scenario in a simple, well-defined and testable model, with potential LHC effects and calculable neutrino masses, whose two-loop expression we derive exactly. We also discuss the connection of this model to others that have appeared in the literature, and remark on the significant differences that result from various choices of quantum number assignments and symmetry assumptions. In this type of model, lepton flavor violating rates are also preferred to be relatively large, within the reach of foreseen experiments. Interestingly enough, in our model this implies a large third mixing angle, $\sin^2\theta_{13} \gtrsim 0.008$, when $\mu \rightarrow eee$ is required to lie below its present experimental limit.
SUSY Higgs searches beyond the MSSM ; The recent results from the ATLAS and CMS collaborations show that the allowed range for a Standard Model Higgs boson is now restricted to a very thin region. Although those limits are presented exclusively in the framework of the SM, the searches themselves remain sensitive to other Higgs models. We recast the limits within a generic supersymmetric framework that goes beyond the usual minimal extension. Such a generic model can be parameterised through a supersymmetric effective Lagrangian with higher-order operators appearing in the Kähler potential and the superpotential, an approach whose first motivation is to alleviate the fine-tuning problem in supersymmetry, with the most dramatic consequence being a substantial increase in the mass of the lightest Higgs boson as compared to the minimal supersymmetric model. We investigate in this paper the constraints set by the LHC on such models. We also investigate how the present picture will change as more luminosity is gathered. Issues of how to combine and exploit data from LHC searches dedicated to the Standard Model Higgs in such supersymmetry-inspired scenarios are discussed. We also discuss the impact of invisible decays of the Higgs in such scenarios.
Discovering universal statistical laws of complex networks ; Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to what degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their generalisation power, which we identify with large structural variability and absence of constraints imposed by the construction scheme. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This allows, for instance, to infer global features from local ones using regression models trained on networks with high generalisation power. Our results confirm and extend previous findings regarding the synchronisation properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with present technology. Our approach provides a method to estimate global properties of undersampled networks with good approximation. Finally, we demonstrate on three different data sets (C. elegans' neuronal network, R. prowazekii's metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.
Effective dark energy equation of state in interacting dark energy models ; In models where dark matter and dark energy interact non-minimally, the total amount of matter in a fixed comoving volume may vary from the time of recombination to the present time due to energy transfer between the two components. This implies that, in interacting dark energy models, the fractional matter density estimated using the cosmic microwave background (assuming no interaction between dark matter and dark energy) will in general be shifted with respect to its true value. This may result in an incorrect determination of the equation of state of dark energy if the interaction between dark matter and dark energy is not properly accounted for, even if the evolution of the Hubble parameter as a function of redshift is known with arbitrary precision. In this paper we find an exact expression, as well as a simple analytical approximation, for the evolution of the effective equation of state of dark energy, assuming that the energy transfer rate between dark matter and dark energy is described by a simple two-parameter model. We also provide analytical examples where non-phantom interacting dark energy models mimic the background evolution and primary cosmic microwave background anisotropies of phantom dark energy models.
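For orientation, the standard bookkeeping behind such effective descriptions (conventions mine; the paper's exact expression is not reproduced here): an energy exchange term $Q$ can be absorbed into effective equations of state,

```latex
% Coupled continuity equations with energy exchange Q (sign convention mine):
\dot\rho_{dm} + 3H\rho_{dm} = +Q , \qquad
\dot\rho_{de} + 3H(1+w)\rho_{de} = -Q ,
% which take the uncoupled form \dot\rho + 3H(1+w^{\rm eff})\rho = 0 with
w_{dm}^{\rm eff} = -\frac{Q}{3H\rho_{dm}} , \qquad
w_{de}^{\rm eff} = w + \frac{Q}{3H\rho_{de}} .
```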
Weyl-Cartan-Weitzenböck gravity as a generalization of teleparallel gravity ; We consider a gravitational model in a Weyl-Cartan spacetime, in which the Weitzenböck condition of the vanishing of the sum of the curvature and torsion scalar is imposed. Moreover, a kinetic term for the torsion is included in the gravitational action. The field equations of the model are obtained from a Hilbert-Einstein type variational principle, and they lead to a complete description of the gravitational field in terms of two fields, the Weyl vector and the torsion, respectively, defined in a curved background. The cosmological applications of the model are investigated for a particular choice of the free parameters in which the torsion vector is proportional to the Weyl vector. Depending on the numerical values of the parameters of the cosmological model, a large variety of dynamic evolutions can be obtained, ranging from inflationary/accelerated expansions to non-inflationary behaviors. In particular we show that a de Sitter type late-time evolution can be naturally obtained from the field equations of the model. Therefore the present model leads to the possibility of a purely geometrical description of dark energy, in which the late-time acceleration of the Universe is determined by the intrinsic geometry of spacetime.
On the critical nature of plastic flow: one- and two-dimensional models ; Steady-state plastic flows have been compared to developed turbulence because the two phenomena share the inherent complexity of particle trajectories, the scale-free spatial patterns and the power law statistics of fluctuations. The origin of the apparently chaotic and at the same time highly correlated microscopic response in plasticity remains hidden behind conventional engineering models which are based on smooth fitting functions. To regain access to fluctuations, we study in this paper a minimal mesoscopic model whose goal is to elucidate the origin of scale-free behavior in plasticity. We limit our description to fcc-type crystals and leave out both temperature and rate effects. We provide simple illustrations of the fact that complexity in rate-independent athermal plastic flows is due to marginal stability of the underlying elastic system. Our conclusions are based on a reduction of an overdamped visco-elasticity problem for a system with a rugged elastic energy landscape to an integer-valued automaton. We start with an overdamped one-dimensional model and show that it reproduces the main macroscopic phenomenology of rate-independent plastic behavior but falls short of generating a self-similar structure of fluctuations. We then provide evidence that a two-dimensional model is already adequate for describing the power law statistics of avalanches and the fractal character of dislocation patterning. In addition to capturing experimentally measured critical exponents, the proposed minimal model shows finite-size scaling collapse and generates realistic shape functions in the scaling laws.
Estimating strength of DDoS attack using various regression models ; Anomaly-based DDoS detection systems construct a profile of the traffic normally seen in the network, and identify anomalies whenever traffic deviates from the normal profile beyond a threshold. This extent of deviation is normally not utilised. This paper reports the evaluation results of a proposed approach that utilises this extent of deviation from the detection threshold to estimate the strength of a DDoS attack using various regression models. A relationship is established between the number of zombies and the observed deviation in sample entropy. Various statistical performance measures, such as the coefficient of determination ($R^2$), the coefficient of correlation (CC), the sum of squared errors (SSE), the mean squared error (MSE), the root mean squared error (RMSE), the normalised mean squared error (NMSE), the Nash-Sutcliffe efficiency index ($\eta$) and the mean absolute error (MAE), are used to measure the performance of the various regression models. Internet-type topologies used for simulation are generated using the transit-stub model of the GT-ITM topology generator. The NS-2 network simulator on a Linux platform is used as the simulation test bed for launching DDoS attacks with varied attack strength. A comparative study is performed using different regression models for estimating the strength of a DDoS attack. The simulation results are promising, as we are able to estimate the strength of a DDoS attack efficiently, with a very low error rate, using various regression models.
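A minimal sketch of the performance measures named above, in their common textbook forms (definitions of NMSE and $\eta$ vary in the literature; the toy data and the linear fit are illustrative, not the paper's setup):

```python
import numpy as np

def regression_metrics(y, y_hat):
    """Common forms of the performance measures listed in the abstract."""
    resid = y - y_hat
    sse = float(np.sum(resid**2))
    mse = sse / y.size
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return {
        "R2": 1.0 - sse / ss_tot,
        "CC": float(np.corrcoef(y, y_hat)[0, 1]),
        "SSE": sse,
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "NMSE": mse / float(np.var(y)),        # one common normalisation
        "NSE": 1.0 - sse / ss_tot,             # Nash-Sutcliffe eta (same form as R2 here)
        "MAE": float(np.mean(np.abs(resid))),
    }

# Toy example: linear fit of attack strength (zombies) vs. entropy deviation
rng = np.random.default_rng(0)
deviation = rng.uniform(0.1, 2.0, 100)
zombies = 50 * deviation + rng.normal(0, 5, 100)
coef = np.polyfit(deviation, zombies, 1)
print(regression_metrics(zombies, np.polyval(coef, deviation)))
```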
Revisit of the Interaction between Holographic Dark Energy and Dark Matter ; In this paper we investigate the possible direct, non-gravitational interaction between holographic dark energy (HDE) and dark matter. Firstly, we start with two simple models with the interaction terms $Q \propto \rho_{dm}$ and $Q \propto \rho_{de}$, and then we move on to the general form $Q \propto \rho_{m}^{\alpha} \rho_{de}^{\beta}$. The cosmological constraints on the models are obtained from the joint analysis of the present Union2.1+BAO+CMB+$H_0$ data. We find that the data slightly favor an energy flow from dark matter to dark energy, although the original HDE model still lies in the 95.4% confidence level (CL) region. For all models we find $c < 1$ at the 95.4% CL. We show that, compared with the cosmic expansion, the effect of the interaction on the evolution of $\rho_{dm}$ and $\rho_{de}$ is smaller, and the relative increment (decrement) of the energy in the dark matter component is constrained to be less than 9% (15%) at the 95.4% CL. By introducing the interaction, we find that even when $c < 1$ the big rip can still be avoided due to the existence of a de Sitter solution at $z \rightarrow -1$. We show that this solution cannot be accomplished in the two simple models, while for the general model such a solution can be achieved with a large $\beta$, and the big rip may be avoided at the 95.4% CL.
PhD Thesis: Topics in SUSY Phenomenology at the LHC ; This dissertation focuses on phenomenological studies of possible signals for supersymmetric events at the Large Hadron Collider (LHC). We have divided our endeavours into three separate projects. First, we consider SUSY models where gluino production at the LHC should be rich in top and bottom quark jets. Requiring $b$-jets in addition to missing transverse energy $\not\!E_T$ should, therefore, enhance the supersymmetry signal relative to Standard Model backgrounds. We quantify the increase in the supersymmetry reach of the LHC from $b$-tagging in a variety of well-motivated models of supersymmetry. We also explore top-tagging at the LHC. Second, we explore the prospects for detecting the direct production of third generation squarks in models with an inverted squark mass hierarchy. This is signalled by $b$-jets plus $\not\!E_T$ events harder than in the Standard Model, but softer than those from the production of gluinos and heavier squarks. We find that these events can be readily separated from the SM background for third generation squark masses in the 200-400 GeV range, and the contamination from the much heavier gluinos and squarks, although formidable, can effectively be suppressed. Third, we attempt to extract model-independent information about neutralino properties from LHC data, assuming only the particle content of the MSSM and that all two-body neutralino decays are kinematically suppressed, with the inclusive neutralino production yielding a sufficient cross section. We show that the Lorentz-invariant dilepton mass distribution encodes clear information about the relative sign of the mass eigenvalues of the parent and daughter neutralinos. We show that we can extract most neutralino mass matrix parameters if there is a double mass edge.
Feature Screening via Distance Correlation Learning ; This paper is concerned with screening features in ultrahigh dimensional data analysis, which has become increasingly important in diverse scientific fields. We develop a sure independence screening procedure based on the distance correlation (DC-SIS, for short). The DC-SIS can be implemented as easily as the sure independence screening procedure based on the Pearson correlation (SIS, for short) proposed by Fan and Lv (2008). However, the DC-SIS can significantly improve on the SIS. Fan and Lv (2008) established the sure screening property for the SIS based on linear models, but the sure screening property of the DC-SIS is valid under more general settings, including linear models. Furthermore, the implementation of the DC-SIS does not require model specification (e.g., linear model or generalized linear model) for responses or predictors. This is a very appealing property in ultrahigh dimensional data analysis. Moreover, the DC-SIS can be used directly to screen grouped predictor variables and multivariate response variables. We establish the sure screening property for the DC-SIS, and conduct simulations to examine its finite sample performance. Numerical comparisons indicate that the DC-SIS performs much better than the SIS in various models. We also illustrate the DC-SIS through a real data example.
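A minimal sketch of the distance correlation statistic underlying such screening (the Szekely-Rizzo-Bakirov sample version; the screening loop and the toy data are illustrative assumptions): each predictor is ranked by its distance correlation with the response, which picks up nonlinear signals that a Pearson-based ranking can miss.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Szekely et al., 2007)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])       # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double-center each distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                     # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

# Screening: rank predictors by distance correlation with the response
rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
y = X[:, 3] ** 2 + np.sin(X[:, 7]) + 0.1 * rng.standard_normal(n)  # nonlinear signal
scores = np.array([distance_correlation(X[:, j], y) for j in range(p)])
print("top-5 ranked predictors:", np.argsort(scores)[::-1][:5])
```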
Renormalization group approach to matrix models via noncommutative space ; We develop a new renormalization group approach to the large-$N$ limit of matrix models. It has been proposed that a procedure, in which an $(N-1) \times (N-1)$ matrix model is obtained by integrating out one row and column of an $N \times N$ matrix model, can be regarded as a renormalization group, and that its fixed point reveals the critical behavior in the large-$N$ limit. We instead utilize the fuzzy sphere structure, based on which we construct a new map (renormalization group) from an $N \times N$ matrix model to one of rank $N-1$. Our renormalization group has the great advantage of being a nice analog of the standard renormalization group in field theory. It is naturally endowed with the concept of high/low energy, and consequently it is in a sense local and admits derivative expansions in the space of matrices. In the construction we also find that our renormalization in general generates multi-trace operators, and that non-planar diagrams yield a non-local operation on a matrix, whose action is to transport the matrix to the antipode on the sphere. Furthermore, the noncommutativity of the fuzzy sphere is renormalized in our formalism. We then analyze our renormalization group equation, and Gaussian and non-trivial fixed points are found. We further clarify how to read off scaling dimensions from our renormalization group equation. Finally, the critical exponent of the model of two-dimensional gravity based on our formalism is examined.
The Higgs Sector and Fine-Tuning in the pMSSM ; The recent discovery of a 125 GeV Higgs, as well as the lack of any positive findings in searches for supersymmetry, has renewed interest in both the supersymmetric Higgs sector and fine-tuning. Here, we continue our study of the phenomenological MSSM (pMSSM), discussing the light Higgs and fine-tuning within the context of two sets of previously generated pMSSM models. We find an abundance of models with experimentally favored Higgs masses and couplings. We investigate the decay modes of the light Higgs in these models, finding strong correlations between many final states. We then examine the degree of fine-tuning, considering contributions from each of the pMSSM parameters at up to next-to-leading-log order. In particular, we examine the fine-tuning implications for our model sets that arise from the discovery of a 125 GeV Higgs. Finally, we investigate a small subset of models with low fine-tuning and a light Higgs near 125 GeV, describing the common features of such models. We generically find a light stop and bottom squark with complex decay patterns into a set of light electroweak gauginos, which will make their discovery more challenging and may require novel search techniques.
Bayesian approach to gravitational lens model selection: constraining H0 with a selected sample of strong lenses ; Bayesian model selection methods provide a self-consistent probabilistic framework to test the validity of competing scenarios given a set of data. We present a case study application to strong gravitational lens parametric models. Our goal is to select a homogeneous lens subsample suitable for cosmological parameter inference. To this end we apply a Bayes factor analysis to a synthetic catalog of 500 lenses with power-law potential and external shear. For simplicity we focus on double-image lenses (the largest fraction of lenses in the simulated sample) and select a subsample for which astrometry and time-delays provide strong evidence for a simple power-law model description. Through a likelihood analysis we recover the input value of the Hubble constant to within the $3\sigma$ statistical uncertainty. We apply this methodology to a sample of double-image lensed quasars. In the cases of B1600+434, SBS 1520+530 and SDSS J1650+4251, the Bayes factor analysis favors a simple power-law model description with high statistical significance. Assuming a flat $\Lambda$CDM cosmology, the combined likelihood analysis of such systems gives the Hubble constant $H_0 = 76^{+15}_{-5}$ km/s/Mpc, having marginalized over the lens model parameters and the cosmic matter density, and consistently propagated the observational errors on the angular positions of the images. The next generation of cosmic structure surveys will provide larger lens datasets, and the method described here can be particularly useful to select homogeneous lens subsamples adapted to performing unbiased cosmological parameter inference.
Private Higgs at the LHC ; We study the LHC phenomenology of a general class of Private Higgs (PH) models, in which fermions obtain their masses from their own Higgs doublets with $\mathcal{O}(1)$ Yukawa couplings, and the mass hierarchy is translated into a dynamical chain of vacuum expectation values. This is accomplished by introducing a number of light gauge-singlet scalars, the darkons, some of which could play the role of dark matter. These models allow for substantial modifications to the decays of the lightest Higgs boson, for instance through mixing with TeV-scale PH fields and light darkons: the simplest version of the model predicts the ratios of partial widths to satisfy $\Gamma(h \to VV)_{\rm PH}/\Gamma(h \to VV)_{\rm SM} \approx \Gamma(h \to gg)_{\rm PH}/\Gamma(h \to gg)_{\rm SM} \leq 1$ and $\Gamma(h \to b\bar{b})_{\rm PH}/\Gamma(h \to b\bar{b})_{\rm SM} \sim \mathcal{O}(1)$, where the inequalities are saturated only in the absence of Higgs mixing with light darkons. An extension of the model, proposed previously for generating nonzero neutrino masses, can also contribute substantially to $h \to gg$ without violating electroweak precision constraints. If the Higgs coupling to fermions is found to deviate from the Standard Model (SM) expectation, then the PH model may be a viable candidate for extending the SM.
Composite magnetic dark matter and the 130 GeV line ; We propose an economical model to explain the apparent 130 GeV gamma ray peak, found in the Fermi-LAT data, in terms of dark matter annihilation through a dipole moment interaction. The annihilating dark matter particles represent a subdominant component, with mass density 7-17% of the total DM density, and they only annihilate into $\gamma\gamma$, $\gamma Z$, and $ZZ$, through a magnetic or electric dipole moment. Annihilation into other standard model particles is suppressed, due to a mass splitting in the magnetic dipole case, or to $p$-wave scattering in the electric dipole case. In either case, the observed signal requires a dipole moment of strength $\mu \sim 2/{\rm TeV}$. We argue that composite models are the preferred means of generating such a large dipole moment, and that the magnetic case is more natural than the electric one. We present a simple model involving a scalar and a fermionic techniquark of a confining SU(2) gauge symmetry. We point out some generic challenges for getting such a model to work. The new physics leading to a sufficiently large dipole moment is below the TeV scale, indicating that the magnetic moment is not a valid effective operator for LHC physics, and that production of the strongly interacting constituents, followed by technihadronization, is a more likely signature than monophoton events. In particular, 4-photon events from the decays of bound state pairs are predicted.
Interplay Between Chaotic and Regular Motion in a Time-Dependent Barred Galaxy Model ; We study the distinction and quantification of chaotic and regular motion in a time-dependent Hamiltonian barred galaxy model. Recently, a strong correlation was found between the strength of the bar and the presence of chaotic motion in this system, as models with relatively strong bars were shown to exhibit stronger chaotic behavior compared to those having a weaker bar component. Here, we attempt to further explore this connection by studying the interplay between chaotic and regular behavior of star orbits when the parameters of the model evolve in time. This happens, for example, when one introduces linear time dependence in the mass parameters of the model to mimic, in some general sense, the effect of self-consistent interactions of the actual $N$-body problem. We thus observe, in this simple time-dependent model also, that the increase of the bar's mass leads to an increase of the system's chaoticity. We propose a new way of using the Generalized Alignment Index (GALI) method as a reliable criterion to estimate the relative fraction of chaotic vs. regular orbits in such time-dependent potentials, which proves to be much more efficient than the computation of Lyapunov exponents. In particular, GALI is able to capture subtle changes in the nature of an orbit (or ensemble of orbits) even for relatively small time intervals, which makes it ideal for detecting dynamical transitions in time-dependent systems.
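A minimal sketch of the GALI index itself (the standard definition from the Skokos et al. literature, not this paper's orbit integrations): GALI$_k$ is the volume of the parallelepiped spanned by $k$ normalized deviation vectors, conveniently computed from singular values; it collapses to zero when the vectors align, as they do along a chaotic orbit.

```python
import numpy as np

def gali(deviation_vectors):
    """GALI_k: volume spanned by k unit deviation vectors, i.e. the product
    of the singular values of the matrix whose rows are those vectors."""
    V = np.array([v / np.linalg.norm(v) for v in deviation_vectors])
    return float(np.prod(np.linalg.svd(V, compute_uv=False)))

# For a chaotic orbit the deviation vectors align with the most unstable
# direction and GALI_k -> 0; generic (regular-like) vectors keep it O(1).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 6))  # 4 deviation vectors in a 6-D phase space
aligned = np.outer(np.ones(4), w[0]) + 1e-8 * rng.standard_normal((4, 6))
print("generic:", gali(w), " nearly aligned:", gali(aligned))
```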
Clustering hidden Markov models with variational HEM ; The hidden Markov model (HMM) is a widely-used generative model that copes with sequential data, assuming that each observation is conditioned on the state of a hidden Markov chain. In this paper, we derive a novel algorithm to cluster HMMs based on the hierarchical EM (HEM) algorithm. The proposed algorithm (i) clusters a given collection of HMMs into groups of HMMs that are similar, in terms of the distributions they represent, and (ii) characterizes each group by a cluster center, i.e., a novel HMM that is representative of the group, in a manner that is consistent with the underlying generative model of the HMM. To cope with intractable inference in the E-step, the HEM algorithm is formulated as a variational optimization problem, and efficiently solved for the HMM case by leveraging an appropriate variational approximation. The benefits of the proposed algorithm, which we call variational HEM (VHEM), are demonstrated on several tasks involving time-series data, such as hierarchical clustering of motion capture sequences, and automatic annotation and retrieval of music and of online handwriting data, showing improvements over current methods. In particular, our variational HEM algorithm effectively leverages large amounts of data when learning annotation models by using an efficient hierarchical estimation procedure, which reduces learning times and memory requirements, while improving model robustness through better regularization.
Bayesian sandwich posteriors for pseudo-true parameters ; Under model misspecification, the MLE generally converges to the pseudo-true parameter, the parameter corresponding to the distribution within the model that is closest to the distribution from which the data are sampled. In many problems, the pseudo-true parameter corresponds to a population parameter of interest, and so a misspecified model can provide consistent estimation for this parameter. Furthermore, the well-known sandwich variance formula of Huber (1967) provides an asymptotically accurate sampling distribution for the MLE, even under model misspecification. However, confidence intervals based on a sandwich variance estimate may behave poorly for low sample sizes, partly due to the use of a plug-in estimate of the variance. From a Bayesian perspective, plug-in estimates of nuisance parameters generally underrepresent uncertainty in the unknown parameters, and averaging over such parameters is expected to give better performance. With this in mind, we present a Bayesian sandwich posterior distribution, whose likelihood is based on the sandwich sampling distribution of the MLE. This Bayesian approach allows for the incorporation of prior information about the parameter of interest, averages over uncertainty in the nuisance parameter and is asymptotically robust to model misspecification. In a small simulation study on estimating a regression parameter under heteroscedasticity, the addition of accurate prior information and the averaging over the nuisance parameter are both seen to improve the accuracy and calibration of confidence intervals for the parameter of interest.
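For reference, the sketch below shows the frequentist ingredient the abstract builds on: an ordinary least squares fit whose standard errors use Huber's bread-meat-bread sandwich formula (the plain HC0 plug-in estimator, on synthetic heteroscedastic data). It is only the classical estimator, not the Bayesian sandwich posterior itself.

```python
import numpy as np

def ols_sandwich(X, y):
    """OLS point estimate with Huber's sandwich (HC0) covariance.

    The 'bread' is (X'X)^{-1}; the 'meat' is X' diag(e_i^2) X built from
    squared residuals, so the covariance remains asymptotically valid
    when the homoscedasticity assumption of the working model fails.
    """
    X = np.asarray(X, float); y = np.asarray(y, float)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)
    return beta, bread @ meat @ bread        # sandwich: bread * meat * bread

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ [1.0, 2.0] + np.abs(X[:, 1]) * rng.normal(size=n)  # heteroscedastic noise
beta, cov = ols_sandwich(X, y)
print(beta, np.sqrt(np.diag(cov)))           # estimates and robust std. errors
```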
The Collisional Evolution of Debris Disks ; We explore the collisional decay of disk mass and infrared emission in debris disks. With models, we show that the rate of the decay varies throughout the evolution of the disks, increasing its rate up to a certain point, which is followed by a leveling off to a slower value. The total disk mass falls off as t^{-0.35} at its fastest point (where t is time) for our reference model, while the dust mass and its proxy, the infrared excess emission, fade significantly faster (as t^{-0.8}). These later level off to decay rates of M_tot(t) ∝ t^{-0.08} and M_dust(t) or L_IR(t) ∝ t^{-0.6}. This is slower than the t^{-1} decay given for all three system parameters by traditional analytic models. We also compile an extensive catalog of Spitzer and Herschel 24, 70, and 100 micron observations. Assuming a log-normal distribution of initial disk masses, we generate model population decay curves for the fraction of debris disk harboring stars observed at 24 micron and also model the distribution of measured excesses at the far-IR wavelengths (70-100 micron) at certain age regimes. We show general agreement at 24 micron between the decay of our numerical collisional population synthesis model and observations up to a Gyr. We associate offsets above a Gyr to stochastic events in a few select systems. We cannot fit the decay in the far infrared convincingly with grain strength properties appropriate for silicates, but those of water ice give fits more consistent with the observations.
Feynman-Kac particle integration with geometric interacting jumps ; This article is concerned with the design and analysis of discrete time Feynman-Kac particle integration models with geometric interacting jump processes. We analyze two general types of model, corresponding to whether the reference process is in continuous or discrete time. For the former, we consider discrete generation particle models defined by arbitrarily fine time mesh approximations of the Feynman-Kac models with continuous time path integrals. For the latter, we assume that the discrete process is observed at integer times and we design new approximation models with geometric interacting jumps in terms of a sequence of intermediate time steps between the integers. In both situations, we provide non-asymptotic bias and variance theorems w.r.t. the time step and the size of the system, yielding what appear to be the first results of this type for this class of Feynman-Kac particle integration models. We also discuss uniform convergence estimates w.r.t. the time horizon. Our approach is based on an original semigroup analysis with first order decompositions of the fluctuation errors.
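As background, the discrete-generation particle models under analysis alternate mutation and selection steps. The sketch below is the textbook version of such a scheme (a generic sequential Monte Carlo estimate of a Feynman-Kac normalizing constant); the Markov kernel and potential are illustrative toys, not the geometric interacting jump construction of the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def feynman_kac_particles(n_steps, n_particles, mutate, potential, init):
    """Discrete-generation Feynman-Kac particle approximation.

    Estimates Z_n = E[prod_k G(X_k)] for a Markov chain X_k by alternating
    mutation (propagate particles with the chain kernel) and selection
    (multinomial resampling proportional to the potential G).  The running
    product of the average potentials is the standard unbiased estimate.
    """
    x = init(n_particles)
    log_Z = 0.0
    for _ in range(n_steps):
        x = mutate(x)                       # mutation step
        w = potential(x)                    # potential weights
        log_Z += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                          # multinomial resampling (selection)
    return np.exp(log_Z)

# Toy example: Gaussian random walk with potential G(x) = exp(-x^2 / 2).
Z = feynman_kac_particles(
    n_steps=20, n_particles=5000,
    mutate=lambda x: x + rng.normal(scale=0.3, size=x.shape),
    potential=lambda x: np.exp(-0.5 * x**2),
    init=lambda n: rng.normal(size=n))
print(Z)
```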
On the accuracy of the Perturbative Approach for Strong Lensing Local Distortion for Pseudo-Elliptical Models ; The Perturbative Approach (PA) introduced by Alard (2007) provides analytic solutions for gravitational arcs by solving the lens equation linearized around the Einstein ring solution. This is a powerful method for lens inversion and simulations in that it can be used, in principle, for generic lens models. In this paper we aim to quantify the domain of validity of this method for three quantities derived from the linearized mapping: caustics, critical curves, and the deformation cross section (i.e., the arc cross section in the infinitesimal circular source approximation). We consider lens models with elliptical potentials, in particular the Singular Isothermal Elliptic Potential and Pseudo-Elliptical Navarro-Frenk-White models. We show that the PA is exact for the first model. For the second, we obtain constraints on the model parameter space, given by the potential ellipticity parameter ε and characteristic convergence κ_s, such that the PA is accurate for the aforementioned quantities. In this process we obtain analytic expressions for several lensing functions, which are valid for the PA in general. The determination of this domain of validity could have significant implications for the use of the PA, but it still needs to be probed with extended sources.
Radiating Gravitational Collapse with Shearing Motion and Bulk Viscosity Revisited ; A new model is proposed for a collapsing star consisting of an anisotropic fluid with bulk viscosity, radial heat flow and outgoing radiation. In a previous paper one of us introduced a time-dependent function into the g_rr metric component, besides the time-dependent metric functions g_θθ and g_φφ. The aim of this work is to generalize this previous model by introducing bulk viscosity and compare it to the non-viscous collapse. The behavior of the density, pressure, mass, luminosity and the effective adiabatic index is analyzed. Our work is also compared to the case of a collapsing fluid with bulk viscosity of another previous model, for a star of 6 M_⊙. The pressure of the star, at the beginning of the collapse, is isotropic, but due to the presence of the bulk viscosity the pressure becomes more and more anisotropic. The black hole is never formed, because the apparent horizon formation condition is never satisfied, in contrast to the previous model where a black hole is formed. An observer at infinity sees a radial point source radiating exponentially until it reaches the time of maximum luminosity, at which point the star suddenly turns off. This contrasts with the former model, where the luminosity also increases exponentially, reaches a maximum, and then decreases until the formation of the black hole. The effective adiabatic index diminishes due to the bulk viscosity, thus increasing the instability of the system, in both models, i.e., in the former paper and in this work.
Nonparametric Bayesian modelling of digital gene expression data ; Next-generation sequencing technologies provide a revolutionary tool for generating gene expression data. Starting with a fixed RNA sample, they construct a library of millions of differentially abundant short sequence tags or reads, which constitute a fundamentally discrete measure of the level of gene expression. A common limitation in experiments using these technologies is the low number or even absence of biological replicates, which complicates the statistical analysis of digital gene expression data. Analysis of this type of data has often been based on modified tests originally devised for analysing microarrays; both these and even de novo methods for the analysis of RNA-seq data are plagued by the common problem of low replication. We propose a novel, nonparametric Bayesian approach for the analysis of digital gene expression data. We begin with a hierarchical model for modelling overdispersed count data and a blocked Gibbs sampling algorithm for inferring the posterior distribution of model parameters conditional on these counts. The algorithm compensates for the problem of low numbers of biological replicates by clustering together genes with tag counts that are likely sampled from a common distribution and using this augmented sample for estimating the parameters of this distribution. The number of clusters is not decided a priori, but it is inferred along with the remaining model parameters. We demonstrate the ability of this approach to model biological data with high fidelity by applying the algorithm on a public dataset obtained from cancerous and non-cancerous neural tissues.
Energy and potential enstrophy flux constraints in quasigeostrophic models ; We investigate an inequality constraining the energy and potential enstrophy flux spectra in two-layer and multilayer quasigeostrophic models. Its physical significance is that it can diagnose whether any given multilayer model that allows coexisting downscale cascades of energy and potential enstrophy can allow the downscale energy flux to become large enough to yield a mixed energy spectrum, where the dominant k^{-3} scaling is overtaken by a subdominant k^{-5/3} contribution beyond a transition wavenumber k_t situated in the inertial range. The validity of the flux inequality implies that this scaling transition cannot occur within the inertial range, whereas a violation of the flux inequality beyond some wavenumber k_t implies the existence of a scaling transition near that wavenumber. This flux inequality holds unconditionally in two-dimensional Navier-Stokes turbulence; however, it is far from obvious that it continues to hold in multilayer quasigeostrophic models, because the dissipation rate spectra for energy and potential enstrophy no longer relate in a trivial way, as in two-dimensional Navier-Stokes. We derive the general form of the energy and potential enstrophy dissipation rate spectra for a generalized symmetrically coupled multilayer model. From this result, we prove that in a symmetrically coupled multilayer quasigeostrophic model, where the dissipation terms for each layer consist of the same Fourier-diagonal linear operator applied on the streamfunction field of only the same layer, the flux inequality continues to hold. It follows that a necessary condition to violate the flux inequality is the use of asymmetric dissipation, where different operators are used on different layers.
First-principles model potentials for lattice-dynamical studies: general methodology and example of application to ferroic perovskite oxides ; We present a scheme to construct model potentials, with parameters computed from first principles, for large-scale lattice-dynamical simulations of materials. Our method mimics the traditional solid-state approach to the investigation of vibrational spectra, i.e., we start from a suitably chosen reference configuration of the material and describe its energy as a function of arbitrary atomic distortions by means of a Taylor series. Such a form of the potential-energy surface is completely general, trivial to formulate for any compound, and physically transparent. Further, the approximations involved in our effective models are clear-cut, and the precision can be improved in a systematic and well-defined fashion. Moreover, such a simple definition allows for a straightforward determination of the parameters in the low-order terms of the series, as they are the direct result of density-functional perturbation-theory calculations, which greatly simplifies the model construction. Here we present such a scheme, discuss a practical and versatile methodology for the calculation of the model parameters from first principles, and describe our results for two challenging cases in which the model potential is strongly anharmonic, namely, the ferroic perovskite oxides PbTiO3 and SrTiO3. The choice of test materials was partly motivated by historical reasons, since our scheme can be viewed as a natural extension of (and was initially inspired by) the so-called first-principles effective-Hamiltonian approach to the investigation of temperature-driven effects in ferroelectric perovskite oxides. Thus, the study of these compounds allows us to better describe the connections between the effective-Hamiltonian method and ours.
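To make the Taylor-series idea concrete, here is a deliberately tiny sketch of such a model potential: a harmonic force-constant term plus an on-site quartic correction, the generic shape of a strongly anharmonic (double-well) ferroelectric energy surface. All coefficients are hypothetical placeholders; in the scheme described above they would come from density-functional perturbation theory.

```python
import numpy as np

def taylor_lattice_energy(u, phi2, phi4_diag):
    """Energy of a crystal as a truncated Taylor series in displacements.

    u         : flat array of atomic displacements about the reference
                configuration (the expansion point of the Taylor series).
    phi2      : harmonic force-constant matrix (second-order coefficients).
    phi4_diag : illustrative on-site quartic coefficients, the lowest-order
                anharmonic correction kept in this toy model.

    E(u) = 1/2 u.Phi2.u + sum_i phi4_i u_i^4; a real model potential would
    include many more anharmonic couplings than this sketch retains.
    """
    u = np.asarray(u, float)
    harmonic = 0.5 * u @ phi2 @ u
    anharmonic = np.sum(phi4_diag * u**4)
    return harmonic + anharmonic

# Minimal double-well example: an unstable (negative) harmonic on-site term
# stabilized by the quartic one, as for a ferroelectric soft mode.
phi2 = np.array([[-1.0, 0.2], [0.2, -1.0]])
print(taylor_lattice_energy([0.5, -0.5], phi2, phi4_diag=np.array([1.0, 1.0])))
```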
Robustness of predator-prey models for confinement regime transitions in fusion plasmas ; Energy transport and confinement in tokamak fusion plasmas is usually determined by the coupled nonlinear interactions of small-scale drift turbulence and larger scale coherent nonlinear structures, such as zonal flows, together with free energy sources such as temperature gradients. Zero-dimensional models, designed to embody plausible physical narratives for these interactions, can help identify the origin of enhanced energy confinement and of transitions between confinement regimes. A prime zero-dimensional paradigm is predator-prey or Lotka-Volterra. Here we extend a successful three-variable (temperature gradient; micro-turbulence level; one class of coherent structure) model in this genre (M. A. Malkov and P. H. Diamond, Phys. Plasmas 16, 012504 (2009)), by adding a fourth variable representing a second class of coherent structure. This requires a fourth coupled nonlinear ordinary differential equation. We investigate the degree of invariance of the phenomenology generated by the model of Malkov and Diamond, given this additional physics. We study and compare the long-time behaviour of the three-equation and four-equation systems, their evolution towards the final state, and their attractive fixed points and limit cycles. We explore the sensitivity of paths to attractors. It is found that, for example, an attractive fixed point of the three-equation system can become a limit cycle of the four-equation system. Addressing these questions, which we together refer to as robustness for convenience, is particularly important for models which, as here, generate sharp transitions in the values of system variables which may replicate some key features of confinement transitions. Our results help establish the robustness of the zero-dimensional model approach to capturing observed confinement phenomenology in tokamak fusion plasmas.
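A minimal numerical sketch of the kind of four-variable system described (one free-energy source, one turbulence level, two classes of coherent structure) is given below. The coupling coefficients are hypothetical placeholders chosen only to produce predator-prey dynamics; they are not the rates of the extended Malkov-Diamond model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, Q=1.0):
    """Hypothetical 4-variable predator-prey system (placeholder rates).

    N  : driving temperature gradient (free-energy source)
    E  : micro-turbulence level (prey)
    Z1 : first class of coherent structure (predator)
    Z2 : second class of coherent structure (the added fourth variable)
    """
    N, E, Z1, Z2 = y
    dN = Q - 0.5 * N - 1.0 * N * E                  # drive vs. relaxation
    dE = E * (N - 0.7 * Z1 - 0.4 * Z2 - 0.1 * E)    # growth minus predation
    dZ1 = Z1 * (1.0 * E - 0.3)                      # structure 1 fed by E
    dZ2 = Z2 * (0.6 * E - 0.2)                      # structure 2 fed by E
    return [dN, dE, dZ1, dZ2]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.01, 0.01, 0.01],
                dense_output=True, rtol=1e-8)
print(sol.y[:, -1])  # late-time state: a fixed point or a limit cycle
```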
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics ; Statistical postprocessing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a postprocessing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the German Meteorological Service. It yields calibrated and sharp predictive distributions and compares favourably with extended logistic regression and Bayesian model averaging, which are state of the art approaches for precipitation postprocessing. The incorporation of neighbourhood information further improves predictive performance and turns out to be a useful strategy to account for displacement errors of the dynamical forecasts in a probabilistic forecasting framework.
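A small sketch of the central distributional ingredient, a generalized extreme value distribution left-censored at zero, follows. The scipy shape convention (c = -xi) is the only subtlety; in a full EMOS implementation the parameters mu, sigma, xi would be linked to ensemble statistics and fitted by minimizing the closed-form CRPS, which is not reproduced here.

```python
import numpy as np
from scipy.stats import genextreme

def censored_gev_cdf(y, mu, sigma, xi):
    """Predictive CDF of precipitation under a GEV left-censored at zero.

    All probability mass the GEV places below zero is assigned to the
    point {0} (no precipitation); above zero the GEV CDF is unchanged.
    Note scipy's convention: genextreme(c=-xi) matches the usual GEV
    shape parameter xi.
    """
    dist = genextreme(c=-xi, loc=mu, scale=sigma)
    y = np.asarray(y, float)
    return np.where(y < 0.0, 0.0, dist.cdf(np.maximum(y, 0.0)))

# Probability of a dry 6-h period for one illustrative parameter set:
print(censored_gev_cdf(0.0, mu=1.0, sigma=2.0, xi=0.2))
```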
Kitaev-Heisenberg models for iridates on the triangular, hyperkagome, kagome, fcc, and pyrochlore lattices ; The Kitaev-Heisenberg (KH) model has been proposed to capture magnetic interactions in iridate Mott insulators on the honeycomb lattice. We show that analogous interactions arise in many other geometries built from edge-sharing IrO6 octahedra, including the pyrochlore and hyperkagome lattices relevant to Ir2O4 and Na4Ir3O8 respectively. The Kitaev spin liquid exact solution does not generalize to these lattices. However, a different exactly soluble point of the honeycomb lattice KH model, obtained by a four-sublattice transformation to a ferromagnet, generalizes to all these lattices. A Klein four-group (Z2 × Z2) structure is associated with this mapping (hence Klein duality). A finite lattice admits the duality if a simple geometrical condition is met. This duality predicts fluctuation-free ordered states on these different 2D and 3D lattices, which are analogs of the honeycomb lattice KH stripy order. This result is used in conjunction with a semiclassical Luttinger-Tisza approximation to obtain phase diagrams for KH models on the different lattices. We also discuss a Majorana fermion based mean field theory at the Kitaev point, which is exact on the honeycomb lattice, for the KH models on the different lattices. We attribute the rich behavior of these models to the interplay of geometric frustration and frustration induced by spin-orbit coupling.
Robust and Trend-Following Student's t Kalman Smoothers ; We present a Kalman smoothing framework based on modeling errors using the heavy-tailed Student's t distribution, along with algorithms, convergence theory, an open-source general implementation, and several important applications. The computational effort per iteration grows linearly with the length of the time series, and all smoothers allow nonlinear process and measurement models. Robust smoothers form an important subclass of smoothers within this framework. These smoothers work in situations where measurements are highly contaminated by noise or include data unexplained by the forward model. Highly robust smoothers are developed by modeling measurement errors using the Student's t distribution, and outperform the recently proposed L1-Laplace smoother in extreme situations with data containing 20% or more outliers. A second special application we consider in detail allows tracking sudden changes in the state. It is developed by modeling the process noise using the Student's t distribution, so that the resulting smoother can follow abrupt state changes. These features can be used separately or in tandem, and we present a general smoother algorithm and open-source implementation, together with a convergence analysis that covers a wide range of smoothers. A key ingredient of our approach is a technique to deal with the non-convexity of the Student's t loss function. Numerical results for linear and nonlinear models illustrate the performance of the new smoothers for robust and tracking applications, as well as for mixed problems that have both types of features.
The simplest model of galaxy formation I: A formation history model of galaxy stellar mass growth ; We introduce a simple model to self-consistently connect the growth of galaxies to the formation history of their host dark matter haloes. Our model is defined by two simple functions: the baryonic growth function, which controls the rate at which new baryonic material is made available for star formation, and the physics function, which controls the efficiency with which this material is converted into stars. Using simple, phenomenologically motivated forms for both functions that depend only on a single halo property, we demonstrate the model's ability to reproduce the z = 0 red and blue stellar mass functions. Furthermore, by adding redshift as a second input variable to the physics function we show that the reproduction of the global stellar mass function out to z = 3 is improved. We conclude by discussing the general utility of our new model, highlighting its usefulness for creating mock galaxy samples which have a number of key advantages over those generated by other techniques.
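The two-function structure is simple enough to caricature in a few lines. The sketch below integrates a toy version with invented functional forms (a constant baryon fraction and a log-normal-shaped efficiency peaked at a hypothetical pivot halo mass m_pivot); none of the numbers are those of the paper.

```python
import numpy as np

def grow_stellar_mass(t, halo_accretion, baryon_fraction=0.17,
                      eps0=0.1, m_pivot=1e12):
    """Toy version of the two-function model (illustrative forms only).

    baryonic growth function: dM_b/dt = f_b * dM_h/dt
    physics function:         eps(M_h) = eps0 * exp(-(log10(M_h/m_pivot))^2)
    dM_*/dt = eps(M_h) * dM_b/dt, integrated with forward Euler.
    """
    m_halo, m_star = 1e10, 0.0
    dt = np.diff(t, prepend=t[0])           # first step has zero width
    for i, ti in enumerate(t):
        dmh = halo_accretion(ti, m_halo) * dt[i]
        eps = eps0 * np.exp(-np.log10(m_halo / m_pivot) ** 2)
        m_star += eps * baryon_fraction * dmh   # stars from fresh baryons
        m_halo += dmh
    return m_halo, m_star

# Hypothetical halo accretion history dM_h/dt = M_h / t (i.e. M_h grows
# linearly with cosmic time), from t = 0.1 to 13.7 Gyr:
t = np.linspace(0.1, 13.7, 500)
mh, ms = grow_stellar_mass(t, lambda t, m: m / t)
print(f"M_halo = {mh:.2e}, M_* = {ms:.2e} (arbitrary solar-mass units)")
```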
On Model-Based RIP-1 Matrices ; The Restricted Isometry Property (RIP) is a fundamental property of a matrix enabling sparse recovery. Informally, an m x n matrix satisfies RIP of order k in the ℓ_p norm if ‖Ax‖_p ≈ ‖x‖_p for any vector x that is k-sparse, i.e., that has at most k non-zeros. The minimal number of rows m necessary for the property to hold has been extensively investigated, and tight bounds are known. Motivated by signal processing models, a recent work of Baraniuk et al. has generalized this notion to the case where the support of x must belong to a given model, i.e., a given family of supports. This more general notion is much less understood, especially for norms other than ℓ_2. In this paper we present tight bounds for the model-based RIP property in the ℓ_1 norm. Our bounds hold for the two most frequently investigated models: tree-sparsity and block-sparsity. We also show implications of our results to sparse recovery problems.
Variable selection for general index models via sliced inverse regression ; Variable selection, also known as feature selection in machine learning, plays an important role in modeling high dimensional data and is key to data-driven scientific discoveries. We consider here the problem of detecting influential variables under the general index model, in which the response depends on the predictors through an unknown function of one or more linear combinations of them. Instead of building a predictive model of the response given combinations of predictors, we model the conditional distribution of predictors given the response. This inverse modeling perspective motivates us to propose a stepwise procedure based on likelihood-ratio tests, which is effective and computationally efficient in identifying important variables without specifying a parametric relationship between predictors and the response. For example, the proposed procedure is able to detect variables with pairwise, three-way or even higher-order interactions among p predictors with a computational time of O(p) instead of O(p^k), with k being the highest order of interactions. Its excellent empirical performance in comparison with existing methods is demonstrated through simulation studies as well as real data examples. Consistency of the variable selection procedure when both the number of predictors and the sample size go to infinity is established.
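For orientation, the classical sliced inverse regression estimator (Li, 1991) that underlies this inverse-modeling perspective is sketched below: slice the sorted response, average the standardized predictors within each slice, and eigen-decompose the between-slice covariance. This is the textbook estimator, not the paper's stepwise likelihood-ratio procedure.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_directions=1):
    """Basic SIR: estimate index directions from the inverse curve E[X|y]."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    # Standardize predictors: Z = (X - mu) @ cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt
    # Between-slice covariance of the slice means of Z
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for s in slices:
        zbar = Z[s].mean(0)
        M += len(s) / n * np.outer(zbar, zbar)
    _, v = np.linalg.eigh(M)
    dirs = cov_inv_sqrt @ v[:, -n_directions:]   # back to the original scale
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 2 * X[:, 1]) ** 3 + 0.1 * rng.normal(size=2000)
print(sliced_inverse_regression(X, y).ravel())  # loads on the first two coords
```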
Ovarian volume throughout life: a validated normative model ; The measurement of ovarian volume has been shown to be a useful indirect indicator of the ovarian reserve in women of reproductive age, in the diagnosis and management of a number of disorders of puberty and adult reproductive function, and is under investigation as a screening tool for ovarian cancer. To date there is no normative model of ovarian volume throughout life. By searching the published literature for ovarian volume in healthy females, and using our own data from multiple sources (combined n = 59,994), we have generated and robustly validated the first model of ovarian volume from conception to 82 years of age. This model shows that 69% of the variation in ovarian volume is due to age alone. We have shown that in the average case ovarian volume rises from 0.7 mL (95% CI 0.4-1.1 mL) at 2 years of age to a peak of 7.7 mL (95% CI 6.5-9.2 mL) at 20 years of age, with a subsequent decline to about 2.8 mL (95% CI 2.7-2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis and management of a number of diverse gynaecological and reproductive conditions in females from birth to menopause and beyond.
Bayesian Functional Generalized Additive Models with Sparsely Observed Covariates ; The functional generalized additive model (FGAM) was recently proposed in McLean et al. (2013) as a more flexible alternative to the common functional linear model (FLM) for regressing a scalar on functional covariates. In this paper, we develop a Bayesian version of FGAM for the case of Gaussian errors with identity link function. Our approach allows the functional covariates to be sparsely observed and measured with error, whereas the estimation procedure of McLean et al. (2013) required that they be noiselessly observed on a regular grid. We consider both Monte Carlo and variational Bayes methods for fitting the FGAM with sparsely observed covariates. Due to the complicated form of the model posterior distribution and full conditional distributions, standard Monte Carlo and variational Bayes algorithms cannot be used. The strategies we use to handle the updating of parameters without closed-form full conditionals should be of independent interest to applied Bayesian statisticians working with nonconjugate models. Our numerical studies demonstrate the benefits of our algorithms over a two-step approach of first recovering the complete trajectories using standard techniques and then fitting a functional regression model. In a real data analysis, our methods are applied to forecasting the closing price of items up for auction on the online auction website eBay.
A small cosmological constant due to non-perturbative quantum effects ; We propose that the expectation value of the stress energy tensor of the Standard Model should be given by <T_mu nu> = rho_vac eta_mu nu, with a vacuum energy rho_vac that differs from the usual dimensional analysis result by an exponentially small factor associated with non-perturbative effects. We substantiate our proposal by a rigorous analysis of a toy model, namely the 2-dimensional Gross-Neveu model. In particular, we address, within this model, the key question of the renormalization ambiguities affecting the calculation. The stress energy operator is constructed concretely via the operator product expansion. The non-perturbative factor in the vacuum energy is seen as a consequence of the facts that (a) the OPE coefficients have an analytic dependence on g, and (b) the vacuum correlations have a non-analytic (non-perturbative) dependence on g, which we propose to be a generic feature of QFT. Extrapolating our result from the Gross-Neveu model to the Standard Model, one would expect to find rho_vac ~ Lambda^4 e^{-O(1)/g^2}, where Lambda is an energy scale such as Lambda = M_H, and g is a gauge coupling such as g^2/(4 pi) = alpha_EW. The exponentially small factor due to non-perturbative effects could explain the unnatural smallness of this quantity.
Fast inference in generalized linear models via expected log-likelihoods ; Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting expected log-likelihood can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally challenging dataset of neural spike trains obtained via large-scale multielectrode recordings in the primate retina.
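The computational point is easy to illustrate for a Poisson GLM with canonical link and standard normal covariates, where the expectation has the closed form E[exp(x·theta)] = exp(|theta|^2/2). In the sketch below (synthetic data, illustrative sizes), the exact log-likelihood needs an O(np) pass per evaluation, while the expected log-likelihood costs O(p) after a one-off computation of X'y.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50_000, 10
X = rng.normal(size=(n, p))               # covariates set by the experimenter
theta_true = rng.normal(scale=0.2, size=p)
y = rng.poisson(np.exp(X @ theta_true))   # Poisson GLM, canonical (log) link

def exact_loglik(theta):
    """Exact log-likelihood (up to the theta-independent log y! term):
    each evaluation costs an O(np) pass over the design matrix."""
    return y @ (X @ theta) - np.exp(X @ theta).sum()

s = X.T @ y                                # sufficient statistic, computed once
def expected_loglik(theta):
    """Expected log-likelihood: the sum of exp(x_i . theta) is replaced by
    n * E[exp(x . theta)] = n * exp(|theta|^2 / 2) for x ~ N(0, I), so each
    evaluation is now only O(p)."""
    return s @ theta - n * np.exp(0.5 * theta @ theta)

print(exact_loglik(theta_true), expected_loglik(theta_true))  # close values
```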
Quantum canonical tensor model and an exact wave function ; Tensor models in various forms are being studied as models of quantum gravity. Among them, the canonical tensor model has a canonical pair of rank-three tensors as dynamical variables, and is a pure constraint system with first-class constraints. The Poisson algebra of the first-class constraints has structure functions, and provides an algebraically consistent way of discretizing the Dirac first-class constraint algebra for general relativity. This paper successfully formulates the Wheeler-DeWitt scheme of quantization of the canonical tensor model; the ordering of operators in the constraints is determined without ambiguity by imposing Hermiticity and covariance on the constraints, and the commutation algebra of constraints takes essentially the same form as the classical Poisson algebra, i.e., it is first-class. Thus one could consistently obtain, at least locally in the configuration space, wave functions of the universe by solving the partial differential equations representing the constraints, i.e., the Wheeler-DeWitt equations for the quantum canonical tensor model. The unique wave function for the simplest nontrivial case is exactly and globally obtained. Although this case is far from being realistic, the wave function has a few physically interesting features; it shows that locality is favored, and that there exists a locus of configurations with features of a beginning of the universe.
Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems ; The modelling, analysis, and visualisation of dynamic geospatial phenomena has been identified as a key developmental challenge for next-generation Geographic Information Systems (GIS). In this context, the envisaged paradigmatic extensions to contemporary foundational GIS technology raise fundamental questions concerning the ontological, formal representational, and analytical (computational) methods that would underlie their spatial information theoretic underpinnings. We present the conceptual overview and architecture for the development of high-level semantic and qualitative analytical capabilities for dynamic geospatial domains. Building on formal methods in the areas of commonsense reasoning, qualitative reasoning, spatial and temporal representation and reasoning, reasoning about actions and change, and computational models of narrative, we identify concrete theoretical and practical challenges that accrue in the context of formal reasoning about 'space, events, actions, and change'. With this as a basis, and within the backdrop of an illustrated scenario involving the spatio-temporal dynamics of urban narratives, we address specific problems and solution techniques chiefly involving 'qualitative abstraction', 'data integration and spatial consistency', and 'practical geospatial abduction'. From a broad topical viewpoint, we propose that next-generation dynamic GIS technology demands a transdisciplinary scientific perspective that brings together Geography, Artificial Intelligence, and Cognitive Science. Keywords: artificial intelligence; cognitive systems; human-computer interaction; geographic information systems; spatio-temporal dynamics; computational models of narrative; geospatial analysis; geospatial modelling; ontology; qualitative spatial modelling and reasoning; spatial assistance systems
Large Eddy Simulation, Turbulent Transport and the Renormalization Group ; In large eddy simulations, the Reynolds averages of nonlinear terms are not directly computable in terms of the resolved variables and require a closure hypothesis or model, known as a subgrid scale term. Inspired by the renormalization group (RNG), we introduce an expansion for the unclosed terms, carried out explicitly to all orders. In leading order, this expansion defines subgrid scale unclosed terms, which we relate to the dynamic subgrid scale closure models. The expansion, which generalizes the Leonard stress for closure analysis, suggests a systematic higher order determination of the model coefficients. The RNG point of view sheds light on the non-uniqueness of the infinite Reynolds number limit. For the mixing of N species, we see an (N-1)-parameter family of infinite Reynolds number solutions, labeled by dimensionless parameters of the limiting Euler equations, in a manner intrinsic to the RNG itself. Large eddy simulations, with their Leonard stress and dynamic subgrid models, break this non-uniqueness and predict unique model coefficients on the basis of theory. In this sense large eddy simulations go beyond the RNG methodology, which does not in general predict model coefficients.
Electric Dipole Moments in Two-Higgs-Doublet Models ; Electric dipole moments are extremely sensitive probes for additional sources of CP violation in new physics models. Specifically, they have been argued in the past to exclude new CP-violating phases in two-Higgs-doublet models. Since recently models including such phases have been discussed widely, we revisit the available constraints in the presence of mechanisms which are typically invoked to evade flavour-changing neutral currents. To that aim, we start by assessing the necessary calculations on the hadronic, nuclear and atomic-molecular level, deriving expressions with conservative error estimates. Their phenomenological analysis in the context of two-Higgs-doublet models yields strong constraints, in some cases weakened by a cancellation mechanism among contributions from neutral scalars. While the corresponding parameter combinations do not yet have to be unnaturally small, the constraints are likely to preclude large effects in other CP-violating observables. Nevertheless, the generically expected contributions to electric dipole moments in this class of models lie within the projected sensitivity of the next-generation experiments.
Neutron stars in Starobinsky model ; We study the structure of neutron stars in the f(R) = R + alpha R^2 theory of gravity (Starobinsky model), in an exact and non-perturbative approach. In this model, apart from the standard General Relativistic junction conditions, two extra conditions, namely the continuity of the curvature scalar and of its first derivative, need to be satisfied. For an exterior Schwarzschild solution, the curvature scalar and its derivative have to be zero at the stellar surface. We show that for some equations of state (EoS) of matter, matching all conditions at the surface of the star is impossible. Hence the model brings two major fine-tuning problems: (i) only some particular classes of EoS are consistent with Schwarzschild at the surface, and (ii) given such an EoS, only a very particular set of boundary conditions at the centre of the star will satisfy the given boundary conditions at the surface. Hence we show that this model (and subsequently many other f(R) models where the uniqueness theorem is valid) is highly unnatural for the existence of compact astrophysical objects. This is because the EoS of a compact star should be completely determined by the physics of nuclear matter at high density and not by the theory of gravity.
Theory of off-normal incidence ion sputtering of surfaces of type A_x B_{1-x} and a conformal map method for stochastic continuum models ; Bradley et al. recently provided an explanation of nanodot and defect-free ordered ripple production from binary compounds, for normal and oblique incidence ion sputtering respectively, by including the effect of the preferential sputtering of one type of atom relative to the other on the surfaces of binary compounds. In this paper, we propose an extended anisotropic model of such surfaces of type A_x B_{1-x} that addresses anisotropy in the variations of the species composition as well. We show that this model gives the anisotropic Cuerno-Barabási model as x approaches unity, and analyze the general properties of the model. Further, the complexity of the solutions of nonlinear higher-order differential equations in general has led to the creation of a great number of highly technical and computationally intensive numerical schemes. We introduce a simple conformal map method which allows for a fast and accurate simulation of the dynamical evolution of ion-sputtered surfaces based on any stochastic continuum model. An optimization algorithm for an efficient application of the scheme is also introduced. In this scheme the noise term has a physical meaning, which allows one to go beyond the usual white noise approximation by actually being able to assign physical parameters to it.
The inference of gene trees with species trees ; Molecular phylogeny has focused mainly on improving models for the reconstruction of gene trees based on sequence alignments. Yet, most phylogeneticists seek to reveal the history of species. Although the histories of genes and species are tightly linked, they are seldom identical, because genes duplicate, are lost or horizontally transferred, and because alleles can coexist in populations for periods that may span several speciation events. Building models describing the relationship between gene and species trees can thus improve the reconstruction of gene trees when a species tree is known, and vice versa. Several approaches have been proposed to solve the problem in one direction or the other, but in general neither gene trees nor species trees are known. Only a few studies have attempted to jointly infer gene trees and species trees. In this article we review the various models that have been used to describe the relationship between gene trees and species trees. These models account for gene duplication and loss, transfer or incomplete lineage sorting. Some of them consider several types of events together, but none currently exists that considers the full repertoire of processes that generate gene trees along the species tree. Simulations as well as empirical studies on genomic data show that combining gene tree-species tree models with models of sequence evolution improves gene tree reconstruction. In turn, these better gene trees provide a better basis for studying genome evolution or reconstructing ancestral chromosomes and ancestral gene sequences. We predict that gene tree-species tree methods that can deal with genomic data sets will be instrumental to advancing our understanding of genomic evolution.
A stochastic reorganizational bath model for electronic energy transfer ; The fluctuations of the optical gap induced by the environment play crucial roles in electronic energy transfer dynamics. One of the simplest approaches to incorporate such fluctuations in energy transfer dynamics is the well-known Haken-Strobl-Reineker model, in which the energy-gap fluctuation is approximated as white noise. Recently, several groups have employed molecular dynamics simulations and excited-state calculations in conjunction to take the thermal fluctuation of excitation energies into account. Here, we discuss a rigorous connection between the stochastic and the atomistic bath models. If the phonon bath is treated classically, the time evolution of the exciton-phonon system can be described by Ehrenfest dynamics. To establish the relationship between the stochastic and atomistic bath models, we employ a projection operator technique to derive the generalized Langevin equations for the energy-gap fluctuations. The stochastic bath model can be obtained as an approximation of the atomistic Ehrenfest equations via the generalized Langevin approach. Based on this connection, we propose a novel scheme to correct reorganization effects within the framework of stochastic models. The proposed scheme provides a better description of the population dynamics, especially in the regime of strong exciton-phonon coupling. Finally, we discuss the effect of the bath reorganization in the absorption and fluorescence spectra of ideal J-aggregates in terms of the Stokes shifts. For this purpose, we introduce a simple relationship that relates the reorganization contribution to the Stokes shifts (the reorganization shift) to three parameters: the monomer reorganization energy, the relaxation time of the optical gap, and the exciton delocalization length. This simple relationship allows one to classify the origin of the Stokes shifts in molecular aggregates.
Physical modeling of the soil swelling curve vs. the shrinkage curve ; Physical understanding of the links between soil swelling, texture, structure, cracking, and sample size is of great interest for the physical understanding of many processes in the soil-air-water system and for applications in civil, agricultural, and environmental engineering. The background of this work is an available chain of interconnected physical shrinkage curve models for clay, intra-aggregate matrix, aggregated soil without cracks, and soil with cracks. The objective of the work is to generalize these models to the case of swelling, and to construct the physical swelling-model chain with a step-by-step transition from clay to aggregated soil with cracks. The generalization is based on thorough accounting for the analogies and differences between shrinkage and swelling and the corresponding use, modification, or replacement of the soil shrinkage features. Two specific soil swelling features to be used are (i) air entrapping in pores of the contributing clay, and (ii) aggregate destruction with the formation of new aggregate surfaces. The input for the prediction of the swelling curve of an aggregated soil coincides with that of the available model of the shrinkage curve. The analysis of available data on the maximum shrink-swell cycle of two soils with different texture and structure, accounting for sample size, is conducted as applied to swelling curves and to the residual crack volume and maximum-swelling-volume decrease after the shrink-swell cycle. Results of the analysis show evidence in favor of the swelling model chain.
When do we need to account for the geometric phase in excited state dynamics? ; We investigate the role of the geometric phase (GP) in an internal conversion process when the system changes its electronic state by passing through a conical intersection (CI). Local analysis of a two-dimensional linear vibronic coupling (LVC) model Hamiltonian near the CI shows that the role of the GP is twofold. First, it compensates for a repulsion created by the so-called diagonal Born-Oppenheimer correction (DBOC). Second, the GP enhances the non-adiabatic transition probability for a wave-packet part that experiences a central collision with the CI. To assess the significance of both GP contributions we propose two indicators that can be computed from parameters of electronic surfaces and initial conditions. To generalize our analysis to N-dimensional systems we introduce a reduction of a general N-dimensional LVC model to an effective 2D LVC model, using a mode transformation that preserves the short-time dynamics of the original N-dimensional model. Using examples of the bis(methylene) adamantyl and butatriene cations, and the pyrazine molecule, we demonstrate that their effective 2D models reproduce the short-time dynamics of the corresponding full dimensional models, and that the introduced indicators are very reliable in assessing GP effects.
Hypothesis Testing for Parsimonious Gaussian Mixture Models ; Gaussian mixture models with eigen-decomposed covariance structures make up the most popular family of mixture models for clustering and classification, i.e., the Gaussian parsimonious clustering models (GPCM). Although the GPCM family has been used for almost 20 years, selecting the best member of the family in a given situation remains a troublesome problem. Likelihood ratio tests are developed to tackle this problem. These likelihood ratio tests use the heteroscedastic model under the alternative hypothesis, but provide much more flexibility and real-world applicability than previous approaches that compare the homoscedastic Gaussian mixture versus the heteroscedastic one. Along the way, a novel maximum likelihood estimation procedure is developed for two members of the GPCM family. Simulations show that the chi-squared reference distribution gives a reasonable approximation for the LR statistics only when the sample size is considerable and when the mixture components are well separated; accordingly, following Lo (2008), a parametric bootstrap is adopted. Furthermore, by generalizing the idea of Greselin and Punzo (2013) to the clustering context, a closed testing procedure, having the defined likelihood ratio tests as local tests, is introduced to assess a unique model in the general family. The advantages of this likelihood ratio testing procedure are illustrated via an application to the well-known Iris data set.
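A minimal sketch of the parametric-bootstrap likelihood-ratio idea follows, using scikit-learn mixtures with 'tied' versus 'full' covariances as a stand-in nested pair (loosely mirroring a homoscedastic-versus-heteroscedastic comparison, not the actual GPCM pair tested in the paper). The LR statistic is recalibrated against refits on data simulated from the null fit, precisely because the chi-squared reference is unreliable here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bootstrap_lrt(X, n_components=2, n_boot=50):
    """Parametric-bootstrap LRT between nested mixture covariance models."""
    def fit(Z, cov):
        return GaussianMixture(n_components, covariance_type=cov,
                               n_init=3, random_state=0).fit(Z)
    null, alt = fit(X, "tied"), fit(X, "full")
    # score() returns the mean per-sample log-likelihood, hence the * len(X)
    lr_obs = 2 * (alt.score(X) - null.score(X)) * len(X)
    lr_boot = []
    for _ in range(n_boot):
        Z, _ = null.sample(len(X))                 # simulate under H0
        lr_boot.append(2 * (fit(Z, "full").score(Z)
                            - fit(Z, "tied").score(Z)) * len(Z))
    p_value = np.mean(np.asarray(lr_boot) >= lr_obs)
    return lr_obs, p_value

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(4, 1, (150, 2))])
print(bootstrap_lrt(X))
```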
Modeling Bitcoin Contracts by Timed Automata ; Bitcoin is a peer-to-peer cryptographic currency system. Since its introduction in 2008, Bitcoin has gained noticeable popularity, mostly due to the following properties: (1) the transaction fees are very low, and (2) it is not controlled by any central authority, which in particular means that nobody can print the money to generate inflation. Moreover, the transaction syntax allows one to create so-called contracts, where a number of mutually distrusting parties engage in a protocol to jointly perform some financial task, and the fairness of this process is guaranteed by the properties of Bitcoin. Although Bitcoin contracts have several potential applications in the digital economy, so far they have not been widely used in real life. This is partly due to the fact that they are cumbersome to create and analyze, and hence risky to use. In this paper we propose to remedy this problem by using methods originally developed for the computer-aided analysis of hardware and software systems, in particular those based on timed automata. More concretely, we propose a framework for modeling Bitcoin contracts using timed automata in the UPPAAL model checker. Our method is general and can be used to model several contracts. As a proof-of-concept we use this framework to model some of the Bitcoin contracts from our recent previous work. We then automatically verify their security in UPPAAL, finding and correcting some subtle errors that were difficult to spot by manual analysis. We hope that our work can draw the attention of researchers working on formal modeling to the problem of Bitcoin contract verification, and spark off more research on this topic.
On reaching the adiabatic limit in multifield inflation ; We calculate the scalar spectral index n_s and the tensor-to-scalar ratio r in a class of recently proposed two-field no-scale inflationary models in supergravity. We show that, in order to obtain correct predictions, it is crucial to take into account the coupling between the curvature and the isocurvature perturbations induced by the non-canonical form of the kinetic terms. This coupling enhances the curvature perturbation and suppresses the resulting tensor-to-scalar ratio to the per mille level, even for values of the slow-roll parameter epsilon ~ 0.01. Beyond these particular models, we emphasise that multifield models of inflation are a priori not predictive, unless one supplies a prescription for the post-inflationary era, or an adiabatic limit is reached before the end of inflation. We examine the conditions that enabled us to actually derive predictions in the models under study, by analysing the various contributions to the effective isocurvature mass in general two-field inflationary models. In particular, we point out a universal geometrical contribution that is important at the end of inflation, and which can be directly extracted from the inflationary Lagrangian, independently of a specific trajectory. Eventually, we point out that spectator fields can lead to oscillatory features in the time-dependent power spectra at the end of inflation. We demonstrate how these features can be modelled semi-analytically, as well as the theoretical uncertainties they can entail.
Auxiliary Field Loop Expansion for the Effective Action for Stochastic Partial Differential Equations II ; We extend our discussion of effective actions for stochastic partial differential equations to systems that give rise to a Martin-Siggia-Rose (MSR) type of action. This type of action naturally arises when one uses the many-body formalism of Doi and Peliti to describe reaction-diffusion models which undergo transitions into the absorbing state and which are described by a Master equation. These models include predator-prey models and directed percolation models, as well as chemical kinetic models. For classical dynamical systems with external noise it is always possible to construct an MSR action. Using a path integral representation for the generator of the correlation functions, we show how, by introducing a composite auxiliary field, one can generate an auxiliary field loop expansion for the effective action for both types of systems. As a specific example of the Doi-Peliti formalism we determine the effective action for the chemical annihilation and diffusion process A + A -> 0. For the external noise problem we evaluate the effective action for the Cole-Hopf form of the Kardar-Parisi-Zhang (KPZ) equation as well as for the Ginzburg-Landau model of spin relaxation. We determine, for arbitrary spatial dimension d, the renormalized effective potential in leading order in the auxiliary field loop expansion (LOAF), and also determine the renormalization group equation for the running of the reaction rate coupling constant for arbitrary d. We compare our results with known perturbative and non-perturbative results for the renormalization group equations.
Formation of starspots in self-consistent global dynamo models: Polar spots on cool stars ; Observations of cool stars reveal dark spot-like features on their surfaces. Compared to sunspots, starspots can be bigger or cover a larger fraction of the stellar surface. While sunspots appear only at low latitudes, starspots are also found in polar regions, in particular on rapidly rotating stars. Sunspots are believed to result from the eruption of magnetic flux tubes rising from the deep interior of the Sun. The strong magnetic field locally reduces convective heat transport to the solar surface. Such flux-tube models have also been invoked to explain starspot properties. However, these models use several simplifications, and so far the generation of either sunspots or starspots has not been demonstrated in a self-consistent simulation of stellar magnetic convection. Here we show that direct numerical simulations of a distributed dynamo operating in a density-stratified rotating spherical shell can spontaneously generate cool spots. Convection in the interior of the model produces a large-scale magnetic field which interacts with near-surface granular convection, leading to strong concentrations of magnetic flux and the formation of starspots. Prerequisites for the formation of sizeable high-latitude spots in the model are sufficiently strong density stratification and rapid rotation. Our model presents an alternative mechanism for starspot formation by distributed dynamo action.
Signal of Right-Handed Charged Gauge Bosons at the LHC ; We point out that the recent excess observed in searches for a right-handed gauge boson W_R at CMS can be explained in a left-right symmetric model with D parity violation. In a class of SO(10) models, in which D parity is broken at a high scale, the left-right gauge symmetry breaking scale is naturally small, and at a few TeV the gauge coupling constants satisfy g_R ≈ 0.6 g_L. Such models therefore predict a right-handed charged gauge boson W_R in the TeV range with a suppressed gauge coupling, as compared to the usually assumed manifest left-right symmetry case g_R = g_L. The recent CMS data show excess events which are consistent with the cross section predicted in the D parity breaking model for 1.9 TeV < M_{W_R} < 2.4 TeV. If the excess is confirmed, it would in general be a direct signal of new physics beyond the Standard Model at the LHC. A TeV-scale W_R would for example not only rule out SU(5) grand unified theory models. It would also imply B-L violation at the TeV scale, which would be the first evidence for baryon or lepton number violation in nature, and it has strong implications for the generation of neutrino masses and the baryon asymmetry in the Universe.
Quantifying the influence of conformational uncertainty in biomolecular solvation ; Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational active space random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in the solvent-accessible surface area for the bovine trypsin inhibitor protein system, and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
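The surrogate-construction step can be sketched compactly: build a tensor-product Hermite (gPC) basis in standard normal variables standing in for conformational coordinates, then fit sparse coefficients. Plain Lasso is used below as a stand-in for the compressive sensing solver, and the target property is a synthetic function chosen to be sparse in the basis; nothing here reproduces the paper's actual pipeline.

```python
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

def pc_design(Xi, degree):
    """Tensor-product probabilists' Hermite basis evaluated at the
    standard-normal 'conformational' variables Xi (n_samples x d)."""
    d = Xi.shape[1]
    multis = [m for m in product(range(degree + 1), repeat=d)
              if sum(m) <= degree]
    cols = []
    for m in multis:
        col = np.ones(len(Xi))
        for j, mj in enumerate(m):
            c = np.zeros(mj + 1); c[mj] = 1.0
            col = col * hermeval(Xi[:, j], c)   # He_{mj}(xi_j)
        cols.append(col)
    return np.column_stack(cols), multis

rng = np.random.default_rng(4)
Xi = rng.normal(size=(120, 4))
# Hypothetical target property, sparse in the basis: He_1 and He_2 terms.
y = 1.0 + 2.0 * Xi[:, 0] + 0.5 * (Xi[:, 2] ** 2 - 1.0)
Phi, multis = pc_design(Xi, degree=3)
fit = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi, y)
for m, c in zip(multis, fit.coef_):
    if abs(c) > 0.05:
        print(m, round(c, 3))                   # recovers the active terms
```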
Comparison of multifluid moment models with Particle-in-Cell simulations of collisionless magnetic reconnection ; We introduce an extensible multifluid moment model in the context of collisionless magnetic reconnection. This model evolves the full Maxwell equations, and simultaneously moments of the Vlasov-Maxwell equation for each species in the plasma. Effects like electron inertia and pressure gradient are self-consistently embedded in the resulting multifluid moment equations, without the need to explicitly solve a generalized Ohm's law. Two limits of the multifluid moment model are discussed, namely, the five-moment limit that evolves a scalar pressure for each species, and the ten-moment limit that evolves the full anisotropic, non-gyrotropic pressure tensor for each species. We first demonstrate, analytically and numerically, that the five-moment model reduces to the widely used Hall magnetohydrodynamics (Hall MHD) model under the assumptions of vanishing electron inertia, infinite speed of light, and quasi-neutrality. Then, we compare ten-moment and fully kinetic Particle-in-Cell (PIC) simulations of a large scale Harris sheet reconnection problem, where the ten-moment equations are closed with a local linear collisionless approximation for the heat flux. The ten-moment simulation gives reasonable agreement with the PIC results regarding the structures and magnitudes of the electron flows, the polarities and magnitudes of elements of the electron pressure tensor, and the decomposition of the generalized Ohm's law. Possible ways to improve the simple local closure towards a nonlocal, fully three-dimensional closure are also discussed.
Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media ; It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and the interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Towards a System Model for UML. The Structural Data Model ; In this document we introduce a system model as the basis for a semantic model for UML 2.0. The system model is supposed to form the core and foundation of the UML semantics definition. For that purpose the basic system model is targeted towards UML. This document is structured as follows: in the rest of Section 1 we discuss the general approach and highlight the main decisions. This section is important for understanding the rest of this document. Section 2 contains the actual definition of the structural part of the system model. It is built in layers as described in Section 1. For brevity, we defer deeper discussions to the Appendix in Section 4. This document is part of a project on the formalization of UML 2.0 in cooperation between Queen's University Kingston and the Technische Universitäten Braunschweig and München. This version 1.0 is the result of a longer effort to define the structure, behavior and interaction of objectoriented, possibly distributed systems, abstract enough to be of general value, but also in sufficient detail for a semantic foundation of the UML. We also wish to thank the external reviewers, and especially Gregor von Bochmann, Gregor Engels and Sébastien Gérard for their help.
Technical Report A Methodology for Studying 802.11p VANET Broadcasting Performance with Practical Vehicle Distribution ; In a Vehicular Adhoc Network VANET, the performance of the communication protocol is influenced heavily by the vehicular density dynamics. However, most previous works on VANET performance modeling paid little attention to vehicle distribution, or simply assumed a homogeneous car distribution. It is obvious that vehicles are distributed nonhomogeneously along a road segment due to traffic signals and speed limits at different portions of the road, as well as vehicle interactions that are significant on busy streets. In light of this inadequacy, we present in this paper an original methodology to study the broadcasting performance of 802.11p VANETs with practical vehicle distribution in urban environments. Firstly, we adopt empirically verified stochastic traffic models, which incorporate the effect of urban settings such as traffic lights and vehicle interactions on car distribution and generate practical vehicular density profiles. Corresponding 802.11p protocol and performance models are then developed. When coupled with the traffic models, they can predict the broadcasting efficiency, delay, and throughput performance of 802.11p VANETs based on knowledge of the car density at each location on the road. Extensive simulation is conducted to verify the accuracy of the developed mathematical models with the consideration of vehicle interaction. In general, our results demonstrate the applicability of the proposed methodology for modeling protocol performance in practical signalized road networks, and shed light on the design and development of future communication protocols and networking functions for VANETs.
Models for the kmetric dimension ; For an undirected graph GV,E, a vertex x in V separates vertices u and v where u,v in V, u neq v if their distances to x are not equal. Given an integer parameter k geq 1, a set of vertices L subseteq V is a feasible solution if for every pair of distinct vertices u,v, there are at least k distinct vertices x1,x2,...,xk in L, each separating u and v. Such a feasible solution is called a landmark set, and the kmetric dimension of a graph is the minimal cardinality of a landmark set for the parameter k. The case k = 1 is a classic problem, where in its weighted version, each vertex v has a nonnegative weight, and the goal is to find a landmark set with minimal total weight. We generalize the problem for k geq 2, introducing two models, and we seek solutions to both the weighted version and the unweighted version of this more general problem. In the allpairs model AP, k separations are needed for every pair of distinct vertices of V, while in the nonlandmarks model NL, such separations are required only for pairs of distinct vertices in V setminus L. We study the weighted and unweighted versions for both models AP and NL, for path graphs, complete graphs, complete bipartite graphs, and complete wheel graphs, for all values of k geq 2. We present algorithms for these cases, thus demonstrating the difference between the two new models, and the differences between the cases k = 1 and k geq 2.
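A direct feasibility check for the allpairs model follows from the definition above; the adjacency-dict encoding and helper names below are illustrative.

```python
# Sketch: verify that L is a feasible k-landmark set (all-pairs model) by
# checking that every pair of distinct vertices is separated by at least k
# landmarks. Plain BFS distances on an unweighted graph.
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_k_landmark_set(adj, L, k):
    dists = {x: bfs_dist(adj, x) for x in L}          # distances from landmarks
    for u, v in combinations(adj, 2):
        separators = sum(1 for x in L if dists[x][u] != dists[x][v])
        if separators < k:
            return False
    return True

# Path graph on 5 vertices: each endpoint separates every pair, so k = 2 works.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(is_k_landmark_set(path, {0, 4}, k=2))  # True
```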
Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs ; A major challenge in nextgeneration industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized Polynomial Chaos PC. These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental benchmarks often used for the validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies between deterministic simulations and experimental measurements could be partially explained by the variability in the model input. We finally present a synthetic experiment studying the variancebased sensitivity of the wave load on an offshore structure to a number of input uncertainties. In the numerical examples presented, the PC methods have exhibited fast convergence, suggesting that the problem is amenable to being analyzed with such methods.
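The nonintrusive idea reduces to running the deterministic solver at quadrature nodes and projecting onto an orthogonal polynomial basis. A one-parameter sketch with a uniform input and Legendre polynomials; the stand-in `model` replaces the expensive wave solver and is purely illustrative.

```python
# Sketch: non-intrusive PC for a scalar model output with one uniform input,
# using Gauss-Legendre quadrature to project onto Legendre polynomials.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def model(xi):                      # hypothetical QoI, xi ~ U(-1, 1)
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

P, Q = 5, 12                        # PC order, quadrature nodes
nodes, weights = leggauss(Q)
evals = model(nodes)                # Q solver runs, reused for all coefficients

# c_k = (2k+1)/2 * integral f(xi) P_k(xi) dxi  (Legendre normalization)
coeffs = np.array([
    (2 * k + 1) / 2 * np.sum(weights * evals * legval(nodes, np.eye(k + 1)[k]))
    for k in range(P + 1)
])

mean = coeffs[0]                                        # E[f] = c_0
var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, P + 1) + 1))
print(f"PC mean ~ {mean:.5f}, PC variance ~ {var:.6f}")
```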
Synthesis from Formal Partial Abstractions ; Developing complex software systems is costly, timeconsuming and errorprone. Model driven development MDD promises to improve software productivity, timeliness, quality and cost through the transformation of abstract application models to codelevel implementations. However, it remains unreasonably difficult to build the modeling languages and translators required for software synthesis. This difficulty, in turn, limits the applicability of MDD, and makes it hard to achieve reliability in MDD tools. This dissertation research seeks to reduce the cost, broaden the applicability, and increase the quality of modeldriven development systems by embedding modeling languages within established formal languages and by using the analyzers provided with such languages for synthesis purposes, reducing the need for hand coding of translators. This dissertation, in particular, explores the proposed approach using relational logic as expressed in Alloy as the general specification language, and the Alloy Analyzer as the generalpurpose analyzer. Synthesis is thus driven by finitedomain constraint satisfaction. One important aspect of this work is its focus on partial specifications of particular aspects of the system, such as application architectures and target platforms, and on the synthesis of partial code bases from such specifications. Contributions of this work include novel insights, methods and tools for (1) synthesizing architectural models from abstract application models; (2) synthesizing partial, platformspecific application frameworks from application architectures; and (3) synthesizing objectrelational mapping tradeoff spaces and database schemas for databasebacked objectoriented applications.
Model of deep nonvolcanic tremor part II episodic tremor and slip ; Bursts of tremor accompany a moving slip pulse in Episodic Tremor and Slip ETS events. The sources of this nonvolcanic tremor NVT are largely unknown. We have developed a model describing the mechanism of NVT generation. According to this model, NVT is a reflection of resonanttype oscillations excited in a fault at certain depth ranges. From a mathematical viewpoint, tremor phonons and slip pulses solitons are two different solutions of the sineGordon equation describing frictional processes inside a fault. In an ETS event, a moving slip pulse generates tremor due to interaction with structural heterogeneities in a fault and to failures of small asperities. Observed tremor parameters, such as the central frequency and the frequency attenuation curve, are associated with fault parameters and conditions, such as elastic modulus, effective normal stress, penetration hardness and friction. The model prediction of NVT frequency content is consistent with observations. In the framework of this model it is possible to explain the complicated pattern of tremor migration, including rapid tremor propagation and reverse tremor migration. Migration along the strike direction is associated with movement of the slip pulse. Rapid tremor propagation in the slipparallel direction is associated with movement of kinks along a 2D slip pulse. A slip pulse, pinned in some places, can fragment into several pulses, causing tremor associated with some of these pulse fragments to move opposite to the main propagation direction. The model predicts that the frequency content of tremor during an ETS event is slightly different from the frequency content of ambient tremor and tremor triggered by earthquakes.
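For concreteness, the dimensionless sineGordon equation admits both solution families mentioned above; these are textbook forms, and the paper's fault-specific scalings are not reproduced here.

```latex
% Dimensionless sine-Gordon equation for the slip phase u(x,t):
\begin{equation}
  u_{tt} - u_{xx} + \sin u = 0 .
\end{equation}
% Slip pulse: a kink soliton moving at speed v < 1,
\begin{equation}
  u(x,t) = 4\arctan\left[\exp\left(\pm\frac{x - vt}{\sqrt{1 - v^2}}\right)\right];
\end{equation}
% tremor phonons: small oscillations u \propto e^{i(kx - \omega t)} with dispersion
\begin{equation}
  \omega^2 = 1 + k^2 .
\end{equation}
```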
Twisted spectral triple for the Standard Model and spontaneous breaking of the Grand Symmetry ; Grand symmetry models in noncommutative geometry have been introduced to explain how to generate minimally i.e. without adding new fermions an extra scalar field beyond the standard model, which both stabilizes the electroweak vacuum and makes the computation of the mass of the Higgs compatible with its experimental value. In this paper, we use ConnesMoscovici twisted spectral triples to cure a technical problem of the grand symmetry, namely the appearance, together with the extra scalar field, of unbounded vectorial terms. The twist makes these terms bounded and, thanks to a twisted version of the firstorder condition that we introduce here, also permits us to understand the breaking to the standard model as a dynamical process induced by the spectral action. This is a spontaneous breaking from a pregeometric PatiSalam model to the almostcommutative geometry of the standard model, with two Higgslike fields scalar and vector.
On the Universality of Jordan Centers for Estimating Infection Sources in Tree Networks ; Finding the infection sources in a network when we only know the network topology and the infected nodes, but not the rates of infection, is a challenging combinatorial problem, and it is even more difficult in practice, where the underlying infection spreading model is usually unknown a priori. In this paper, we are interested in finding a source estimator that is applicable to various spreading models, including the SusceptibleInfected SI, SusceptibleInfectedRecovered SIR, SusceptibleInfectedRecoveredInfected SIRI, and SusceptibleInfectedSusceptible SIS models. We show that under the SI, SIR and SIRI spreading models and with mild technical assumptions, the Jordan center is the infection source associated with the most likely infection path in a tree network with a single infection source. This conclusion applies for a wide range of spreading parameters, while it holds for regular trees under the SIS model with homogeneous infection and recovery rates. Since the Jordan center does not depend on the infection, recovery and reinfection rates, it can be regarded as a universal source estimator. We also consider the case where there are k > 1 infection sources, generalize the Jordan center definition to a kJordan center set, and show that this is an optimal infection source set estimator in a tree network for the SI model. Simulation results on various general synthetic networks and real world networks suggest that Jordan centerbased estimators consistently outperform heuristics based on betweenness, closeness, distance, degree, eigenvector, and pagerank centrality, even if the network is not a tree.
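The estimator itself is simple to state: the Jordan center is the vertex minimizing the maximum distance (eccentricity) to the observed infected nodes. A BFS-based sketch on a toy tree; the adjacency-dict encoding is illustrative.

```python
# Sketch: Jordan center of the infected node set in a tree.
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def jordan_center(adj, infected):
    ecc = {
        u: max(bfs_dist(adj, u)[w] for w in infected)  # eccentricity w.r.t. infected
        for u in adj
    }
    return min(ecc, key=ecc.get)

tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(jordan_center(tree, infected={2, 3, 4}))  # -> 0 (nodes 0 and 1 tie at 2)
```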
Towards a possible solution for the coincidence problem fG gravity as background ; In this article we address the wellknown cosmic coincidence problem in the framework of fG gravity. In order to achieve this, an interaction between dark energy and dark matter is considered. A setup is designed and a constraint equation is obtained which generates the fG models that do not suffer from the coincidence problem. Due to the absence of a universally accepted interaction term introduced by a fundamental theory, the study is conducted over three different forms of logically chosen interaction terms. To illustrate the setup, three widely known models of fG gravity are taken into consideration and the problem is studied under the designed setup. The study reveals that the popular fG gravity models do not admit a satisfactory solution of the longstanding coincidence problem, a major setback for them as successful models of the universe. Finally, two nonconventional models of fG gravity have been proposed and studied in the framework of the designed setup. It is seen that a complete solution of the coincidence problem is achieved for these models. The study also reveals that the binteraction term is preferable to the other interactions, due to its greater compliance with recent observational data.
A consistent hierarchy of generalized kinetic equation approximations to the chemical master equation applied to surface catalysis ; We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finiterange spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one and twodimensional numerical examples. Lower levels within the hierarchy, which consider shorter spatial correlations, are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo KMC methods for onedimensional systems, while predicting system dynamics and steady states similar to those of KMC. We then test the hierarchy on a twodimensional model for the oxidation of CO on RuO2(110), showing that loworder truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
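The lowest level of such a hierarchy is an ordinary mean-field kinetic model. A toy CO-oxidation sketch with hypothetical rate constants, not the paper's RuO2(110) parameterization.

```python
# Sketch: mean-field phenomenological kinetics for CO oxidation coverages
# (theta_CO, theta_O). Rate constants are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

k_ads_CO, k_des_CO, k_ads_O, k_rxn = 1.0, 0.1, 0.5, 2.0

def rhs(t, theta):
    co, o = theta
    empty = max(1.0 - co - o, 0.0)                      # fraction of empty sites
    d_co = k_ads_CO * empty - k_des_CO * co - k_rxn * co * o
    d_o = 2.0 * k_ads_O * empty ** 2 - k_rxn * co * o   # dissociative O2 adsorption
    return [d_co, d_o]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], dense_output=True)
print("steady coverages ~", np.round(sol.y[:, -1], 3))
```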
Chaotic Inflation from Nonlinear Sigma Models in Supergravity ; We present a common solution to the puzzles of the light Higgs or quark masses and the need for a shift symmetry and large field values in high scale chaotic inflation. One way to protect, for example, the Higgs from a large supersymmetric mass term is if it is the NambuGoldstone boson NGB of a nonlinear sigma model. However, it is well known that nonlinear sigma models NLSMs with nontrivial Kahler transformations are problematic to couple to supergravity. An additional field is necessary to make the Kahler potential of the NLSM invariant in supergravity. This field must have a shift symmetry, making it a candidate for the inflaton or axion. We give an explicit example of such a model for the coset space SU(3)/(SU(2) × U(1)), with the Higgs as the NGB, including breaking the inflaton's shift symmetry and producing a chaotic inflation potential. This construction can also be applied to other models, such as one based on E7/(SO(10) × U(1) × U(1)), which incorporates the first two generations of light quarks as the NambuGoldstone multiplets, and has an axion in addition to the inflaton. Along the way we clarify and connect previous work on understanding NLSMs in supergravity and the origin of the extra field, which is the inflaton here, including a connection to WittenBagger quantization. This framework has wide applications to model building; a light particle from an NLSM requires, in supergravity, exactly the structure needed for a chaotic inflaton or an axion.
ChernSimonslike Theories of Gravity ; In this PhD thesis, we investigate a wide class of threedimensional massive gravity models and show how most of them, if not all, can be brought into a firstorder, ChernSimonslike formulation. This allows for a general analysis of the Hamiltonian for this wide class of models. From the ChernSimonslike perspective, the known higherderivative theories of 3D massive gravity, like Topologically Massive Gravity and New Massive Gravity, can be extended to a wider class of models. These models are shown to be free of possibly ghostlike scalar excitations and exhibit improved behavior with respect to Antide Sitter holography; the new models have regions in their parameter space where a positive boundary central charge is compatible with positive mass and energy for the massive spin2 mode. We discuss the construction of several of these improved models in detail and derive the constraints needed to remove any unphysical degrees of freedom. We also comment on the AdSLCFT correspondence which arises when the massive spin2 mode becomes massless and is replaced by a logarithmic mode. Most of the results have been published elsewhere; however, a special effort is made here to present the aspects of ChernSimonslike theories in a pedagogical and comprehensive way.
Critical correlation functions for the 4dimensional weakly selfavoiding walk and ncomponent varphi4 model ; We extend and apply a rigorous renormalisation group method to study critical correlation functions, on the 4dimensional lattice Z^4, for the weakly coupled ncomponent varphi4 spin model for all n geq 1, and for the continuoustime weakly selfavoiding walk. For the varphi4 model, we prove that the critical twopoint function has |x|^{-2} Gaussian decay asymptotically, for n geq 1. We also determine the asymptotic decay of the critical correlations of the squares of components of varphi, including the logarithmic corrections to Gaussian scaling, for n geq 1. The above extends previously known results for n = 1 to all n geq 1, and also observes new phenomena for n > 1, all with a new method of proof. For the continuoustime weakly selfavoiding walk, we determine the decay of the critical generating function for the watermelon network consisting of p weakly mutually and selfavoiding walks, for all p geq 1, including the logarithmic corrections. This extends a previously known result for p = 1, for which there is no logarithmic correction, to a much more general setting. In addition, for both models, we study the approach to the critical point and prove the existence of logarithmic corrections to scaling for certain correlation functions. Our method gives a rigorous analysis of the weakly selfavoiding walk as the n = 0 case of the varphi4 model, and provides a unified treatment of both models, and of all the above results.
The Unity of Cosmological Attractors ; Recently, several broad classes of inflationary models have been discovered whose cosmological predictions are stable with respect to significant modifications of the inflaton potential. Some classes of models are based on a nonminimal coupling to gravity. These models, which we will call xiattractors, describe universal cosmological attractors including Higgs inflation and induced inflation models. Another class describes conformal attractors including Starobinsky inflation and Tmodels and their generalization to alphaattractors. The aim of this paper is to elucidate the common denominator of these models: their attractor properties stem from a pole of order two in the kinetic term of the inflaton field in the Einstein frame formulation, prior to switching to the canonical variables. We point out that alpha and universal attractors differ in the subleading corrections to the kinetic term. As a final step towards the unification of xi and alpha attractors, we introduce a special class of xiattractors which is fully equivalent to alphaattractors with the identification alpha = 1 + 1/(6 xi). There is no theoretical lower bound on r in this class of models.
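In equations, the order-two pole referred to above takes the following standard alphaattractor form; this is a sketch of the mechanism, not a quotation from the paper.

```latex
% Einstein-frame Lagrangian with an order-two pole in the kinetic term
% (the alpha-attractor form):
\begin{equation}
  \mathcal{L} = \sqrt{-g}\left[\tfrac{1}{2}R
    - \frac{(\partial\phi)^2}{2\left(1 - \phi^2/6\alpha\right)^2}
    - V(\phi)\right].
\end{equation}
% The canonical field \varphi, with \phi = \sqrt{6\alpha}\tanh(\varphi/\sqrt{6\alpha}),
% pushes the pole to \varphi \to \infty, flattening V into a plateau and making
% the predictions insensitive to the details of the original potential.
```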
Representing Data by a Mixture of Activated Simplices ; We present a new model which represents data as a mixture of simplices. Simplices are geometric structures that generalize triangles. We give a simple geometric understanding that allows us to learn a simplicial structure efficiently. Our method requires that the data are unit normalized and thus lie on the unit sphere. We show that under this restriction, building a model with simplices amounts to constructing a convex hull inside the sphere whose boundary facets are close to the data. We call the boundary facets of the convex hull that are close to the data Activated Simplices. While the total number of bases used to build the simplices is a parameter of the model, the dimensions of the individual activated simplices are learned from the data. Simplices can have different dimensions, which facilitates the modeling of inhomogeneous data sources. The simplicial structure is bounded; this is appropriate for modeling data with constraints, such as the fact that human elbows cannot bend more than 180 degrees. The simplices are easy to interpret, and extremes within the data can be discovered among the vertices. The method provides good reconstruction and regularization. It supports good nearest neighbor classification and it allows realistic generative models to be constructed. It achieves stateoftheart results on benchmark datasets, including 3D poses and digits.
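A geometric sketch of the construction: unit-normalize the data, take the convex hull, and treat boundary facets near the data as candidate activated simplices. The activation proxy below is purely illustrative; in the actual model the simplicial structure is learned, not read off the hull.

```python
# Sketch: convex-hull facets of unit-normalized data as candidate simplices.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # unit-normalize onto sphere

hull = ConvexHull(X)                               # hull inside the sphere
print("boundary facets (candidate simplices):", len(hull.simplices))

# Geometric proxy for "activation": a facet touches a data point as a vertex.
counts = np.bincount(hull.simplices.ravel(), minlength=len(X))
print("points on the hull boundary:", np.sum(counts > 0))
```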
Modeling the dynamics of tidallyinteracting binary neutron stars up to merger ; The data analysis of the gravitational wave signals emitted by coalescing neutron star binaries requires the availability of an accurate analytical representation of the dynamics and waveforms of these systems. We propose an effectiveonebody EOB model that describes the general relativistic dynamics of neutron star binaries from the early inspiral up to merger. Our EOB model incorporates an enhanced attractive tidal potential motivated by recent analytical advances in the postNewtonian and gravitational selfforce description of relativistic tidal interactions. No fitting parameters are introduced for the description of tidal interaction in the late, strongfield dynamics. We compare the model energetics and the gravitational wave phasing with new highresolution multiorbit numerical relativity simulations of equalmass configurations with different equations of state. We find agreement within the uncertainty of the numerical data for all configurations. Our model is the first semianalytical model which captures the tidal amplification effects close to merger. It thereby provides the most accurate analytical representation of binary neutron star dynamics and waveforms currently available.
Electronic health record phenotyping improves detection and screening of type 2 diabetes in the general United States population A crosssectional, unselected, retrospective study ; Objectives In the United States, 25% of people with type 2 diabetes are undiagnosed. Conventional screening models use limited demographic information to assess risk. We evaluated whether electronic health record EHR phenotyping could improve diabetes screening, even when records are incomplete and data are not recorded systematically across patients and practice locations. Methods In this crosssectional, retrospective study, data from 9,948 US patients between 2009 and 2012 were used to develop a prescreening tool to predict current type 2 diabetes, using multivariate logistic regression. We compared (1) a full EHR model containing prescribed medications, diagnoses, and traditional predictive information, (2) a restricted EHR model where medication information was removed, and (3) a conventional model containing only traditional predictive information BMI, age, gender, hypertensive and smoking status. We additionally used a randomforests classification model to judge whether including additional EHR information could increase the ability to detect patients with type 2 diabetes on new patient samples. Results Using a patient's full or restricted EHR to detect diabetes was superior to using basic covariates alone p < 0.001. The random forests model replicated on outofbag data. Migraines and cardiac dysrhythmias were negatively associated with type 2 diabetes, while acute bronchitis and herpes zoster were positively associated, among other factors. Conclusions EHR phenotyping resulted in markedly superior detection of type 2 diabetes in a general US population, could increase the efficiency and accuracy of disease screening, and is capable of picking up signals in realworld records.
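A schematic of the comparison design on synthetic stand-in data; feature names, effect sizes, and the extra EHR indicators are hypothetical, and only the modeling pattern mirrors the study.

```python
# Sketch: conventional-covariate vs. fuller-EHR models, logistic regression
# and random forest, scored by AUC on held-out data. All data simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X_basic = np.column_stack([               # BMI, age, gender, hypertension, smoking
    rng.normal(28, 5, n), rng.normal(55, 12, n),
    rng.integers(0, 2, n), rng.integers(0, 2, n), rng.integers(0, 2, n)])
X_ehr = np.column_stack([X_basic, rng.integers(0, 2, (n, 10))])  # +diagnoses/meds
logit = 0.08 * (X_basic[:, 0] - 28) + 0.03 * (X_basic[:, 1] - 55) + X_ehr[:, 7]
y = rng.random(n) < 1 / (1 + np.exp(-(logit - 1.5)))             # synthetic label

for name, X in [("conventional", X_basic), ("full EHR", X_ehr)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(name,
          "LR AUC %.3f" % roc_auc_score(yte, lr.predict_proba(Xte)[:, 1]),
          "RF AUC %.3f" % roc_auc_score(yte, rf.predict_proba(Xte)[:, 1]))
```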
New Supersoft Supersymmetry Breaking Operators and a Solution to the mu Problem ; We propose a framework of generalized supersoft supersymmetry breaking. Supersoft models, with Dtype supersymmetry breaking and heavy Dirac gauginos, are considerably less constrained by the LHC searches than the well studied MSSM. These models also ameliorate the supersymmetric flavor and CP problems. However, previously considered mechanisms for obtaining a naturally sized Higgsino mass parameter, namely mu, in supersoft models have been relatively complicated and contrived. Obtaining a 125 GeV mass for the lightest Higgs boson has also been difficult. Additional issues with the supersoft scenario arise from the fact that these models contain new scalars in the adjoint representation of the standard model, which may obtain negative squaredmasses, breaking color and generating too large a Tparameter. In this work we introduce new operators into supersoft models which can potentially solve all these issues. A novel feature of this framework is that the new mu term can give unequal masses to the up and down type Higgs fields, and the Higgsinos can be much heavier than the Higgs boson without finetuning. However, unequal Higgs and Higgsino masses also remove some attractive features of supersoft susy.
Dark Matter Constraints on Composite Higgs Models ; In composite Higgs models the pseudoNambuGoldstone Boson pNGB nature of the Higgs field is an interesting alternative for explaining the smallness of the electroweak scale with respect to the beyond the Standard Model scale. In nonminimal models additional pNGB states are present and can be a Dark Matter DM candidate, if there is an approximate symmetry suppressing their decay. Here we assume that the low energy effective theory for scales much below the compositeness scale corresponds to the Standard Model with a pNGB Higgs doublet and a pNGB DM multiplet. We derive general effective DM Lagrangians for several possible DM representations under the SM gauge group, including the singlet, doublet and triplet cases. Within this framework we discuss how the DM observables (relic abundance, direct and indirect detection) constrain the dimension6 operators induced by the strong sector, assuming that DM behaves as a Weakly Interacting Massive Particle WIMP and that the relic abundance is settled through the freezeout mechanism. We also apply our general results to two specific cosets, SO(6)/SO(5) and SO(6)/(SO(4) × SO(2)), which contain a singlet and a doublet DM candidate, respectively. In particular we show that if compositeness is a solution to the little hierarchy problem, representations larger than the triplet are strongly disfavored. Furthermore, we find that composite models can have viable DM candidates with much smaller direct detection crosssections than their noncomposite counterparts, making DM detection much more challenging.
Leptogenesis in an SU5 x A5 Golden Ratio Flavour Model ; In this paper we discuss a minor modification of a previous SU5 x A5 flavour model which exhibits at leading order golden ratio mixing and sum rules for the heavy and the light neutrino masses. Although this model could predict all mixing angles well, it fails to generate a sufficiently large baryon asymmetry via the leptogenesis mechanism. We repair this deficit here, discuss model building aspects and give analytical estimates for the generated baryon asymmetry before we perform a numerical parameter scan. Our setup has only a few parameters in the lepton sector. This leads to specific constraints and correlations between the neutrino observables. For instance, we find that in the model considered only the neutrino mass spectrum with normal mass ordering and values of the lightest neutrino mass in the interval 10-18 meV are compatible with the current data on the neutrino oscillation parameters. With the introduction of only one NLO operator, the model can successfully accommodate, even at the 1 sigma level, the current data on neutrino masses and neutrino mixing, and the observed value of the baryon asymmetry.
Accidental Symmetries and the Conformal Bootstrap ; We study an N = 2 supersymmetric generalization of the threedimensional critical O(N) vector model that is described by N+1 chiral superfields with superpotential W = g1 X sum_i Zi^2 + g2 X^3. By combining the tools of the conformal bootstrap with results obtained through supersymmetric localization, we argue that this model exhibits a symmetry enhancement at the infrared superconformal fixed point due to g2 flowing to zero. This example is special in that the existence of an infrared fixed point with g1, g2 neq 0, which does not exhibit symmetry enhancement, does not generally lead to any obvious unitarity violations or other inconsistencies. We do show, however, that the Ftheorem excludes the models with g1, g2 neq 0 for N > 5. The conformal bootstrap provides a stronger constraint and excludes such models for N > 2. We provide evidence that the g2 = 0 models, which have the enhanced O(N) × U(1) symmetry, come close to saturating the bootstrap bounds. We extend our analysis to fractional dimensions, where we motivate the nonexistence of the g1, g2 neq 0 models by studying them perturbatively in the 4 - epsilon expansion.
Bayesian Estimation Under Informative Sampling ; Bayesian analysis is increasingly popular for use in social science and other application areas where the data are observations from an informative sample. An informative sampling design leads to inclusion probabilities that are correlated with the response variable of interest. Model inference performed on the observed sample taken from the population will be biased for the population generative model under informative sampling since the balance of information in the sample data is different from that for the population. Typical approaches to account for an informative sampling design under Bayesian estimation are often difficult to implement because they require reparameterization of the hypothesized generating model, or focus on design, rather than modelbased, inference. We propose to construct a pseudoposterior distribution that utilizes sampling weights based on the marginal inclusion probabilities to exponentiate the likelihood contribution of each sampled unit, which weights the information in the sample back to the population. Our approach provides a nearly automated estimation procedure applicable to any model specified by the data analyst for the population and retains the population model parameterization and posterior sampling geometry. We construct conditions on known marginal and pairwise inclusion probabilities that define a class of sampling designs where L1 consistency of the pseudoposterior is guaranteed. We demonstrate our method on an application concerning the Bureau of Labor Statistics Job Openings and Labor Turnover Survey.
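A minimal sketch of the pseudoposterior construction for a normal mean: exponentiate each unit's likelihood contribution by its normalized sampling weight and sample with a random-walk Metropolis step. The data, inclusion probabilities, and prior below are toy stand-ins, not the survey application.

```python
# Sketch: pseudo-posterior for a normal mean under informative sampling.
# Weights w_i proportional to 1/pi_i, normalized to sum to n.
import numpy as np

rng = np.random.default_rng(4)
n = 500
y = rng.normal(2.0, 1.0, n)                     # observed responses
pi = 1 / (1 + np.exp(-(y - 2.0)))               # inclusion prob. tied to y (informative)
w = (1 / pi) * n / np.sum(1 / pi)               # normalized sampling weights

def log_pseudo_post(mu):
    loglik = -0.5 * (y - mu) ** 2               # N(mu, 1) kernel per unit
    return np.sum(w * loglik) - 0.5 * mu ** 2 / 100.0   # weak N(0, 100) prior

mu, chain = 0.0, []
for _ in range(5000):                           # random-walk Metropolis
    prop = mu + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_pseudo_post(prop) - log_pseudo_post(mu):
        mu = prop
    chain.append(mu)

print("pseudo-posterior mean of mu ~ %.3f" % np.mean(chain[1000:]))
```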