Supersymmetry Breaking and Inflation from Higher Curvature Supergravity ; The generic embedding of the R + R^2 higher curvature theory into old-minimal supergravity leads to models with rich vacuum structure in addition to its well-known inflationary properties. When the model enjoys an exact R-symmetry, there is an inflationary phase with a single supersymmetric Minkowski vacuum. This appears to be a special case of a more generic setup, which in principle may include R-symmetry violating terms which are still of pure supergravity origin. By including the latter terms, we find new supersymmetry breaking vacua compatible with single-field inflationary trajectories. We discuss explicitly two such models and we illustrate how the inflaton is driven towards the supersymmetry breaking vacuum after the inflationary phase. In these models the gravitino mass is of the same order as the inflaton mass. Therefore, pure higher curvature supergravity may not only accommodate the proper inflaton field, but it may also provide the appropriate hidden sector for supersymmetry breaking after inflation has ended.
Modeling Creativity: Case Studies in Python ; Modeling Creativity (doctoral dissertation, 2013) explores how creativity can be represented using computational approaches. Our aim is to construct computer models that exhibit creativity in an artistic context, that is, that are capable of generating or evaluating an artwork (visual or linguistic), an interesting new idea, a subjective opinion. The research was conducted in 2008-2012 at the Computational Linguistics Research Group (CLiPS), University of Antwerp, under the supervision of Prof. Walter Daelemans. Prior research was also conducted at the Experimental Media Research Group (EMRG), St. Lucas University College of Art & Design, Antwerp, under the supervision of Lucas Nijs. Modeling Creativity examines creativity from a number of different perspectives: from its origins in nature, which is essentially blind, to humans and machines, and from generating creative ideas to evaluating and learning their novelty and usefulness. We will use a hands-on approach with case studies and examples in the Python programming language.
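The generate-then-evaluate loop the abstract describes can be sketched in Python; this is a toy illustration of ours (the concepts and the "seen" set are made up, not taken from the dissertation), not the dissertation's actual code:

```python
import random

# Toy sketch of the generate/evaluate loop: blindly recombine concepts
# (variation), then score novelty against previously seen combinations
# (selection). All data here is hypothetical.
concepts = ["bird", "cloud", "engine", "river", "mirror"]
seen = {frozenset(["bird", "cloud"]), frozenset(["river", "mirror"])}

def generate(rng):
    """Blind variation: pick two concepts at random."""
    return frozenset(rng.sample(concepts, 2))

def novelty(idea):
    """Selective retention: 1.0 for unseen combinations, 0.0 otherwise."""
    return 0.0 if idea in seen else 1.0

rng = random.Random(7)
ideas = [generate(rng) for _ in range(10)]
best = max(ideas, key=novelty)
print(sorted(best), novelty(best))
```

Real case studies would of course replace the novelty score with learned measures of novelty and usefulness.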
Dynamical generation of the weak and Dark Matter scales from strong interactions ; Assuming that mass scales arise in nature only via dimensional transmutation, we extend the dimensionless Standard Model by adding vector-like fermions charged under a new strong gauge interaction. Their non-perturbative dynamics generates a mass scale that is transmitted to the elementary Higgs boson by electroweak gauge interactions. In its minimal version the model has the same number of parameters as the Standard Model, predicts that the electroweak symmetry gets broken, predicts new physics in the multi-TeV region compatible with all existing bounds, and provides two Dark Matter candidates, stable thanks to accidental symmetries: a composite scalar in the adjoint of SU(2)_L and a composite singlet fermion; their thermal relic abundance is predicted to be comparable to the measured cosmological DM abundance. Some models of this type allow for extra Yukawa couplings; DM candidates remain even if explicit masses are added.
A Riemannian approach to the membrane limit in non-Euclidean elasticity ; Non-Euclidean, or incompatible, elasticity is an elastic theory for pre-stressed materials, which is based on modeling the elastic body as a Riemannian manifold. In this paper we derive a dimensionally-reduced model of the so-called membrane limit of a thin incompatible body. By generalizing classical dimension reduction techniques to the Riemannian setting, we are able to prove a general theorem that applies to an elastic body of arbitrary dimension, arbitrary slender dimension, and arbitrary metric. The limiting model implies the minimization of an integral functional defined over immersions of a limiting submanifold in Euclidean space. The limiting energy only depends on the first derivative of the immersion, and for frame-indifferent models, only on the resulting pullback metric induced on the submanifold, i.e., there are no bending contributions.
Electron beam induced current in the high injection regime ; Electron beam induced current (EBIC) is a powerful technique which measures the charge collection efficiency of photovoltaics with sub-micron spatial resolution. The exciting electron beam results in a high generation rate density of electron-hole pairs, which may drive the system into nonlinear regimes. An analytic model is presented which describes the EBIC response when the total electron-hole pair generation rate exceeds the rate at which carriers are extracted by the photovoltaic cell, and charge accumulation and screening occur. The model provides a simple estimate of the onset of the high injection regime in terms of the material resistivity and thickness, and provides a straightforward way to predict the EBIC lineshape in the high injection regime. The model is verified by comparing its predictions to numerical simulations in 1 and 2 dimensions. Features of the experimental data, such as the magnitude and position of maximum collection efficiency versus electron beam current, are consistent with the 3-dimensional model.
Nonminimally Coupled Tachyon Field in Teleparallel Gravity ; We perform a full investigation of the dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of the teleparallel equivalent of general relativity, which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor point P_4, corresponding to an accelerating universe with the property that the dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. The existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of the tachyon field.
Dynamical D-Terms in Supergravity ; Most phenomenological models of supersymmetry breaking rely on nonzero F-terms rather than nonzero D-terms. An important reason why D-terms are often neglected is that it turns out to be very challenging to realize D-terms at energies parametrically smaller than the Planck scale in supergravity. As we demonstrate in this paper, all conventional difficulties may, however, be overcome if the generation of the D-term is based on strong dynamics. To illustrate our idea, we focus on a certain class of vector-like SUSY breaking models that enjoy a minimal particle content and which may be easily embedded into more complete scenarios. We are then able to show that, upon gauging a global flavor symmetry, an appropriate choice of Yukawa couplings readily allows one to dynamically generate a D-term at an almost arbitrary energy scale. This includes in particular the natural and consistent realization of D-terms around, above and below the scale of grand unification in supergravity, without the need for fine-tuning of any model parameters. Our construction might therefore bear the potential to open up a new direction for model building in supersymmetry and early universe cosmology.
RHOMOLO: A Dynamic Spatial General Equilibrium Model for Assessing the Impact of Cohesion Policy ; The paper presents the newly developed dynamic spatial general equilibrium model of the European Commission, RHOMOLO. The model incorporates several elements from economic geography in a novel and theoretically consistent way. It describes the location choice of different types of agents and captures the interplay between agglomeration and dispersion forces in determining the spatial equilibrium. The model is also dynamic, as it allows for the accumulation of factors of production, human capital and technology. This makes RHOMOLO well suited for simulating policy scenarios related to the EU cohesion policy and for the analysis of its impact on the regions and the Member States of the Union.
R Package multgee: A Generalized Estimating Equations Solver for Multinomial Responses ; The R package multgee implements the local odds ratios generalized estimating equations (GEE) approach proposed by Touloumis et al. (2013), a GEE approach for correlated multinomial responses that circumvents theoretical and practical limitations of the GEE method. A main strength of multgee is that it provides GEE routines for both ordinal (ordLORgee) and nominal (nomLORgee) responses, while relevant software packages in R and SAS are restricted to ordinal responses under a marginal cumulative link model specification. In addition, multgee offers a marginal adjacent categories logit model for ordinal responses and a marginal baseline category logit model for nominal responses. Further, utility functions are available to ease the local odds ratios structure selection (intrinsic.pars) and to perform a Wald-type goodness-of-fit test between two nested GEE models (waldts). We demonstrate the application of multgee through a clinical trial with clustered ordinal multinomial responses.
UltraViolet Freeze-in ; If dark matter is thermally decoupled from the visible sector, the observed relic density can potentially be obtained via freeze-in production of dark matter. Typically in such models it is assumed that the dark matter is connected to the thermal bath through feeble renormalisable interactions. Here, rather, we consider the case in which the hidden and visible sectors are coupled only via non-renormalisable operators. This is arguably a more generic realisation of the dark matter freeze-in scenario, as it does not require the introduction of diminutive renormalisable couplings. We examine general aspects of freeze-in via non-renormalisable operators in a number of toy models and present several motivated implementations in the context of Beyond the Standard Model physics. Specifically, we study models related to the Peccei-Quinn mechanism and Z' portals.
q-Gaussian model of default ; We present the q-Gaussian generalization of the Merton framework, which takes into account slow fluctuations of the volatility of the firm's market value of financial assets. The minimal version of the model depends on the Tsallis entropic parameter q and the generalized distance to default. The empirical foundation and implications of the model are illustrated by the study of 645 North American industrial firms during the financial crisis, 2006-2012. All defaulters in the sample have an exceptionally large q, corresponding to unusually fat-tailed unconditional distributions of log-asset-returns. Using Receiver Operating Characteristic curves, we demonstrate the high forecasting power of the model in the prediction of 1-year defaults. Our study suggests that the level of complexity of the realized time series, quantified by q, should be taken into account to improve valuations of default risk.
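The role of q as a fat-tail parameter can be illustrated numerically. Below is a minimal sketch of ours (not the paper's code) of the Tsallis q-exponential, the building block of the q-Gaussian, which recovers the ordinary exponential as q -> 1 and decays as a power law for q > 1:

```python
import math

# Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)),
# reducing to exp(x) in the limit q -> 1.
def q_exp(x, q):
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# Tail weight of an unnormalized (q-)Gaussian at x = 4 standard units:
x = 4.0
gauss_tail = q_exp(-x * x / 2, 1.0)   # ordinary Gaussian weight
fat_tail = q_exp(-x * x / 2, 1.5)     # q-Gaussian weight at q = 1.5
print(gauss_tail, fat_tail)
```

The q = 1.5 weight is orders of magnitude larger than the Gaussian one at the same distance, which is why large q flags firms with unusually fat-tailed log-asset-returns.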
A general holographic metal-superconductor phase transition model ; We study the scalar condensation of a general holographic superconductor model in an AdS black hole background away from the probe limit. We find that the model parameters, together with the scalar mass and backreaction, can completely determine the order of the phase transitions. In addition, we observe two types of discontinuities of the scalar operator in the case of first order phase transitions. We analyze in detail the effects of the scalar mass and backreaction on the formation of discontinuities and arrive at an approximate relation between the threshold model parameters. Furthermore, we obtain superconductor solutions corresponding to higher energy states and examine the stability of these superconductor solutions.
State and Parameter Estimation of Partially Observed Linear Ordinary Differential Equations with Deterministic Optimal Control ; Ordinary Differential Equations are a simple but powerful framework for modeling complex systems. Parameter estimation from time series can be done by Nonlinear Least Squares or other classical approaches, but this can give unsatisfactory results because the inverse problem can be ill-posed, even when the differential equation is linear. Following recent approaches that use approximate solutions of the ODE model, we propose a new method that converts parameter estimation into an optimal control problem: our objective is to determine a control and a parameter that are as close as possible to the data. We then derive a criterion that strikes a balance between discrepancy with the data and with the model, and we minimize it by using optimization in function spaces; our approach is related to the so-called Deterministic Kalman Filtering, but different from the usual statistical Kalman filtering. We show the root-n consistency and asymptotic normality of the estimators for the parameter and for the states. Experiments on a toy model and on a real case show that our approach is generally more accurate and more reliable than Nonlinear Least Squares and Generalized Smoothing, even in misspecified cases.
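For reference, the Nonlinear Least Squares baseline the abstract compares against can be sketched for a one-parameter linear ODE; this is a toy of ours (synthetic noiseless data, grid search in place of a real optimizer), not the paper's optimal-control method:

```python
import math

# Toy NLS fit for x'(t) = -theta * x(t), whose solution is x(t) = exp(-theta*t).
theta_true = 0.7
times = [0.1 * i for i in range(20)]
data = [math.exp(-theta_true * t) for t in times]  # noiseless synthetic data

def sse(theta):
    """Sum of squared errors between the ODE solution and the observations."""
    return sum((math.exp(-theta * t) - y) ** 2 for t, y in zip(times, data))

# Grid search over theta in (0, 2); a real implementation would use an optimizer.
theta_hat = min((0.01 * k for k in range(1, 200)), key=sse)
print(theta_hat)
```

With noisy or partially observed data this criterion can be ill-posed, which is the gap the paper's optimal-control formulation addresses.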
A Generative Model of Software Dependency Graphs to Better Understand Software Evolution ; Software systems are composed of many interacting elements. A natural way to abstract over software systems is to model them as graphs. In this paper we consider software dependency graphs of object-oriented software and we study one topological property: the degree distribution. Based on the analysis of ten software systems written in Java, we show that there exist completely different systems that have the same degree distribution. Then, we propose a generative model of software dependency graphs which synthesizes graphs whose degree distribution is close to the empirical ones observed in real software systems. This model gives us novel insights into the potential fundamental rules of software evolution.
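The topological property under study can be made concrete with a small sketch; the dependency graph below is made up for illustration, not one of the ten Java systems analyzed:

```python
from collections import Counter

# Hypothetical class-level dependency graph: class -> classes it depends on.
deps = {
    "App":    ["Logger", "Db", "Cache"],
    "Db":     ["Logger"],
    "Cache":  ["Logger"],
    "Logger": [],
}

def degree_distribution(graph):
    """Map each total degree (in + out) to the number of nodes having it."""
    degree = Counter()
    for src, targets in graph.items():
        degree[src] += len(targets)  # out-degree
        for dst in targets:
            degree[dst] += 1         # in-degree
    return Counter(degree.values())

print(degree_distribution(deps))
```

Two structurally different graphs can produce the same Counter here, which is the paper's observation at the scale of real systems.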
Notes on Noise Contrastive Estimation and Negative Sampling ; Estimating the parameters of probabilistic models of language such as maxent models and probabilistic neural models is computationally difficult since it involves evaluating partition functions by summing over an entire vocabulary, which may be millions of word types in size. Two closely related strategies, noise contrastive estimation (NCE; Mnih and Teh, 2012; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) and negative sampling (Mikolov et al., 2012; Goldberg and Levy, 2014), have emerged as popular solutions to this computational problem, but some confusion remains as to which is more appropriate and when. This document explicates their relationships to each other and to other estimation techniques. The analysis shows that, although they are superficially similar, NCE is a general parameter estimation technique that is asymptotically unbiased, while negative sampling is best understood as a family of binary classification models that are useful for learning word representations but not as a general-purpose estimator.
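The binary-classification view of negative sampling can be sketched for a single training pair; the scores below are hypothetical dot products, and this toy of ours omits the noise-distribution correction that distinguishes NCE:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_sampling_loss(score_pos, scores_neg):
    """-log sigma(s+) - sum_k log sigma(-s_k):
    the observed pair is classified against k sampled noise pairs."""
    loss = -math.log(sigmoid(score_pos))
    for s in scores_neg:
        loss += -math.log(sigmoid(-s))
    return loss

# A high-scoring positive with low-scoring negatives yields a small loss.
print(neg_sampling_loss(5.0, [-4.0, -3.5]))
```

Because no partition function appears, the per-pair cost is O(k) in the number of negatives rather than O(|vocabulary|), which is the computational point of both strategies.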
Classical, quantum, and phenomenological aspects of dark energy models ; The origin of the accelerating expansion of the Universe is one of the biggest conundrums of fundamental physics. In this paper we review vacuum energy issues as the origin of the accelerating expansion, generally called dark energy, and give an overview of alternatives, a large number of which can be classified as interacting scalar field models. We review the properties of these models both as classical fields and as quantum condensates in the framework of nonequilibrium quantum field theory. Finally, we review the phenomenology of these models with the goal of discriminating between them.
A Bayesian Multivariate Functional Dynamic Linear Model ; We present a Bayesian approach for modeling multivariate, dependent functional data. To account for the three dominant structural features in the data (functional, time-dependent, and multivariate components), we extend hierarchical dynamic linear models for multivariate time series to the functional data setting. We also develop Bayesian spline theory in a more general constrained optimization framework. The proposed methods identify a time-invariant functional basis for the functional observations, which is smooth and interpretable, and can be made common across multivariate observations for additional information sharing. The Bayesian framework permits joint estimation of the model parameters, provides exact inference (up to MCMC error) on specific parameters, and allows generalized dependence structures. Sampling from the posterior distribution is accomplished with an efficient Gibbs sampling algorithm. We illustrate the proposed framework with two applications: (1) multi-economy yield curve data from the recent global recession, and (2) local field potential brain signals in rats, for which we develop a multivariate functional time series approach for multivariate time-frequency analysis. Supplementary materials, including R code and the multi-economy yield curve data, are available online.
Sublinear-Time Approximate MCMC Transitions for Probabilistic Programs ; Probabilistic programming languages can simplify the development of machine learning techniques, but only if inference is sufficiently scalable. Unfortunately, Bayesian parameter estimation for highly coupled models such as regressions and state-space models still scales poorly; each MCMC transition takes linear time in the number of observations. This paper describes a sublinear-time algorithm for making Metropolis-Hastings (MH) updates to latent variables in probabilistic programs. The approach generalizes recently introduced approximate MH techniques: instead of subsampling data items assumed to be independent, it subsamples edges in a dynamically constructed graphical model. It thus applies to a broader class of problems and interoperates with other general-purpose inference techniques. Empirical results, including confirmation of sublinear per-transition scaling, are presented for Bayesian logistic regression, nonlinear classification via joint Dirichlet process mixtures, and parameter estimation for stochastic volatility models with state estimation via particle MCMC. All three applications use the same implementation, and each requires under 20 lines of probabilistic code.
Relations between different notions of degrees of freedom of a quantum system and its classical model ; There are at least three different notions of degrees of freedom (DF) that are important in the comparison of quantum and classical dynamical systems. One is related to the type of dynamical equations and inequivalent initial conditions, another to the structure of the system, and the third to the properties of dynamical orbits. In this paper, definitions and comparisons in classical and quantum systems of the three types of DF are formulated and discussed. In particular, we concentrate on the comparison of the number of the so-called dynamical DF in a quantum system and its classical model. The comparison involves analyses of relations between integrability of the classical model, dynamical symmetry and separability of the quantum and the corresponding classical systems, and dynamical generation of appropriately defined quantumness. The analysis is conducted using illustrative typical systems. A conjecture summarizing the observed relation between the generation of quantumness by the quantum dynamics and dynamical properties of the classical model is formulated.
On Flavour and Naturalness of Composite Higgs Models ; We analyse the interplay of the constraints imposed on flavour-symmetric Composite Higgs models by Naturalness considerations and the constraints derived from Flavour Physics and Electroweak Precision Tests. Our analysis is based on the Effective Field Theory which describes the Higgs as a pseudo-Nambu-Goldstone boson and also includes the composite fermionic resonances. Within this approach one is able to identify the directions in the parameter space where the U(3)-symmetric flavour models can pass the current experimental constraints, without conflicting with the light Higgs mass. We also derive the general features of the U(2)-symmetric models required by the experimental bounds, in the cases of an elementary and a totally composite t_R. An effect in the Z bbar b coupling, which can potentially allow for sizable deviations in Z to bbar b decay parameters without modifying flavour physics observables, is identified. We also present an analysis of the mixed scenario, where the top quark mass is generated via Partial Compositeness while the light quark masses are Technicolor-like.
A numerical approach to model-independently reconstruct f(R) functions through cosmographic data ; The challenging issue of determining the correct f(R) among several possibilities is here revisited by means of numerical reconstructions of the modified Friedmann equations around the redshift interval z in [0,1]. Frequently, a severe degeneracy between f(R) approaches occurs, since different paradigms correctly explain present-time dynamics. To set the initial conditions on the f(R) functions, we involve the use of the so-called cosmography of the Universe, i.e. the technique of fixing constraints on the observable Universe by comparing expanded observables with current data. This powerful approach is essentially model independent, and correspondingly we obtain a model-independent reconstruction of f(z) classes within the interval z in [0,1]. To allow the Hubble rate to evolve around z <= 1, we consider three relevant frameworks of effective cosmological dynamics, i.e. the LambdaCDM model, the CPL parametrization, and a polynomial approach to dark energy. Finally, cumbersome algebra permits passing from f(z) to f(R), and the general outcome of our work is the determination of a viable f(R) function that effectively describes the observed Universe dynamics.
Functional clustering in nested designs: Modeling variability in reproductive epidemiology studies ; We discuss functional clustering procedures for nested designs, where multiple curves are collected for each subject in the study. We start by considering the application of standard functional clustering tools to this problem, which leads to groupings based on the average profile for each subject. After discussing some of the shortcomings of this approach, we present a mixture model based on a generalization of the nested Dirichlet process that clusters subjects based on the distribution of their curves. By using mixtures of generalized Dirichlet processes, the model induces a much more flexible prior on the partition structure than other popular model-based clustering methods, allowing for different rates of introduction of new clusters as the number of observations increases. The methods are illustrated using hormone profiles from multiple menstrual cycles collected for women in the Early Pregnancy Study.
Inflation, de Sitter Landscape and Super-Higgs effect ; We continue developing cosmological models involving nilpotent chiral superfields, which provide a simple unified description of inflation and the current acceleration of the universe in the supergravity context. We describe here a general class of models with a positive cosmological constant at the minimum of the potential, such that supersymmetry is spontaneously broken in the direction of the nilpotent superfield S. In the unitary gauge, these models have a simple action where all highly nonlinear fermionic terms of the classical Volkov-Akulov action disappear. We present masses for bosons and fermions in these theories. By a proper choice of parameters in this class of models, one can fit any possible set of the inflationary parameters n_s and r, and a broad range of values of the vacuum energy V_0, which plays the role of the dark energy, and achieve a controllable level of supersymmetry breaking. This can be done without introducing light moduli, such as Polonyi fields, which often lead to cosmological problems in phenomenological supergravity.
Varying constants quantum cosmology ; We discuss minisuperspace models within the framework of varying physical constants theories, including a Lambda-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using the specific ansatze for the variability of the constants: c(a) = c_0 a^n and G(a) = G_0 a^q. We find that most of the varying-c and varying-G minisuperspace potentials are of the tunneling type, which allows the use of the WKB approximation of quantum mechanics. Using this method we show that the probability of tunneling of the universe from nothing (a = 0) to a Friedmann geometry with scale factor a(t) is large for growing-c models and is strongly suppressed for diminishing-c models. As for varying G, the probability of tunneling is large for diminishing G, while it is small for increasing G. In general, both varying c and varying G change the probability of tunneling in comparison to the standard matter content (cosmological term, dust, radiation) universe models.
Cosmological Evolution of Statistical System of Scalar Charged Particles ; In this paper we consider a macroscopic model of a plasma of scalar charged particles, obtained by means of statistical averaging of the microscopic equations of particle dynamics in a scalar field. On the basis of the kinetic equations obtained from averaging, and their strict integral consequences, a self-consistent set of equations is formulated which describes the self-gravitating plasma of scalar charged particles. The corresponding closed cosmological model is obtained and numerically simulated for the cases of a one-component degenerate Fermi gas and a two-component Boltzmann system. It is shown that the results depend weakly on the choice of statistical model. Two specific features of the cosmological evolution of a statistical system of scalar charged particles are obtained relative to the cosmological evolution of minimal interaction models: the appearance of giant bursts of the invariant cosmological acceleration Omega in the time interval 8x10^3 - 2x10^4 t_Pl, and strong heating (by 3-8 orders of magnitude) of the statistical system at the same times. The presence of such features can modify the quantum theory of generation of cosmological gravitational perturbations.
Nonlocal gravity and comparison with observational datasets ; We study the cosmological predictions of two recently proposed nonlocal modifications of General Relativity. Both models have the same number of parameters as LambdaCDM, with a mass parameter m replacing the cosmological constant. We implement the cosmological perturbations of the nonlocal models into a modification of the CLASS Boltzmann code, and we make a full comparison to CMB, BAO and supernova data. We find that the nonlocal models fit these datasets as well as LambdaCDM. For both nonlocal models, parameter estimation using Planck+JLA+BAO data gives a value of H_0 higher than in LambdaCDM, and in better agreement with the values obtained from local measurements.
An Abstract Interpretation-based Model of Tracing Just-In-Time Compilation ; Tracing just-in-time compilation is a popular compilation technique for the efficient implementation of dynamic languages, which is commonly used for JavaScript, Python and PHP. We provide a formal model of tracing JIT compilation of programs using abstract interpretation. Hot path detection corresponds to an abstraction of the trace semantics of the program. The optimization phase corresponds to a transform of the original program that preserves its trace semantics up to an observation modeled by some abstraction. We provide a generic framework to express dynamic optimizations and prove them correct. We instantiate it to prove the correctness of dynamic type specialization and constant variable folding. We show that our framework is more general than the model of tracing compilation introduced by Guo and Palsberg (2011), based on operational bisimulations.
Prophet: A Speculative Multithreading Execution Model with Architectural Support Based on CMP ; Speculative multithreading (SpMT) has been proposed as a promising method to exploit the hardware potential of Chip Multiprocessors (CMP). It is a thread level speculation (TLS) model mainly depending on software and hardware co-design. This paper studies the speculative thread-level parallelism of general purpose programs, and a speculative multithreading execution model called Prophet is presented. The architectural support for the Prophet execution model is designed based on CMP. In Prophet, inter-thread data dependences are predicted by a pre-computation slice (p-slice) to reduce RAW violations. The Prophet multi-versioning cache system, along with a thread state control mechanism in the architectural support, is utilized for buffering the speculative data, and a snooping bus based cache coherence protocol is used to detect data dependence violations. The simulation-based evaluation shows that the Prophet system can achieve significant speedup for general-purpose programs.
Superbounce and Loop Quantum Cosmology Ekpyrosis from Modified Gravity ; As is known, in modified cosmological theories of gravity many of the cosmologies which could not be generated by standard Einstein gravity can be consistently described by F(R) theories. Using known reconstruction techniques, we investigate which F(R) theories can lead to a Hubble parameter describing two types of cosmological bounces: the superbounce model, related to supergravity and non-supersymmetric models of contracting ekpyrosis, and also the Loop Quantum Cosmology modified ekpyrotic model. Since our method is an approximate method, we investigate the problem at large and small curvatures. As we evince, both models yield power-law reconstructed F(R) gravities, with the most interesting new feature being that both lead to accelerating cosmologies in the large curvature approximation. The mathematical properties of some of the Friedmann-Robertson-Walker spacetimes M that describe superbounce-like cosmologies are also pointed out, with regard to the group of curvature collineations CC(M).
Quantum integrability in the multistate Landau-Zener problem ; We analyze Hamiltonians linear in the time variable for which the multistate Landau-Zener problem is known to have an exact solution. We show that they either belong to families of mutually commuting Hamiltonians polynomial in time or reduce to the 2x2 Landau-Zener problem, which is considered trivially integrable. The former category includes the equal slope, bowtie, and generalized bowtie models. For each of these models we explicitly construct the corresponding families of commuting matrices. The equal slope model is a member of an integrable family that consists of the maximum possible number (for a given matrix size) of commuting matrices linear in time. The bowtie model belongs to a previously unknown, similarly maximal family of quadratic commuting matrices. We thus conjecture that quantum integrability, understood as the existence of nontrivial parameter-dependent commuting partners, is a necessary condition for Landau-Zener solvability. Descendants of the 2x2 Landau-Zener Hamiltonian are, e.g., general SU(2) and SU(1,1) Hamiltonians, the time-dependent linear chain, and linear, nonlinear, and double oscillators. We explicitly obtain solutions to all these Landau-Zener problems from the 2x2 case.
Qualitative analysis and characterization of two cosmologies including scalar fields ; The problem of dark energy can be roughly stated as the proposition and validation of a cosmological model that can explain the phenomenon of the accelerated expansion of the Universe. This problem is an open discussion topic in modern physics. One of the most common approaches is that of Dark Energy (DE), a matter component still unknown, with repulsive character to explain the accelerated expansion, which fills about 2/3 of the total content of the Universe. In this thesis two cosmological models are investigated: a nonminimally coupled quintessence field, based on a Scalar-Tensor Theory of gravity, formulated in the Einstein frame, and a quintom dark energy model, based on General Relativity. A normalization and parametrization procedure is introduced for each model, in order to investigate the flow properties of an associated autonomous system of ordinary differential equations. Our study combines topological, analytical and numerical techniques. We are mainly interested in the past dynamics; however, some results concerning the intermediate and future dynamics are discussed. The mathematical results obtained have an immediate interpretation in the cosmological context.
Variational Recurrent AutoEncoders ; In this paper we propose a model that combines the strengths of RNNs and SGVB the Variational Recurrent AutoEncoder VRAE. Such a model can be used for efficient, large scale unsupervised learning on time series data, mapping the time series data to a latent vector representation. The model is generative, such that data can be generated from samples of the latent space. An important contribution of this work is that the model can make use of unlabeled data in order to facilitate supervised training of RNNs by initialising the weights and network state.
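To make the VRAE architecture concrete, here is a minimal numpy sketch of the forward pass: an RNN encoder maps a time series to the parameters of q(z|x), the SGVB reparameterization trick samples the latent vector, and an RNN decoder unrolls a sequence from it. All layer sizes and the single-layer tanh cells are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper)
T, x_dim, h_dim, z_dim = 20, 3, 16, 2

# Encoder RNN parameters
Wxh = rng.normal(0, 0.1, (h_dim, x_dim))
Whh = rng.normal(0, 0.1, (h_dim, h_dim))
W_mu = rng.normal(0, 0.1, (z_dim, h_dim))
W_lv = rng.normal(0, 0.1, (z_dim, h_dim))

# Decoder RNN parameters
Wzh = rng.normal(0, 0.1, (h_dim, z_dim))
Whh_d = rng.normal(0, 0.1, (h_dim, h_dim))
Why = rng.normal(0, 0.1, (x_dim, h_dim))

def encode(x):
    """Run the RNN over the sequence; map the last hidden state to q(z|x)."""
    h = np.zeros(h_dim)
    for t in range(x.shape[0]):
        h = np.tanh(Wxh @ x[t] + Whh @ h)
    return W_mu @ h, W_lv @ h  # mean, log-variance

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: the SGVB reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, n_steps):
    """Seed the decoder RNN state from z and unroll n_steps outputs."""
    h = np.tanh(Wzh @ z)
    out = []
    for _ in range(n_steps):
        h = np.tanh(Whh_d @ h)
        out.append(Why @ h)
    return np.stack(out)

x = rng.normal(size=(T, x_dim))   # a toy time series
mu, logvar = encode(x)
z = reparameterize(mu, logvar)    # latent vector representation
x_rec = decode(z, T)              # sequence generated from the latent space
```

Training would maximize the usual evidence lower bound (reconstruction term minus KL divergence); the reparameterization step is what lets gradients flow through the sampling of z.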
Model inftycategories I some pleasant properties of the inftycategory of simplicial spaces ; Both simplicial sets and simplicial spaces are used pervasively in homotopy theory as presentations of spaces, where in both cases we extract the underlying space by taking geometric realization. We have a good handle on the category of simplicial sets in this capacity; this is due to the existence of a suitable model structure thereon, which is particularly convenient to work with since it enjoys the technical properties of being proper and of being cofibrantly generated. This paper is devoted to showing that, if one is willing to work inftycategorically, then one can manipulate simplicial spaces exactly as one manipulates simplicial sets. Precisely, this takes the form of a proper, cofibrantly generated model structure on the inftycategory of simplicial spaces, the definition of which we also introduce here.
Cosmic expansion and structure formation in running vacuum cosmologies ; We investigate the dynamics of flat FLRW cosmological models in which the vacuum energy varies with redshift. A particularly well motivated model of this type is the socalled quantum field vacuum, in which both kinds of terms, H2 and constant, appear in the effective dark energy density, affecting the evolution of the main cosmological functions at the background and perturbation levels. Specifically, it turns out that the functional form of the quantum vacuum endows the vacuum energy with a mild dynamical evolution which could be observed nowadays and appears as dynamical dark energy. Interestingly, the lowenergy behaviour is very close to the usual LambdaCDM model, but it is by no means identical. Finally, within the framework of the quantum field vacuum we generalize the large scale structure properties, namely the growth of matter perturbations, cluster number counts, and the spherical collapse model.
GL3Based Quantum Integrable Composite Models. I. Bethe Vectors ; We consider a composite generalized quantum integrable model solvable by the nested algebraic Bethe ansatz. Using explicit formulas for the action of the monodromy matrix elements on Bethe vectors in GL3based quantum integrable models, we prove a formula for the Bethe vectors of the composite model. We show that this representation is a particular case of the general coproduct property of the weight functions Bethe vectors found in the theory of the deformed KnizhnikZamolodchikov equation.
A model for a massless higher spin field interacting with a geometrical background ; We study a very general four dimensional Field Theory model describing the dynamics of a massless higher spin N symmetric tensor field particle interacting with a geometrical background. This model is invariant under the action of an extended linear diffeomorphism. We investigate the consistency of the equations of motion, and the highest spin degrees of freedom are extracted by means of a set of covariant constraints. Moreover, the highest spin equations of motion, and in general all the highest spin field 1PI irreducible Green functions, are invariant under a chain of transformations induced by a set of N2 Ward operators, while the auxiliary fields equations of motion spoil this symmetry. The first steps towards a quantum extension of the model are discussed on the basis of Algebraic Field Theory. Technical aspects are reported in the Appendices; in particular, one of them is devoted to illustrating the spin2 case.
A new FRgravity model ; We propose a new model of modified FR gravity theory with the function FR 1beta arcsinbeta R. Constant curvature solutions corresponding to flat and de Sitter spacetimes are obtained. The Jordan and Einstein frames are considered; the potential and the mass of the scalar degree of freedom are found. We show that the flat spacetime is stable and the de Sitter spacetime is unstable. The slowroll parameters epsilon, eta, and the efold number of the model are evaluated in the Einstein frame. The scalar spectral index ns and the tensortoscalar ratio r are calculated. Critical points of the autonomous equations for the de Sitter phase and the matter dominated epoch are found and studied. We obtain the approximate solution of the equations of motion which describes the deviation from the de Sitter phase in the Jordan frame. It is demonstrated that the model passes the matter stability test.
Assisting V2V failure recovery using DevicetoDevice Communications ; This paper proposes a new solution for recovery from routing failures dead ends in Vehicle to Vehicle V2V communications through LTEassisted DevicetoDevice D2D communications. Based on the enhanced networking capabilities offered by the Intelligent Transportation Systems ITS architecture, our solution can efficiently assist V2V communications in failure recovery situations. We also derive an analytical model to evaluate generic V2V routing failure recovery. Moreover, the proposed hybrid model is simulated and compared to the generic model under different constraints of worst and best cases of D2D discovery and communication. According to our comparison and simulation results, the hybrid model decreases the delay for alarm message propagation to the destination, typically the Traffic Control Center TCC, through the Road Side Unit RSU.
An efficient particlebased online EM algorithm for general statespace models ; Estimating the parameters of general statespace models is a topic of importance for many scientific and engineering disciplines. In this paper we present an online parameter estimation algorithm obtained by casting our recently proposed particlebased, rapid incremental smoother PaRIS into the framework of online expectationmaximization EM for statespace models proposed by Cappé 2011. Previous particlebased implementations of online EM typically suffer from either the wellknown degeneracy of the genealogical particle paths or a quadratic complexity in the number of particles. However, by using the computationally efficient and numerically stable PaRIS algorithm for estimating smoothed expectations of timeaveraged sufficient statistics of the model, we obtain a fast algorithm with very limited memory requirements and a computational complexity that grows only linearly with the number of particles. The efficiency of the algorithm is illustrated in a simulation study.
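PaRIS itself is more involved, but the bootstrap particle filter it builds on can be sketched in a few lines. The following is a generic illustration on a toy linear-Gaussian state-space model — all parameters are invented for the example, and this is not the PaRIS algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian state-space model (illustrative parameters)
a, sigma_x, sigma_y, T, N = 0.9, 1.0, 1.0, 100, 500

# Simulate a latent state trajectory and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sigma_x * rng.normal()
y = x + sigma_y * rng.normal(size=T)

# Bootstrap particle filter: propagate, weight, resample
particles = rng.normal(size=N)          # draws from the prior
filt_means = []
for t in range(T):
    # Propagate each particle through the state dynamics
    particles = a * particles + sigma_x * rng.normal(size=N)
    # Weight by the Gaussian observation likelihood
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_means.append(np.sum(w * particles))
    # Multinomial resampling keeps the cost linear in N per step
    particles = particles[rng.choice(N, size=N, p=w)]

filt_means = np.array(filt_means)
```

The genealogical degeneracy mentioned in the abstract arises when smoothed expectations are read off the resampled ancestry of these particles; PaRIS avoids it by backward sampling, at the same linear-in-N cost.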
ExtremeStrike Asymptotics for General Gaussian Stochastic Volatility Models ; We consider a stochastic volatility asset price model in which the volatility is the absolute value of a continuous Gaussian process with arbitrary prescribed mean and covariance. By exhibiting a KarhunenLoeve expansion for the integrated variance, and using sharp estimates of the density of a general secondchaos variable, we derive asymptotics for the asset price density for large or small values of the variable, and study the wing behavior of the implied volatility in these models. Our main result provides explicit expressions for the first five terms in the expansion of the implied volatility. The expressions for the leading three terms are simple, and based on three basic spectraltype statistics of the Gaussian process the top eigenvalue of its covariance operator, the multiplicity of this eigenvalue, and the L2 norm of the projection of the mean function on the top eigenspace. The fourth term requires knowledge of all eigenelements. We present detailed numerics based on realistic liquidity assumptions in which classical and longmemory volatility models are calibrated based on our expansion.
Gravitational Lensing by SelfDual Black Holes in Loop Quantum Gravity ; We study gravitational lensing by a recently proposed black hole solution in Loop Quantum Gravity. We highlight the fact that the quantum gravity corrections to the Schwarzschild metric in this model evade the 'mass suppression' effects that the usual quantum gravity corrections are susceptible to, by virtue of one of the parameters in the model being dimensionless, which is unlike any other quantum gravity motivated parameter. Gravitational lensing in the strong and weak deflection regimes is studied and a sample consistency relation is presented which could serve as a test of this model. We point out that although the consistency relation for this model is qualitatively similar to what it would be in BransDicke theory, in general it can be a good discriminator between many alternative theories. Although the observational prospects do not seem very optimistic even for the case of a galactic supermassive black hole, the time delay between relativistic images for billion solar mass black holes in other galaxies might be within reach of future relativistic lensing observations.
Pseudoscalar Portal Dark Matter ; A fermion dark matter candidate with a relic abundance set by annihilation through a pseudoscalar can evade constraints from direct detection experiments. We present simplified models that realize this fact by coupling a fermion dark sector to a twoHiggs doublet model. These models are generalizations of mixed binoHiggsino dark matter in the MSSM, with more freedom in the couplings and scalar spectra. Annihilation near a pseudoscalar resonance allows a significant amount of parameter space for thermal relic dark matter compared to singletdoublet dark matter, in which the fermions couple only to the SM Higgs doublet. In a general twoHiggs doublet model, there is also freedom for the pseudoscalar to be relatively light and it is possible to obtain thermal relic dark matter candidates even below 100 GeV. In particular, we find ample room to obtain dark matter with mass around 50 GeV and fitting the Galactic Center excess in gammarays. This region of parameter space can be probed by LHC searches for heavy pseudoscalars or electroweakinos, and possibly by other new collider signals.
Planck constraints on inflation in auxiliary vector modified fR theories ; We show that the universal alphaattractor models of inflation can be realized by including an auxiliary vector field Amu for the Starobinsky model with the Lagrangian fRRR26M2. If the same procedure is applied to general modified fR theories in which the Ricci scalar R is replaced by RAmu Amubeta nablamuAmu with constant beta, we obtain the BransDicke theory with a scalar potential and the BransDicke parameter omegarm BDbeta24. We also place observational constraints on inflationary models based on auxiliary vector modified fR theories from the latest Planck measurements of the Cosmic Microwave Background CMB anisotropies in both temperature and polarization. In the modified Starobinsky model, we find that the parameter beta is constrained to be beta25 at the 68% confidence level from the bounds of the scalar spectral index and the tensortoscalar ratio.
TiMEx A Waiting Time Model for Mutually Exclusive Groups of Cancer Alterations ; Despite recent technological advances in genomic sciences, our understanding of cancer progression and its driving genetic alterations remains incomplete. Here, we introduce TiMEx, a generative probabilistic model for detecting patterns of various degrees of mutual exclusivity across genetic alterations, which can indicate pathways involved in cancer progression. TiMEx explicitly accounts for the temporal interplay between the waiting times to alterations and the observation time. In simulation studies, we show that our model outperforms previous methods for detecting mutual exclusivity. On largescale biological datasets, TiMEx identifies gene groups with strong functional biological relevance, while also proposing many new candidates for biological validation. TiMEx possesses several advantages over previous methods, including a novel generative probabilistic model of tumorigenesis, direct estimation of the probability of mutual exclusivity interaction, computational efficiency, as well as high sensitivity in detecting gene groups involving lowfrequency alterations. R code is available at www.cbg.bsse.ethz.ch/software/TiMEx.
Equation of State in a Generalized Relativistic Density Functional Approach ; The basic concepts of a generalized relativistic density functional approach to the equation of state of dense matter are presented. The model is an extension of relativistic meanfield models with densitydependent couplings. It includes explicit cluster degrees of freedom. The formation and dissolution of nuclei is described with the help of mass shifts. The model can be adapted to the description of finite nuclei in order to study the effect of alphaparticle correlations at the nuclear surface on the neutron skin thickness of heavy nuclei. Further extensions of the model to include quark degrees of freedom or an energy dependence of the nucleon selfenergies are outlined.
Pairwiselike models for nonMarkovian epidemics on networks ; In this letter, a generalization of pairwise models to nonMarkovian epidemics on networks is presented. For the case of infectious periods of fixed length, the resulting pairwise model is a system of delay differential equations DDEs, which shows excellent agreement with results based on explicit stochastic simulations of nonMarkovian epidemics on networks. Furthermore, we analytically compute a new R0like threshold quantity and an implicit analytical relation between this and the final epidemic size. In addition we show that the pairwise model and the analytic calculations can be generalized in terms of integrodifferential equations to any distribution of the infectious period, and we illustrate this by presenting a closed form expression for the final epidemic size. By showing the rigorous mathematical link between nonMarkovian network epidemics and pairwise DDEs, we provide the framework for a deeper and more rigorous understanding of the impact of nonMarkovian dynamics with explicit results for final epidemic size and threshold quantities.
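As a minimal illustration of how a fixed infectious period produces a delay term, the sketch below integrates the mean-field (not pairwise) SIR analogue with a simple Euler scheme and a history buffer: recoveries at time t are exactly the new infections at time t minus tau. Parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative parameters: transmission rate and fixed infectious period
beta, tau = 0.3, 5.0
dt, T = 0.01, 100.0
n = int(T / dt)
lag = int(tau / dt)

S = np.empty(n)
I = np.empty(n)
S[0], I[0] = 0.99, 0.01
incidence = np.zeros(n)   # new infections per unit time (the delayed quantity)

for k in range(n - 1):
    incidence[k] = beta * S[k] * I[k]
    # Individuals infected exactly tau ago recover now
    # (the initial seed has no recorded history, so this simple scheme
    # never recovers it; the effect is negligible for this illustration)
    recovery = incidence[k - lag] if k >= lag else 0.0
    S[k + 1] = S[k] - dt * incidence[k]
    I[k + 1] = I[k] + dt * (incidence[k] - recovery)

final_size = 1.0 - S[-1]   # fraction of the population ever infected
```

With beta times tau above one, prevalence rises and then declines, and the final size can be compared against an implicit final-size relation of the kind derived analytically in the letter.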
Invariance principles for operatorscaling Gaussian random fields ; Recently, Hammond and Sheffield introduced a model of correlated random walks that scale to fractional Brownian motions with longrange dependence. In this paper, we consider a natural generalization of this model to dimension dgeq 2. We define a mathbb Zdindexed random field with dependence relations governed by an underlying random graph with vertices mathbb Zd, and we study the scaling limits of the partial sums of the random field over rectangular sets. An interesting phenomenon appears depending on how fast the rectangular sets increase along different directions, different random fields arise in the limit. In particular, there is a critical regime where the limit random field is operatorscaling and inherits the full dependence structure of the discrete model, whereas in other regimes the limit random fields have at least one direction that has either invariant or independent increments, no longer reflecting the dependence structure in the discrete model. The limit random fields form a general class of operatorscaling Gaussian random fields. Their increments and path properties are investigated.
Rebuilding Factorized Information Criterion Asymptotically Accurate Marginal Likelihood ; Factorized information criterion FIC is a recently developed approximation technique for the marginal loglikelihood, which provides an automatic model selection framework for a few latent variable models LVMs with tractable inference algorithms. This paper reconsiders FIC and fills theoretical gaps of previous FIC studies. First, we reveal the core idea of FIC that allows generalization for a broader class of LVMs, including continuous LVMs, in contrast to previous FICs, which are applicable only to binary LVMs. Second, we investigate the model selection mechanism of the generalized FIC. Our analysis provides a formal justification of FIC as a model selection criterion for LVMs and also a systematic procedure for pruning redundant latent variables that have been removed heuristically in previous studies. Third, we provide an interpretation of FIC as a variational free energy and uncover a few previously unknown relationships between them. A demonstrative study on Bayesian principal component analysis is provided and numerical experiments support our theoretical results.
A new magnitudedependent ETAS model for earthquakes ; We propose a new version of the ETAS model, which we also analyze theoretically. As in the standard ETAS model, we assume the GutenbergRichter law as the probability density function for background events' magnitudes. The magnitude of triggered shocks is instead assumed to depend probabilistically on the triggering event's magnitude. To this aim, we propose a suitable probability density function, constructed so that, averaging over all triggering events' magnitudes, we obtain again the GutenbergRichter law. This ensures the validity of this law at any generation of events when ignoring past seismicity. The probabilistic dependence between the magnitude of a triggered event and that of the corresponding triggering shock is motivated by results of a statistical analysis of some Italian catalogues. In our theoretical analysis we focus on the interevent time, which plays a very important role in the assessment of seismic hazard. Using the tools of the probability generating function and Palm theory, we derive the density of the interevent time for small values.
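One simple way to build a triggered-magnitude law whose average over triggering magnitudes recovers the GutenbergRichter GR law is a mixture: with probability alpha the triggered event reuses the parent's GR quantile (hence the same magnitude), otherwise it is drawn independently from the GR law. This construction is our own illustration of the marginal-preservation requirement, not necessarily the density proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.log(10)    # GR law with b-value 1, written as a natural-log rate
alpha = 0.4       # illustrative dependence strength (not from the paper)

def gr_sample(size):
    """Magnitudes above a zero threshold under the GR (exponential) law."""
    return rng.exponential(1.0 / b, size)

def triggered_magnitude(m_parent):
    """Mixture: keep the parent's GR quantile with prob. alpha, else redraw.
    Averaging over parents recovers the GR marginal exactly."""
    if rng.random() < alpha:
        return m_parent   # same GR quantile, so the GR marginal is preserved
    return gr_sample(1)[0]

parents = gr_sample(100_000)
children = np.array([triggered_magnitude(m) for m in parents])
```

The children's empirical distribution matches GR while the parent-child correlation is close to alpha, which is the kind of magnitude dependence the catalogue analysis in the abstract motivates.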
Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projectionbased reduced order models for the compressible NavierStokes equations ; For a projectionbased reduced order model ROM of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projectionbased fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full nonlinear compressible NavierStokes equations in specific volume form as a step toward a more general formulation for problems with generic nonlinearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the full order model from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channeldriven cavity flow problem.
Dynamical generation of the PecceiQuinn scale in gauge mediation ; The PecceiQuinn PQ mechanism provides an elegant solution to the strong CP problem. However, astrophysical constraints on axions require the PQ breaking scale to be far higher than the electroweak scale. In supersymmetric models the PQ symmetry can be broken at an acceptable scale if the effective potential for the pseudomodulus in the axion multiplet develops a minimum at large enough field values. In this work we systematically classify hadronic axion models in the context of gauge mediation and study their effective potentials at one loop. We find that some models generate a PQ scale comparable to the messenger scale. Our result may prove useful for constructing fully realistic models of gauge mediation that address the strong CP problem. We also comment briefly on the cosmological aspects related to the saxion and axino, and on the quality of the PQ symmetry.
Neutrinoantineutrino Mass Splitting in the Standard Model Neutrino Oscillation and Baryogenesis ; By adding a neutrino mass term to the Standard Model, which is Lorentz and SU2times U1 invariant but nonlocal so as to evade the CPT theorem, it is shown that nonlocality within a distance scale of the Planck length, which may not be fatal to unitarity in a generic effective theory, can generate a neutrinoantineutrino mass splitting of the order of the observed neutrino mass differences, which is tested in oscillation experiments, and a nonnegligible baryon asymmetry depending on the estimate of sphaleron dynamics. The oneloop induced electronpositron mass splitting in the Standard Model is shown to be finite and estimated at sim 10^-20 eV, well below the experimental bound of 10^-2 eV. The induced CPT violation in the Kmeson in the Standard Model is expected to be even smaller and well below the experimental bound mKmbarK 0.44times 10^-18 GeV.
Relativistic electromagnetic mass models in spherically symmetric spacetime ; Under static spherically symmetric EinsteinMaxwell spacetimes of embedding class one, we explore the possibility of electromagnetic mass models in which the mass and other physical parameters have a purely electromagnetic origin Tiwari 1984, Gautreau 1985, Gron 1985. This work continues our earlier investigation Maurya 2015a, where we developed an algorithm and found three new solutions of electromagnetic mass models. In the present letter we consider different metric potentials nu and lambda and analyze them in a systematic way. It is observed that some of the previous solutions related to electromagnetic mass models are special cases of the presently obtained generalized solution set. We further verify the solution set and show that these solutions are highly applicable to compact stars as well as to understanding the structure of the electron.
Intrinsic Nonstationary Covariance Function for Climate Modeling ; Designing a covariance function that represents the underlying correlation is a crucial step in modeling complex natural systems, such as climate models. Geospatial datasets at a global scale usually suffer from nonstationarity and nonuniformly smooth spatial boundaries. A Gaussian process regression using a nonstationary covariance function has shown promise for this task, as this covariance function adapts to the variable correlation structure of the underlying distribution. In this paper, we generalize the nonstationary covariance function to address the aforementioned global scale geospatial issues. We define this generalized covariance function as an intrinsic nonstationary covariance function, because it uses intrinsic statistics of the symmetric positive definite matrices to represent the characteristic length scale and, thereby, models the local stochastic process. Experiments on a synthetic and real dataset of relative sea level changes across the world demonstrate improvements in the error metrics for the regression estimates using our newly proposed approach.
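The paper's intrinsic construction works with statistics on symmetric positive definite matrices; as a simpler point of comparison, the classical Gibbs nonstationary covariance with an input-dependent length scale is sketched below. This is a standard construction, not the paper's covariance function, and it remains a valid kernel for any positive length-scale field:

```python
import numpy as np

def gibbs_kernel(x1, x2, ell):
    """Nonstationary (Gibbs) covariance with length scale ell(x).
    Reduces to the stationary RBF kernel when ell is constant."""
    l1 = ell(x1)[:, None]
    l2 = ell(x2)[None, :]
    # Normalization keeps the kernel positive semi-definite and unit-diagonal
    pre = np.sqrt(2.0 * l1 * l2 / (l1**2 + l2**2))
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return pre * np.exp(-d2 / (l1**2 + l2**2))

# Illustrative length-scale field: smooth near the origin, rougher far away
ell = lambda x: 0.2 + 0.3 * np.abs(x)

x = np.linspace(-3.0, 3.0, 50)
K = gibbs_kernel(x, x, ell)

# Sanity check that K is a valid covariance: symmetric, PSD, unit diagonal
eigs = np.linalg.eigvalsh(K + 1e-10 * np.eye(len(x)))
```

Plugging such a K into standard Gaussian process regression lets the fitted correlation structure vary across the domain, which is the behaviour the abstract exploits for global geospatial data.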
Timedependent toroidal compactification proposals and the Bianchi type II model classical and quantum solutions ; In this work we construct an effective fourdimensional model by compactifying a tendimensional theory of gravity coupled with a real scalar dilaton field on a timedependent torus, without the contributions of fluxes as a first approximation. This approach is applied to the anisotropic cosmological Bianchi type II model, for which we study the classical coupling of the anisotropic scale factors with the two real scalar moduli produced by the compactification process. We also present some solutions to the corresponding WheelerDeWitt WDW equation in the context of Standard Quantum Cosmology, and we claim that these quantum solutions are generic in the moduli scalar fields for all Bianchi Class A models. We also give the asymptotic behavior of these quantum solutions for large values of the gravitational variables and compare it with Bohm's solutions, finding that it corresponds to the lowestorder WKB approximation.
Shearfree Anisotropic Cosmological Models in fR Gravity ; We study a class of shearfree, homogeneous but anisotropic cosmological models with imperfect matter sources in the context of fR gravity. We show that the anisotropic stresses are related to the electric part of the Weyl tensor in such a way that they balance each other. We also show that within the class of orthogonal fR models, small perturbations of shear are damped, and that the electric part of the Weyl tensor and the anisotropic stress tensor decay with the expansion as well as the heat flux of the curvature fluid. Specializing to locally rotationally symmetric spacetimes in orthonormal frames, we examine the latetime behaviour of the de Sitter universe in fR gravity. For the Starobinsky model of fR, we study the evolutionary behavior of the Universe by numerically integrating the Friedmann equation, where the initial conditions for the expansion, acceleration and jerk parameters are taken from observational data.
Trajectory generation for multicontact momentumcontrol ; Simplified models of the dynamics such as the linear inverted pendulum model LIPM have proven to perform well for biped walking on flat ground. However, for more complex tasks the assumptions of these models can become limiting. For example, the LIPM does not allow for the control of contact forces independently, is limited to coplanar contacts and assumes that the angular momentum is zero. In this paper, we propose to use the full momentum equations of a humanoid robot in a trajectory optimization framework to plan its center of mass, linear and angular momentum trajectories. The model also allows for planning desired contact forces for each endeffector in arbitrary contact locations. We extend our previous results on LQR design for momentum control by computing the linearized optimal momentum feedback law in a receding horizon fashion. The resulting desired momentum and the associated feedback law are then used in a hierarchical whole body control approach. Simulation experiments show that the approach is computationally fast and is able to generate plans for locomotion on complex terrains while demonstrating good tracking performance for the full humanoid control.
Family Gauge Boson Production at the LHC ; Family gauge boson production at the LHC is investigated according to a U3 family gauge model with twisted family number assignment. In the model we study, the family gauge boson with the lowest mass, A1 1, interacts only with the first generation leptons and the third generation quarks. The family numbers are assigned, for example, as e1, e2, e3 e, mu, tau and d1, d2, d3 b, d, s or d1, d2, d3 b, s, d. In the model, the family gauge coupling constant is fixed by relating it to the electroweak gauge coupling constant. Thus measurements of production cross sections and branching ratios of A1 1 can clearly confirm or rule out the model. We calculate the cross sections of inclusive A1 1 production and of b bar b and t bar t associated A1 1 production at sqrt s 14 TeV and 100 TeV. Using the dielectron production cross section, we discuss the determination of the diagonalizing matrices of the quark mass matrices, Uu and Ud, respectively.
Classical scale invariance in the inert doublet model ; The inert doublet model IDM is a minimal extension of the Standard Model SM that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a U1CW gauge symmetry and a complex scalar Phi. The mass scale is generated in the hidden sector via the ColemanWeinberg CW mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the Z' gauge boson on the calculation of the dark matter relic density and on the spinindependent nucleon cross section for direct detection experiments. Finally, by studying the RG equations we find regions in parameter space which remain valid all the way up to the Planck scale.
Nucleon Electric Dipole Moments in HighScale Supersymmetric Models ; The electric dipole moments EDMs of the electron and nucleons are promising probes of new physics. In generic highscale supersymmetric SUSY scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we study the effect of the CPviolating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in highscale SUSY scenarios, and we evaluate the nucleon and electron EDMs in these scenarios. We find that in generic highscale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among highscale SUSY models.
A Holistic Approach in Embedded System Development ; We present pState, a tool for developing complex embedded systems by integrating validation into the design process. The goal is to reduce validation time. To this end, qualitative and quantitative properties are specified in system models expressed as pCharts, an extended version of hierarchical state machines. These properties are specified in an intuitive way such that they can be written by engineers who are domain experts, without needing to be familiar with temporal logic. From the system model, executable code that preserves the verified properties is generated. The design is documented on the model and the documentation is passed as comments into the generated code. On the series of examples we illustrate how models and properties are specified using pState.
Everything You Always Wanted to Know About Dark Matter Elastic Scattering Through Higgs Loops But Were Afraid to Ask ; We consider a complete list of simplified models in which Majorana dark matter particles annihilate at tree level to hh or hZ final states, and calculate the loopinduced elastic scattering cross section with nuclei in each case. Expressions for these annihilation and elastic scattering cross sections are provided, and can be easily applied to a variety of UV complete models. We identify several phenomenologically viable scenarios, including dark matter that annihilates through the schannel exchange of a spinzero mediator or through the tchannel exchange of a fermion. Although the elastic scattering cross sections predicted in this class of models are generally quite small, XENON1T and LZ should be sensitive to significant regions of this parameter space. Models in which the dark matter annihilates to hh or hZ can also generate a gammaray signal that is compatible with the excess observed from the Galactic Center.
Dark Matter and Global Symmetries ; General considerations in general relativity and quantum mechanics are known to potentially rule out continuous global symmetries in the context of any consistent theory of quantum gravity. Assuming the validity of such considerations, we derive stringent bounds from gammaray, Xray, cosmicray, neutrino, and CMB data on models that invoke global symmetries to stabilize the dark matter particle. We compute uptodate, robust modelindependent limits on the dark matter lifetime for a variety of Planckscale suppressed dimensionfive effective operators. We then specialize our analysis and apply our bounds to specific models including the TwoHiggsDoublet, LeftRight, Singlet Fermionic, ZeeBabu, 331 and Radiative SeeSaw models. Assuming that i global symmetries are broken at the Planck scale, that ii the nonrenormalizable operators mediating dark matter decay have O1 couplings, that iii the dark matter is a singlet field, and that iv the dark matter density distribution is well described by a NFW profile, we are able to rule out fermionic, vector, and scalar dark matter candidates across a broad mass range keVTeV, including the WIMP regime.
Fundamental Composite 2HDM SU(N) with 4 flavours ; We present a new model of composite Higgs based on a gauged SU(N) group with 4 Dirac fermions in the fundamental representation. At low energy, the model has a global symmetry SU(4)xSU(4) broken to the diagonal SU(4), containing 2 Higgs doublets in the coset. We study in detail the generation of the top mass via four-fermion interactions, and the issue of the vacuum alignment. In particular, we prove that, without loss of generality, the vacuum can always be aligned with one doublet. Under certain conditions on the top pre-Yukawas, the second doublet, together with the additional triplets, is stable and can thus play the role of Dark Matter. This model can therefore be an example of a composite inert-2HDM model.
The effective field theory of K-mouflage ; We describe K-mouflage models of modified gravity using the effective field theory of dark energy. We show how the Lagrangian density K defining the K-mouflage models appears in the effective field theory framework, at both the exact fully nonlinear level and at the quadratic order of the effective action. We find that K-mouflage scenarios only generate the operator (delta g^00)^n at each order n. We also reverse engineer K-mouflage models by reconstructing the whole effective field theory, and the full cosmological behaviour, from two functions of the Jordan-frame scale factor in a tomographic manner. This parameterisation is directly related to the implementation of the K-mouflage screening mechanism: screening occurs when K' is large in a dense environment such as the deep matter and radiation eras. In this way, K-mouflage can be easily implemented as a calculable subclass of models described by the effective field theory of dark energy which could be probed by future surveys.
piecewiseSEM Piecewise structural equation modeling in R for ecology, evolution, and systematics ; Ecologists and evolutionary biologists are relying on an increasingly sophisticated set of statistical tools to describe complex natural systems. One such tool that has gained increasing traction in the life sciences is structural equation modeling SEM, a variant of path analysis that resolves complex multivariate relationships among a suite of interrelated variables. SEM has historically relied on covariances among variables, rather than the values of the data points themselves. While this approach permits a wide variety of model forms, it limits the incorporation of detailed specifications. Here, I present a fully documented, open-source R package piecewiseSEM that builds on the base R syntax for all current generalized linear, least-squares, and mixed effects models. I also provide two worked examples one involving a hierarchical dataset with non-normally distributed variables, and a second involving phylogenetically-independent contrasts. My goal is to provide a user-friendly and tractable implementation of SEM that also reflects the ecological and methodological processes generating data.
Sample Efficient Path Integral Control under Uncertainty ; We present a data-driven optimal control framework that can be viewed as a generalization of the path integral PI control approach. We find iterative feedback control laws without parameterization, based on a probabilistic representation of the learned dynamics model. The proposed algorithm operates in a forward-backward manner, which differentiates it from other PI-related methods that perform forward sampling to find optimal controls. Our method uses significantly fewer samples to find optimal controls compared to other approaches within the PI control family that rely on extensive sampling from given dynamics models or on trials on physical systems in a model-free fashion. In addition, the learned controllers can be generalized to new tasks without re-sampling, based on the compositionality theory for the linearly-solvable optimal control framework. We provide experimental results on three different systems and comparisons with state-of-the-art model-based methods to demonstrate the efficiency and generalizability of the proposed framework.
Functional generalized autoregressive conditional heteroskedasticity ; Heteroskedasticity is a common feature of financial time series and is commonly addressed in the model building process through the use of ARCH and GARCH processes. More recently, multivariate variants of these processes have been in the focus of research, with attention given to methods seeking an efficient and economic estimation of a large number of model parameters. Due to the need for estimation of many parameters, however, these models may not be suitable for modeling now-prevalent high-frequency volatility data. One potentially useful way to bypass these issues is to take a functional approach. In this paper, theory is developed for a new functional version of the generalized autoregressive conditionally heteroskedastic process, termed fGARCH. The main results are concerned with the structure of the fGARCH(1,1) process, providing criteria for the existence of strictly stationary solutions in both the space of square-integrable functions and the space of continuous functions. An estimation procedure is introduced and its consistency verified. A small empirical study highlights potential applications to intraday volatility estimation.
Unitarity bound in the most general two Higgs doublet model ; We investigate unitarity bounds in the most general two Higgs doublet model without a discrete Z2 symmetry or CP conservation. S-wave amplitudes for two-body elastic scatterings of Nambu-Goldstone bosons and physical Higgs bosons are calculated at high energies for all possible initial and final states (14 neutral, 8 singly-charged and 3 doubly-charged states). We obtain analytic formulae for the block-diagonalized scattering matrix by classifying the two-body scattering states using the quantum numbers conserved at high energies. Imposing the condition of perturbative unitarity on the eigenvalues of the scattering matrix, constraints on the model parameters can be obtained. We apply our results to constrain the mass range of the next-to-lightest Higgs state in the model.
Estimating Random Delays in Modbus Network Using Experiments and General Linear Regression Neural Networks with Genetic Algorithm Smoothing ; Time-varying delays adversely affect the performance of networked control systems NCS and in the worst case can destabilize the entire system. Therefore, modelling network delays is important for designing NCS. However, modelling time-varying delays is challenging because of their dependence on multiple parameters such as length, contention, connected devices, protocol employed, and channel loading. Further, these parameters are inherently random and delays vary in a nonlinear fashion with respect to time. This makes estimating random delays challenging. This investigation presents a methodology to model delays in NCS using experiments and a general regression neural network GRNN, due to its ability to capture nonlinear relationships. To compute the smoothing parameter that yields the best estimates, a genetic algorithm is used; its objective is to find the smoothing parameter that minimizes the mean absolute percentage error MAPE. Our results illustrate that the resulting GRNN is able to predict the delays with less than 3% error. The proposed delay model gives a framework to design compensation schemes for NCS subject to time-varying delays.
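The GRNN-with-GA-smoothing pipeline described above can be sketched in a few lines of Python; the synthetic data, population size and mutation scale below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # GRNN = Nadaraya-Watson kernel regression with a Gaussian kernel:
    # every training sample votes for its target, weighted by distance.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def mape(y_true, y_pred):
    # Mean absolute percentage error, the GA's fitness criterion.
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def ga_optimize_sigma(X_tr, y_tr, X_val, y_val,
                      pop_size=20, generations=30, seed=0):
    # Toy genetic algorithm over the single smoothing parameter sigma:
    # keep the best half of the population, perturb it to breed children.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.05, 2.0, size=pop_size)
    for _ in range(generations):
        fit = np.array([mape(y_val, grnn_predict(X_tr, y_tr, X_val, s))
                        for s in pop])
        elite = pop[np.argsort(fit)[:pop_size // 2]]
        children = elite + rng.normal(0.0, 0.05, elite.shape)
        pop = np.clip(np.concatenate([elite, children]), 0.05, None)
    fit = np.array([mape(y_val, grnn_predict(X_tr, y_tr, X_val, s))
                    for s in pop])
    return pop[np.argmin(fit)]
```

Because the GRNN reduces to Nadaraya-Watson kernel regression, the only free parameter is the smoothing width sigma, which is exactly what the genetic algorithm tunes against the validation MAPE.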
CRDT Correlation Ratio Based Decision Tree Model for Healthcare Data Mining ; The phenomenal growth in healthcare data has inspired us to investigate robust and scalable models for data mining. For classification problems, the Information Gain IG based Decision Tree is one of the popular choices. However, depending upon the nature of the dataset, the IG based Decision Tree may not always perform well, as it prefers the attribute with a larger number of distinct values as the splitting attribute. Healthcare datasets generally have many attributes, and each attribute generally has many distinct values. In this paper, we focus on this characteristic of the datasets while analysing the performance of our proposed approach, a variant of the Decision Tree model that uses the concept of the Correlation Ratio CR. Unlike the IG based approach, the CR based approach has no bias towards attributes with a larger number of distinct values. We have applied our model to some benchmark healthcare datasets to show the effectiveness of the proposed technique.
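As a minimal sketch (function names are our own, not the paper's), the correlation ratio used as a splitting criterion can be computed as the between-group share of the target's variance, with the target numeric or 0/1-coded:

```python
import numpy as np

def correlation_ratio(categories, values):
    # eta^2: fraction of the variance of `values` explained by grouping
    # on `categories` (between-group sum of squares over total).
    categories = np.asarray(categories)
    values = np.asarray(values, dtype=float)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    if ss_total == 0.0:
        return 0.0
    ss_between = sum(
        values[categories == c].size
        * (values[categories == c].mean() - grand_mean) ** 2
        for c in np.unique(categories)
    )
    return ss_between / ss_total

def best_split_attribute(attributes, target):
    # attributes: dict mapping attribute name -> column of categorical values.
    scores = {name: correlation_ratio(col, target)
              for name, col in attributes.items()}
    return max(scores, key=scores.get)
```

A perfectly predictive attribute scores eta^2 = 1 while an uninformative one scores 0, regardless of how many distinct values each attribute has, which is the bias-avoidance property the abstract emphasizes.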
The Non-Minimal Ekpyrotic Trispectrum ; Employing the covariant formalism, we derive the evolution equations for two scalar fields with a non-canonical field space metric up to third order in perturbation theory. These equations can be used to derive predictions for the local bi- and trispectra of multi-field cosmological models. Our main application is to ekpyrotic models in which the primordial curvature perturbations are generated via the non-minimal entropic mechanism. In these models, nearly scale-invariant entropy perturbations are generated first due to a non-minimal kinetic coupling between two scalar fields, and subsequently these perturbations are converted into curvature perturbations. Remarkably, the entropy perturbations have vanishing bi- and trispectra during the ekpyrotic phase. However, as we show, the conversion process to curvature perturbations induces local non-Gaussianity parameters fNL and gNL at levels that should be detectable by near-future observations. In fact, in order to obtain a large enough amplitude and small enough bispectrum of the curvature perturbations, as seen in current measurements, the conversion process must be very efficient. Interestingly, for such efficient conversions the trispectrum parameter gNL remains negative and typically of magnitude O(10^2)-O(10^3), resulting in a distinguishing feature of non-minimally coupled ekpyrotic models.
Estimation and inference in generalized additive coefficient models for nonlinear interactions with high-dimensional covariates ; In the low-dimensional case, the generalized additive coefficient model GACM proposed by Xue and Yang [Statist. Sinica 16 (2006) 1423-1446] has been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables. In this paper, we propose estimation and inference procedures for the GACM when the dimension of the variables is high. Specifically, we propose a groupwise penalization based procedure to distinguish significant covariates for the large-p-small-n setting. The procedure is shown to be consistent for model structure identification. Further, we construct simultaneous confidence bands for the coefficient functions in the selected model based on a refined two-step spline estimator. We also discuss how to choose the tuning parameters. To estimate the standard deviation of the functional estimator, we adopt the smoothed bootstrap method. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze an obesity data set from a genome-wide association study as an illustration.
Old Bands, New Tracks: Revisiting the Band Model for Robust Hypothesis Testing ; The density band model proposed by Kassam for robust hypothesis testing is revisited in this paper. First, a novel criterion for the general characterization of least favorable distributions is proposed, which unifies existing results. This criterion is then used to derive an implicit definition of the least favorable distributions under band uncertainties. In contrast to the existing solution, it only requires two scalar values to be determined and eliminates the need for case-by-case statements. Based on this definition, a generic fixed-point algorithm is proposed that iteratively calculates the least favorable distributions for arbitrary band specifications. Finally, three different types of robust tests that emerge from band models are discussed and a numerical example is presented to illustrate their potential use in practice.
A Generic Microscopic Theory for the Universality of the TTLS Model Meissner-Berret Ratio in Low-Temperature Glasses ; The tunneling-two-level-system TTLS model has successfully explained several universal low-temperature properties of glasses which do not exist in their crystalline counterparts. The coupling constants between the longitudinal and transverse phonon strain fields and the two-level systems are denoted gamma_l and gamma_t. The ratio gamma_l/gamma_t was observed to lie between 1.44 and 1.84 for 18 different kinds of glasses. Such a universal property cannot be explained within the TTLS model. In this paper, by developing a microscopic generic coupled block model, we show that the ratio gamma_l/gamma_t is proportional to the ratio of sound velocities c_l/c_t. We prove that the universality of gamma_l/gamma_t essentially comes from the mutual interaction between different glass blocks, independent of the microscopic structure and chemical composition of the amorphous material. In the appendix we also give a detailed correction to the coefficient of the non-elastic stress-stress interaction Lambda_{ijkl}^{ss'} which was obtained by Joffrin and Levelut (1976).
Derivative interactions and perturbative UV contributions in N Higgs Doublet Models ; We study Higgs derivative interactions in models including an arbitrary number of Higgs doublets. These interactions are generated in two ways. One is higher-order corrections in composite Higgs models; the other is integrating out heavy scalars and vectors. In the latter case, three-point couplings between the Higgs doublets and these heavy fields are the sources of the derivative interactions. The representations of the heavy particles are constrained by the requirement that they couple to the doublets. We explicitly calculate all the derivative interactions generated by integrating out these fields. Their degrees of freedom and the conditions to impose the custodial symmetry are discussed. We also study vector boson scattering processes in a couple of two-Higgs-doublet models to see experimental signals of the derivative interactions. These processes are affected differently by each heavy field.
Transfer Operators, Induced Probability Spaces, and Random Walk Models ; We study a family of discrete-time random-walk models. The starting point is a fixed generalized transfer operator R subject to a set of axioms, and a given endomorphism in a compact Hausdorff space X. Our setup includes a host of models from applied dynamical systems, and it leads to general path-space probability realizations of the initial transfer operator. The analytic data in our construction is a pair (h, lambda), where h is an R-harmonic function on X, and lambda is a given positive measure on X subject to a certain invariance condition defined from R. With this we show that there are then discrete-time random-walk realizations in explicit path-space models, each associated to a probability measure P on path space, in such a way that the initial data allows for a spectral characterization: the initial endomorphism in X lifts to an automorphism in path space, with the probability measure P quasi-invariant with respect to a shift automorphism. The latter takes the form of explicit multiresolutions in L2(P) in the sense of Lax-Phillips scattering theory.
Thermodynamic analysis of nonlinear Reissner-Nordstrom black holes ; In the present article we study the Inverse Electrodynamics Model. This model is a gauge and parity invariant nonlinear Electrodynamics theory, which respects the conformal invariance of standard Electrodynamics. This modified Electrodynamics model, when minimally coupled to General Relativity, is compatible with static and spherically symmetric Reissner-Nordstrom-like black-hole solutions. However, these black-hole solutions present more complex thermodynamic properties than their Reissner-Nordstrom counterparts in standard Electrodynamics. In particular, in the Inverse Model a new stability region, with both the heat capacity and the free energy negative, arises. Moreover, unlike the scenario in standard Electrodynamics, a single phase transition is possible for a suitable choice of the parameters of these solutions.
Implementing the inverse seesaw mechanism in SU(3)xSU(4)xU(1) gauge models ; Generating appropriately tiny neutrino masses via the inverse seesaw mechanism within the framework of a particular SU(3)xSU(4)xU(1) gauge model is the main outcome of this letter. It is achieved by simply adding three singlet exotic Majorana neutrinos to the usual ones included in the three lepton quadruplet representations. The theoretical device for treating gauge models with high symmetries is the general method of Cotaescu. It provides us with a single free parameter a to be tuned in order to get a realistic mass spectrum for the gauge bosons and charged fermions in the model. The overall breaking scale can be set around 1-10 TeV, so its phenomenology is quite testable at present facilities.
Learning in the Rational Speech Acts Model ; The Rational Speech Acts RSA model treats language use as a recursive process in which probabilistic speaker and listener agents reason about each other's intentions to enrich the literal semantics of their language along broadly Gricean lines. RSA has been shown to capture many kinds of conversational implicature, but it has been criticized as an unrealistic model of speakers, and it has so far required the manual specification of a semantic lexicon, preventing its use in natural language processing applications that learn lexical knowledge from data. We address these concerns by showing how to define and optimize a trained statistical classifier that uses the intermediate agents of RSA as hidden layers of representation forming a nonlinear activation function. This treatment opens up new application domains and new possibilities for learning effectively from data. We validate the model on a referential expression generation task, showing that the best performance is achieved by incorporating features approximating wellestablished insights about natural language generation into RSA.
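The underlying RSA recursion that the trained classifier builds on can be sketched as follows; the scalar-implicature lexicon in the usage note is the standard textbook example, not the paper's learned lexicon.

```python
import numpy as np

def normalize_rows(m):
    # Normalize each row to a probability distribution (rows of zeros stay zero).
    s = m.sum(axis=1, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m, dtype=float), where=s > 0)

def rsa(lexicon, prior, alpha=1.0):
    # lexicon[u, w] = 1 if utterance u is literally true in world w.
    literal_listener = normalize_rows(lexicon * prior)        # P_L0(w | u)
    speaker = normalize_rows(literal_listener.T ** alpha)     # P_S1(u | w)
    pragmatic_listener = normalize_rows(speaker.T * prior)    # P_L1(w | u)
    return pragmatic_listener
```

With utterances ("some", "all") and worlds ("some but not all", "all") under a uniform prior, the pragmatic listener hearing "some" assigns probability 0.75 to the "some but not all" world, the classic scalar implicature derived by this recursion.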
Parametrized measure models ; We develop a new and general notion of parametric measure models and statistical models on an arbitrary sample space Omega which does not assume that all measures of the model have the same null sets. This is given by a differentiable map from the parameter manifold M into the set of finite measures or probability measures on Omega, respectively, which is differentiable when regarded as a map into the Banach space of all signed measures on Omega. Furthermore, we give a rigorous definition of roots of measures and a natural definition of the Fisher metric and the Amari-Chentsov tensor as the pullback of tensors defined on the space of roots of measures. We show that many features, such as the preservation of this tensor under sufficient statistics and the monotonicity formula, hold even in this very general setup.
Regression for citation data: An evaluation of different methods ; Citations are increasingly used for research evaluations. It is therefore important to identify factors affecting citation scores that are unrelated to scholarly quality or usefulness so that these can be taken into account. Regression is the most powerful statistical technique to identify these factors and hence it is important to identify the best regression strategy for citation data. Citation counts tend to follow a discrete lognormal distribution and, in the absence of alternatives, have been investigated with negative binomial regression. Using simulated discrete lognormal data (continuous lognormal data rounded to the nearest integer), this article shows that a better strategy is to add one to the citations, take their log and then use the general linear (ordinary least squares) model for regression (e.g., multiple linear regression, ANOVA), or to use the generalised linear model without the log. Reasonable results can also be obtained if all the zero citations are discarded, the log is taken of the remaining citation counts and then the general linear model is used, or if the generalised linear model is used with the continuous lognormal distribution. Similar approaches are recommended for altmetric data, if it proves to be lognormally distributed.
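The recommended strategy (add one, take the log, then ordinary least squares) can be sketched on simulated discrete lognormal data; the covariate and coefficient values below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)       # a hypothetical covariate of citation counts
beta = 0.5                   # its assumed true effect on log citations

# Discrete lognormal citations: continuous lognormal rounded to an integer.
citations = np.rint(np.exp(3.0 + beta * x + rng.normal(0.0, 1.0, n)))

# Add one, take the log, then fit by ordinary least squares.
y = np.log(citations + 1.0)
X = np.column_stack([np.ones(n), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
# `slope` recovers beta approximately (slightly attenuated by the +1
# and the rounding), which is the point of the log1p-then-OLS strategy.
```

The same pipeline applied to raw counts without the transform would hand OLS a heavily skewed response, which is why the article compares it against negative binomial and generalised linear alternatives.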
Trajectory based models. Evaluation of min-max pricing bounds ; The paper studies sub- and super-replication price bounds for contingent claims defined on general trajectory based market models. No prior probabilistic or topological assumptions are placed on the trajectory space; trading is assumed to take place at finitely many occasions, though these are neither bounded in number nor necessarily equally spaced in time. For a given option, there exists an interval bounding the set of possible fair prices; such an interval exists under more general conditions than the usual no-arbitrage requirement. The paper develops a backward recursive method to evaluate the option bounds; the global min-max optimization, defining the price interval, is reduced to a local min-max optimization via dynamic programming. Trajectory sets are introduced for which existing non-probabilistic market models are nested as a particular case. Several examples are presented, and the effect of the presence of arbitrage on the price bounds is illustrated.
The Imprint of f(R) Gravity on Non-Linear Structure Formation ; We test the imprint of f(R) modified gravity on the halo mass function, using N-body simulations and a theoretical model developed in Kopp et al. 2013. We find a very good agreement between theory and simulations. We extend the theoretical model to the conditional mass function and apply it to the prediction of the linear halo bias in f(R) gravity. Using the halo model we obtain a prediction for the nonlinear matter power spectrum accurate to 10% at z = 0 and up to k = 2 h/Mpc. We also study halo profiles for the f(R) models and find a deviation from the standard general relativity result of up to 40%, depending on the halo masses and redshift. This has not been pointed out in previous analyses. Finally we study the number density and profiles of voids identified in these f(R) N-body simulations. We underline the effect of the bias and of the sampling used to identify voids. We find significant deviations from GR when measuring the f(R) void profiles with |fR0| = 10^-6.
Visual Language Modeling on CNN Image Representations ; Measuring the naturalness of images is important to generate realistic images or to detect unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network CNN based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability of modeling the complex and abstract naturalness that we feel because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability on highlevel features during eye movement. Based on this assumption, we propose a novel method to evaluate the naturalness by building a variant of Recurrent Neural Network Language Models on pretrained CNN representations. Our method is applied to two tasks, demonstrating that 1 using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2 unnaturalness maps produced by our method achieve stateoftheart eye fixation prediction performance on two wellstudied datasets.
Inflow-Generated X-ray Corona Around Supermassive Black Holes and a Unified Model for X-ray Emission ; Three-dimensional hydrodynamic simulations, covering the spatial domain from hundreds of Schwarzschild radii to 2 pc around the central supermassive black hole of mass 10^8 Msun, with detailed radiative cooling processes, are performed. We generically find a significant amount of shock-heated, high-temperature (>= 10^8 K) coronal gas in the inner (<= 10^4 r_sch) region. It is shown that the composite bremsstrahlung emission spectrum due to coronal gas of various temperatures is in reasonable agreement with the overall ensemble spectrum of AGNs and the hard X-ray background. Taking into account inverse Compton processes, in the context of the simulation-produced coronal gas, our model can readily account for the wide variety of AGN spectral shapes, which can now be understood physically. The distinguishing feature of our model is that the X-ray coronal gas is, for the first time, an integral part of the inflow gas, and its observable characteristics are physically coupled to the concomitant inflow gas. One natural prediction of our model is the anti-correlation between accretion disk luminosity and spectral hardness: as the luminosity of the SMBH accretion disk decreases, the hard X-ray luminosity increases relative to the UV/optical luminosity.
Dynamic Sum Product Networks for Tractable Inference on Sequence Data Extended Version ; Sum-Product Networks SPN have recently emerged as a new class of tractable probabilistic graphical models. Unlike Bayesian networks and Markov networks, where inference may be exponential in the size of the network, inference in SPNs takes time linear in the size of the network. Since SPNs represent distributions over a fixed set of variables only, we propose dynamic sum product networks DSPNs as a generalization of SPNs for sequence data of varying length. A DSPN consists of a template network that is repeated as many times as needed to model data sequences of any length. We present a local search technique to learn the structure of the template network. In contrast to dynamic Bayesian networks, for which inference is generally exponential in the number of variables per time slice, DSPNs inherit the linear inference complexity of SPNs. We demonstrate the advantages of DSPNs over DBNs and other models on several datasets of sequence data.
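The linear-time inference that DSPNs inherit can be seen in a minimal SPN evaluator; the two-variable mixture below is a toy example of our own, not a learned DSPN template.

```python
from math import prod

def eval_spn(node, assignment):
    # One bottom-up pass: each node is visited once, so the cost is
    # linear in the number of edges of the network.
    kind = node[0]
    if kind == "leaf":                      # ("leaf", var, [P(0), P(1)])
        _, var, probs = node
        return probs[assignment[var]]
    if kind == "prod":                      # ("prod", [children])
        return prod(eval_spn(c, assignment) for c in node[1])
    _, weights, children = node             # ("sum", [weights], [children])
    return sum(w * eval_spn(c, assignment) for w, c in zip(weights, children))

# A tiny SPN: a 0.6/0.4 mixture of two product distributions over
# binary variables 0 and 1 (Bernoulli leaves).
leaf = lambda var, p1: ("leaf", var, [1.0 - p1, p1])
spn = ("sum", [0.6, 0.4], [
    ("prod", [leaf(0, 0.8), leaf(1, 0.8)]),
    ("prod", [leaf(0, 0.1), leaf(1, 0.1)]),
])
```

A DSPN would instantiate such a template once per time step, so evaluation stays linear in the length of the sequence as well as in the size of the template.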
A Generalized Probability Framework to Model Economic Agents' Decisions Under Uncertainty ; The application of techniques from statistical and classical mechanics to model interesting problems in economics and finance has produced valuable results. The principal movement which has steered this research direction is known under the name of 'econophysics'. In this paper, we illustrate and advance some of the findings that have been obtained by applying the mathematical formalism of quantum mechanics to model 'human decision making under uncertainty' in behavioral economics and finance. Starting from Ellsberg's seminal article, decision making situations have been experimentally verified where the application of Kolmogorovian probability in the formulation of expected utility is problematic. Those probability measures which by necessity must situate themselves in Hilbert space, such as 'quantum probability', enable a faithful representation of experimental data. We thus provide an explanation for the effectiveness of the mathematical framework of quantum mechanics in the modeling of human decision making. We want to be explicit, though, that we are not claiming that decision making has microscopic quantum mechanical features.
On the Generalization Error Bounds of Neural Networks under Diversity-Inducing Mutual Angular Regularization ; Recently, diversity-inducing regularization methods for latent variable models LVMs, which encourage the components in LVMs to be diverse, have been studied to address several issues involved in latent variable modeling: (1) how to capture long-tail patterns underlying data; (2) how to reduce model complexity without sacrificing expressivity; (3) how to improve the interpretability of learned patterns. While the effectiveness of diversity-inducing regularizers such as the mutual angular regularizer has been demonstrated empirically, a rigorous theoretical analysis of them is still missing. In this paper, we aim to bridge this gap and analyze how the mutual angular regularizer MAR affects the generalization performance of supervised LVMs. We use the neural network NN as a model instance to carry out the study, and the analysis shows that increasing the diversity of hidden units in an NN would reduce estimation error and increase approximation error. In addition to the theoretical analysis, we also present an empirical study which demonstrates that the MAR can greatly improve the performance of NNs, and the empirical observations are in accordance with the theoretical analysis.
Modelling aperiodic Xray variability in black hole binaries as propagating mass accretion rate fluctuations a short review ; Black hole binary systems can emit very bright and rapidly varying Xray signals when material from the companion accretes onto the black hole, liberating huge amounts of gravitational potential energy. Central to this process of accretion is turbulence. In the propagating mass accretion rate fluctuations model, turbulence is generated throughout the inner accretion flow, causing fluctuations in the accretion rate. Fluctuations from the outer regions propagate towards the black hole, modulating the fluctuations generated in the inner regions. Here, I present the theoretical motivation behind this picture before reviewing the array of statistical variability properties observed in the light curves of black hole binaries that are naturally explained by the model. I also discuss the remaining challenges for the model, both in terms of comparison to data and in terms of including more sophisticated theoretical considerations.
Free energy in the Potts spin glass ; We study the Potts spin glass model, which generalizes the Sherrington-Kirkpatrick model to the case when spins take more than two values but their interactions are counted only if the spins are equal. We obtain the analogue of the Parisi variational formula for the free energy, with the order parameter now given by a monotone path in the set of positive-semidefinite matrices. The main idea of the paper is a novel synchronization mechanism for blocks of overlaps. This mechanism can be used to solve a more general version of the Sherrington-Kirkpatrick model with vector spins interacting through their scalar product, which includes the Potts spin glass as a special case. As another example of application, one can show that Talagrand's bound for multiple copies of the mixed p-spin model with constrained overlaps is asymptotically sharp. We consider these problems in the subsequent paper, arXiv:1512.04441, and illustrate the main new idea on the technically more transparent case of the Potts spin glass.
On improving analytical models of cosmic reionization for matching numerical simulations ; The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of the excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. It then allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, while reproducing large-scale statistical properties. These mock catalogs are particularly useful for CMB polarization and 21cm experiments, where large volumes are required to simulate the observed signal.
New DBI Inflation Model with Kinetic Coupling to Einstein Gravity ; In this paper we study a new class of inflation models which generalize the Dirac-Born-Infeld DBI action with the addition of a non-minimal kinetic coupling NKC term. We dub this model the new DBI inflation model. The NKC term does not introduce new dynamical degrees of freedom, so the equations of motion remain of second order. However, with such a coupling, the action is no longer linear with respect to the Einstein curvature terms R and G_{mu nu}, which leads to a k^4 correction term in the perturbations. The new DBI inflation model can thus be viewed as a theory beyond Horndeski. Without violating near scale-invariance, such a correction may lead to new effects on the inflationary spectra that could be tested by future observations.
Superfluid Phase Transition with Activated Velocity Fluctuations Renormalization Group Approach ; A quantum field model that incorporates Bosecondensed systems near their phase transition into a superfluid phase and velocity fluctuations is proposed. The stochastic NavierStokes equation is used for the generation of the velocity fluctuations. As such, this model generalizes model F of critical dynamics. The fieldtheoretic action is derived using the MartinSiggiaRose formalism and the path integral approach. The regime of equilibrium fluctuations is analyzed within the perturbative renormalization group method. The double epsilon, delta expansion scheme is employed, where epsilon is the deviation from space dimension 4 and delta describes the scaling of velocity fluctuations. The renormalization procedure is performed to the leading order. The main corollary gained from the analysis of the thermal equilibrium regime suggests that oneloop calculations of the presented model are not sufficient to make a definite conclusion about the stability of fixed points. We also show that critical exponents are drastically changed as a result of the turbulent background, and critical fluctuations are in fact destroyed by the developed turbulence fluctuations. The scaling exponent of effective viscosity is calculated and agrees with the expected value 4/3.
Weyl Anomaly and Initial Singularity Crossing ; We consider the role of quantum effects, mainly the Weyl anomaly, in modifying the singular behavior of the FLRW model at early times. Weyl anomaly corrections to FLRW models have been considered in the past; here we reconsider this model and show the following: the singularity of this model is weak according to Tipler and Krolak, therefore the spacetime might admit a geodesic extension; Weyl anomaly corrections change the nature of the initial singularity from a big bang singularity to a sudden singularity; the two branches of solutions consistent with the semiclassical treatment form a disconnected manifold, and joining these two parts at the singularity provides us with a C^1 extension of nonspacelike geodesics and leaves the spacetime geodesically complete. Using GaussCodazzi equations one can derive generalized junction conditions for this higherderivative gravity. The extended spacetime obeys the Friedmann and Raychaudhuri equations and the junction conditions. The junction does not generate Dirac delta functions in the matter sources, which keeps the equation of state unchanged.
Right Bousfield Localization and Operadic Algebras ; It is well known that under some general conditions right Bousfield localization exists. We provide general conditions under which right Bousfield localization yields a monoidal model category. Then we address the questions of when this monoidal model structure on a right Bousfield localization induces a model structure on the category of algebras over a colored operad and when a right Bousfield localization preserves colored operadic algebras. We give numerous applications, to topological spaces, equivariant spaces, chain complexes, stable module categories, and to the category of small categories. We recover a wide range of classical results as special cases of our theory, and prove several new preservation results.
Neutrino Catalyzed Diphoton Excess ; In this paper we explain the 750 GeV diphoton resonance observed at the run2 LHC as a scalar singlet S, which plays a key role in generating tiny but nonzero Majorana neutrino masses. The model contains four electroweak singlets: two leptoquarks, a singly charged scalar, and a neutral scalar S. Majorana neutrino masses are generated at the twoloop level as S gets a nonzero vacuum expectation value. S can be produced at the LHC through gluon fusion and decays into diphotons at the oneloop level with charged scalars running in the loop. The model naturally accommodates a wide width of the resonance. Constraints on the model are investigated and show a negligible mixing between the resonance and the standard model Higgs boson.
Interdependent Relationships in Game Theory A Generalized Model ; A generalized model of games is proposed, of which cooperative games and noncooperative games are special cases. Some games that are neither cooperative nor noncooperative can be expressed and analyzed. The model is based on relationships and supposed relationships between players. A relationship is a numerical value that denotes how much one player cares about the payoffs of another player, while a supposed relationship is another numerical value that denotes a player's belief about the relationship between two players. The players choose their strategies by taking into consideration not only the material payoffs but also the relationships and their change. Two games, a prisoners' dilemma and a repeated ultimatum game, are analyzed as example applications of this model.
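The relationship idea above can be sketched in a prisoners' dilemma. The linear transformation used here (effective utility = own payoff + relationship-weighted payoff of the other player) is one plausible concrete form assumed for illustration; the generalized model may use a different transformation, and all names and payoff values are illustrative:

```python
# Sketch: a prisoners' dilemma whose payoffs are transformed by
# relationship values r12 (how much player 1 cares about player 2's
# payoff) and r21. The linear form below is an assumption.
import itertools

PAYOFFS = {  # (row, col) material payoffs: C = cooperate, D = defect
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def effective_payoffs(outcome, r12, r21):
    """Effective utility = own payoff + relationship * other's payoff."""
    p1, p2 = PAYOFFS[outcome]
    return p1 + r12 * p2, p2 + r21 * p1

def pure_equilibria(r12, r21):
    """Pure-strategy Nash equilibria of the transformed game."""
    equilibria = []
    for s1, s2 in itertools.product("CD", repeat=2):
        u1, u2 = effective_payoffs((s1, s2), r12, r21)
        # utilities after a unilateral deviation by each player
        alt1, _ = effective_payoffs(("D" if s1 == "C" else "C", s2), r12, r21)
        _, alt2 = effective_payoffs((s1, "D" if s2 == "C" else "C"), r12, r21)
        if u1 >= alt1 and u2 >= alt2:
            equilibria.append((s1, s2))
    return equilibria
```

With zero relationships the transformed game is the ordinary dilemma and mutual defection is the only equilibrium; with strong mutual regard (e.g. r12 = r21 = 1) mutual cooperation becomes the equilibrium instead, illustrating how a game that is neither purely cooperative nor purely noncooperative can be analyzed.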
Multitask CNN Model for Attribute Prediction ; This paper proposes a joint multitask learning algorithm to better predict attributes in images using deep convolutional neural networks CNN. We consider learning binary semantic attributes through a multitask CNN model, where each CNN predicts one binary attribute. The multitask learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN generates attributespecific feature representations, and then we apply multitask learning on the features to predict their attributes. In our multitask framework, we propose a method to decompose the overall model's parameters into a latent task matrix and a combination matrix. Furthermore, undersampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.
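The parameter decomposition described above can be sketched with linear classifiers on fixed features. The paper's actual optimization is not specified in the abstract; the ridge-then-SVD factorization below is an illustrative stand-in, and the dimensions and synthetic data are assumptions:

```python
# Sketch: decompose per-attribute classifier weights W (d x T) into a
# shared latent task matrix L (d x k) and a combination matrix S (k x T),
# so the T attribute classifiers share statistics via k latent tasks.
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 64, 8, 20                                  # feature dim, latent tasks, attributes
X = rng.normal(size=(500, d))                        # stand-in for CNN features
Y = rng.integers(0, 2, size=(500, T)).astype(float)  # binary attribute labels
lam = 1e-2                                           # ridge regularization

# Step 1: independent ridge classifiers, one weight vector per attribute.
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # d x T

# Step 2: low-rank factorization W ~ L @ S via truncated SVD.
U, sig, Vt = np.linalg.svd(W, full_matrices=False)
L = U[:, :k] * np.sqrt(sig[:k])           # latent task matrix, d x k
S = np.sqrt(sig[:k])[:, None] * Vt[:k]    # combination matrix, k x T
W_shared = L @ S                          # shared-parameter model

scores = X @ W_shared                     # attribute scores per image
preds = (scores > 0.5).astype(int)        # binary attribute predictions
```

The low-rank constraint is what lets undersampled attributes borrow strength: each classifier column of W_shared is a combination of the same k latent tasks estimated from all attributes jointly.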
Scalable Models for Computing Hierarchies in Information Networks ; Information hierarchies are organizational structures that are often used to organize and present large and complex information, as well as to provide a mechanism for effective human navigation. Fortunately, many statistical and computational models exist that automatically generate hierarchies; however, the existing approaches do not consider linkages in information networks that are increasingly common in realworld scenarios. Current approaches also tend to present topics as an abstract probability distribution over words, etc., rather than as tangible nodes from the original network. Furthermore, the statistical techniques present in many previous works are not yet capable of processing data at Web scale. In this paper we present the Hierarchical Document Topic Model HDTM, which uses a distributed vertexprogramming process to calculate a nonparametric Bayesian generative model. Experiments on three mediumsized data sets and the entire Wikipedia dataset show that HDTM can infer accurate hierarchies even over large information networks.