A Layman's guide to SUSY GUTs ; Assessing the most straightforward evidence for the existence of the Superworld requires a guide for non-experts, especially experimental physicists, so that they can make their own judgement on the value of such predictions. For this purpose we review the most basic results of Super-Grand unification in a simple and clear way. We focus attention on two specific models and their predictions. These two models represent an example of a direct comparison between a traditional unified theory and a string-inspired approach to the solution of the many open problems of the Standard Model. We emphasize that viable models must satisfy all available experimental constraints and be as simple as theoretically possible. The two well-defined supergravity models, SU(5) and SU(5)×U(1), can be described in terms of only a few parameters (five and three, respectively) instead of the more than twenty needed in the MSSM, i.e., the Minimal Supersymmetric extension of the Standard Model. A case of special interest is the strict no-scale SU(5)×U(1) supergravity, where all predictions depend on only one parameter plus the top-quark mass. A general consequence of these analyses is that supersymmetric particles may be on the verge of discovery, lurking around the corner at present and near-future facilities. This review should help anyone distinguish between well-motivated predictions and predictions based on arbitrary choices of parameters in undefined models.
Proton decay and realistic models of quark and lepton masses ; It is shown that in realistic SUSY GUT models of quark and lepton masses both the proton decay rate and the branching ratios differ in general from those predicted in the minimal supersymmetric SU(5) model. The observation of proton decay, and in particular of the branching ratio B(p → π ν̄)/B(p → K ν̄), would thus allow decisive tests of these fermion mass schemes. It is shown that the charged-lepton decay modes p → K⁰ μ⁺ and p → K⁰ e⁺, arising through gluino dressing diagrams, are significant and comparable to the neutrino modes in large tan β models. Moreover, it is found that in certain classes of models the Higgsino-mediated proton decay amplitudes are proportional to a model-dependent group-theoretical factor which in some cases can be quite small. The most interesting such class consists of SO(10) models in which the dominant flavor-symmetric contribution to the up-quark mass matrix comes from an effective operator of the form 16_i 16_j 10_H 45_H, where ⟨45_H⟩ points approximately in the I_{3R} direction. This class includes a recent model of quark and lepton masses proposed by the authors.
The CP-conserving two-Higgs-doublet model: the approach to the decoupling limit ; A CP-even neutral Higgs boson with Standard-Model-like couplings may be the lightest scalar of a two-Higgs-doublet model. We study the decoupling limit of the most general CP-conserving two-Higgs-doublet model, in which the mass of the lightest Higgs scalar is significantly smaller than the masses of the other Higgs bosons of the model. In this case, the properties of the lightest Higgs boson are nearly indistinguishable from those of the Standard Model Higgs boson. The first nontrivial corrections to the Higgs couplings in the approach to the decoupling limit are also evaluated. The importance of detecting such deviations in precision Higgs measurements at future colliders is emphasized. We also clarify the case in which a neutral Higgs boson can possess Standard-Model-like couplings in a regime where the decoupling limit does not apply. The two-Higgs-doublet sector of the minimal supersymmetric model illustrates many of the above features.
Constituent-Quark Model and New Particles ; An elementary constituent-quark (CQ) model by Mac Gregor is reviewed in the light of currently published data from light-meson spectroscopy. It was previously shown in the CQ model that there exist several mass quanta, m = 70 MeV, B = 140 MeV and X = 420 MeV, which are responsible for the quantization of meson yrast levels. The existence of a 70 MeV quantum was postulated by Mac Gregor and was shown to fit the Nambu empirical mass formula m_n = (n/2)·137·m_e, with n a positive integer. The 70 MeV quantum can be derived in three different ways: (1) pure electric coupling, (2) pure magnetic coupling, and (3) mixed electric and magnetic charges (dyons). Schwinger first introduced dyons in a magnetic model of matter. It is shown in this paper that recent data on new light mesons fit into the CQ model (a pure electric model) without the introduction of magnetic charges. However, by introducing electric and magnetic quarks (dyons) into the CQ model, new dynamical forces can be generated by the presence of magnetic fields internal to the quarks (dyons). The laws of angular momentum and of energy conservation remain valid in the presence of magnetic charge. With the introduction of the Russell-Saunders coupling scheme into the CQ model, several new meson particles are predicted to exist. The existence of the f0(560) meson is predicted and is shown to fit current experimental data from the Particle Data Group listing. The existence of meson partners or groupings is shown.
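As a quick arithmetic check (a hypothetical script, not from the paper), the Nambu formula m_n = (n/2)·137·m_e with m_e ≈ 0.511 MeV reproduces the three quanta quoted above at n = 2, 4 and 12:

```python
# Numerical check of the Nambu empirical mass formula m_n = (n/2) * 137 * m_e,
# which underlies the 70 MeV quantum of the CQ model.
M_E = 0.511  # electron mass in MeV

def nambu_mass(n: int) -> float:
    """Mass of the n-th level in MeV."""
    return (n / 2) * 137 * M_E

# n = 2, 4, 12 land close to the m, B and X quanta quoted above.
for n, label in [(2, "m"), (4, "B"), (12, "X")]:
    print(f"{label}: n={n} -> {nambu_mass(n):.1f} MeV")
```

Note that 137·m_e ≈ 70.0 MeV, so the quantum appears directly at n = 2.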
Genetic Algorithms and Experimental Discrimination of SUSY Models ; We introduce genetic algorithms as a means to estimate the accuracy required to discriminate among different models using experimental observables. We exemplify the technique in the context of the minimal supersymmetric standard model. If supersymmetric particles are discovered, models of supersymmetry breaking will be fit to the observed spectrum, and it is beneficial to ask beforehand what accuracy is required to always allow the discrimination of two particular models, and which are the most important masses to observe. Each model predicts a bounded patch in the space of observables once its unknown parameters are scanned over. These questions can be answered by minimising a distance measure between the two hypersurfaces. We construct a distance measure that scales like a constant fraction of an observable. Genetic algorithms, including concepts such as natural selection, fitness and mutations, provide a solution to the minimisation problem. We illustrate the efficiency of the method by comparing three different classes of string models for which the above questions could not be answered with previous techniques. The required accuracy is in the range accessible to the Large Hadron Collider (LHC) when combined with a future linear collider (LC) facility. The technique presented here can be applied to more general classes of models or observables.
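The minimisation step can be sketched with a toy genetic algorithm. Everything below is illustrative, not the authors' setup: the two "models" are made-up one-parameter maps into a two-dimensional observable space (two parallel segments whose true minimal separation is 1/√2 ≈ 0.707), and plain Euclidean distance stands in for the paper's scaled distance measure.

```python
import random

random.seed(1)

# Toy stand-ins for two model predictions: each maps one unknown
# parameter in [0, 1] to a point in a 2-D observable space. The two
# patches are parallel segments separated by 1/sqrt(2) ~ 0.707.
def model_a(x): return (x, x)
def model_b(y): return (y, y + 1.0)

def separation(genome):
    """Distance between the two model points encoded by a genome [x, y]."""
    (ax, ay), (bx, by) = model_a(genome[0]), model_b(genome[1])
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def evolve(pop_size=40, generations=80, mut_prob=0.3, mut_sigma=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=separation)              # fitness = small separation
        survivors = pop[: pop_size // 2]      # natural selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(p1, p2)]  # crossover
            if random.random() < mut_prob:                         # mutation
                i = random.randrange(2)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, mut_sigma)))
            children.append(child)
        pop = survivors + children
    return separation(min(pop, key=separation))

print(f"minimal separation found: {evolve():.3f}")  # converges near 1/sqrt(2)
```

The same skeleton applies when the genome concatenates the free parameters of two real spectrum calculators and the fitness is the scaled distance between their predicted observables.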
WMAP constraints on SUGRA models with nonuniversal gaugino masses and prospects for direct detection ; We discuss constraints on supersymmetric models arising from the relic density measurements of WMAP as well as from direct and precision measurements: LEP, b → sγ, (g−2)_μ, and B_s → μ⁺μ⁻. We consider mSUGRA models and their extensions with nonuniversal gaugino masses. We find, as is commonly known, that the relic density points towards very specific regions of the mSUGRA parameter space: coannihilation, focus point and heavy-Higgs annihilation. The allowed regions widen significantly when the top-quark mass is varied. Introducing some nonuniversality in the gaugino masses significantly changes this conclusion, as in specific nonuniversal models the relic density upper limit can easily be satisfied. This occurs in models with M_2 < M_1 at the GUT scale, where the LSP has a high wino component. Models where M_3 < M_2 favour the Higgs annihilation channel in large regions of parameter space and, at large tan β, also favour the annihilation of neutralinos into gauge-boson pairs. We also discuss the potential of direct detection experiments to probe supersymmetric models and point out the main consequences for colliders based on the mass spectrum. Our calculation of the relic density of neutralinos is based on micrOMEGAs, and the SUSY spectrum is generated with SoftSUSY.
Quarks with Integer Electric Charge ; Within the context of the Standard Model, quarks are placed in a (3,2) ⊕ (3,2̄) matter-field representation of U(2)_EW. Although the quarks carry unit intrinsic electric charge in this construction, anomaly cancellation constrains the Lagrangian in such a way that the quarks' associated currents couple to the photon with the usual 2/3 and −1/3 fractional electric charges associated with conventional quarks. The resulting model is identical to the Standard Model in the SU(3)_C sector. However, in the U(2)_EW sector it is similar but not necessarily equivalent. Offhand, the model appears to be phenomenologically equivalent to the conventional quark model in the electroweak sector for experimental conditions that preclude observation of individual constituent currents. On the other hand, it is conceivable that detailed analyses of electroweak reactions may reveal discrepancies with the Standard Model in high-energy and/or large-momentum-transfer reactions. The possibility of quarks with integer electric charge strongly suggests the notion that leptons and quarks are merely different manifestations of the same underlying field. A speculative model is proposed in which a phase transition is assumed to occur between SU(3)_C ⊗ U(1)_EM and U(1)_EM regimes. This immediately explains the equality of lepton-quark generations and lepton-hadron electric charge, relates neutrino oscillations to quark flavor mixing, reduces the free parameters of the Standard Model, and renders the issue of quark confinement moot.
Environmentally Friendly Renormalization ; We analyze the renormalization of systems whose effective degrees of freedom are described in terms of fluctuations which are "environment" dependent. Relevant environmental parameters considered are temperature, system size, boundary conditions, and external fields. The points in the space of "coupling constants" at which such systems exhibit scale invariance coincide only with the fixed points of a global renormalization group which is necessarily environment dependent. Using such a renormalization group, we give formal expressions to two loops for effective critical exponents for a generic crossover induced by a relevant mass scale g. These effective exponents are seen to obey scaling laws across the entire crossover, including hyperscaling, but in terms of an effective dimensionality, d_eff = 4 − γ_λ, which represents the effects of the leading irrelevant operator. We analyze the crossover of an O(N) model on a d-dimensional layered geometry with periodic, antiperiodic and Dirichlet boundary conditions. Explicit results to two loops for the effective exponents are obtained using a [2,1] Padé-resummed coupling, for the Gaussian model (N = −2), spherical model (N = ∞), Ising model (N = 1), polymers (N = 0), XY model (N = 2) and Heisenberg model (N = 3) in four dimensions. We also give two-loop Padé-resummed results for a three-dimensional Ising ferromagnet in a transverse magnetic field and corresponding one-loop results for the two-dimensional model. One-loop results are also presented for a three-dimensional layered Ising model with Dirichlet and antiperiodic boundary conditions. Asymptotically the effective exponents are in excellent agreement with known results.
Grassmannian Topological Kazama-Suzuki Models and Cohomology ; We investigate in detail the topological gauged Wess-Zumino-Witten models describing topological Kazama-Suzuki models based on complex Grassmannians. We show that there is a topological sector in which the ring of observables constructed from the Grassmann-odd scalars of the theory coincides with the classical cohomology ring of the Grassmannian for all values of the level k. We perform a detailed analysis of the nontrivial topological sectors arising from the adjoint gauging, and investigate the general ring structure of bosonic correlation functions, uncovering a whole hierarchy of level-rank relations, including the standard level-rank duality, among models based on different Grassmannians. Using the previously established localization of the topological Kazama-Suzuki model to an Abelian topological field theory, we reduce the correlators to finite-dimensional, purely algebraic expressions. As an application, these are evaluated explicitly for the CP^2 model at level k and shown for all k to coincide with the cohomological intersection numbers of the two-plane Grassmannian G(2,k+2), thus realizing the level-rank duality between this model and the G(2,k+2) model at level one.
Scales of Gravity ; We propose a framework in which the quantum gravity scale can be as low as 10^-3 eV. The key assumption is that the Standard Model ultraviolet cutoff is much higher than the quantum gravity scale. This ensures that we observe conventional weak gravity. We construct an explicit brane-world model in which the brane-localized Standard Model is coupled to strong 5D gravity of an infinite-volume flat extra space. Due to the high ultraviolet scale, the Standard Model fields generate a large graviton kinetic term on the brane. This kinetic term "shields" the Standard Model from the strong bulk gravity. As a result, an observer on the brane sees weak 4D gravity up to astronomically large distances, beyond which gravity becomes five-dimensional. Modeling quantum gravity above its scale by the closed-string spectrum, we show that the shielding phenomenon protects the Standard Model from an apparent phenomenological catastrophe due to the exponentially large number of light string states. Collider experiments, astrophysics, cosmology and gravity measurements independently point to the same lower bound on the quantum gravity scale, 10^-3 eV. For this value the model has experimental signatures both for colliders and for submillimeter gravity measurements. Black holes reveal certain interesting properties in this framework.
Liouville field theory coupled to a critical Ising model: Nonperturbative analysis, duality and applications ; Two different kinds of interactions between a Z_n parafermionic and a Liouville field theory are considered. For generic values of n, the effective central charges describing the UV behavior of both models are calculated in the Neveu-Schwarz sector. For n = 2, exact vacuum expectation values of primary fields of the Liouville field theory, as well as of the first descendent fields, are proposed. For n = 1, known results for the Sinh-Gordon and Bullough-Dodd models are recovered, whereas for n = 2, exact results for these two integrable coupled Ising-Liouville models are shown to exchange under a weak-strong coupling duality relation. In particular, exact relations between the parameters in the actions and the masses of the particles are obtained. At specific imaginary values of the coupling and n = 2, we use previous results to obtain exact information about (a) integrable coupled models like Ising-M(p,p′), the homogeneous sine-Gordon model SU(3)_2, or the Ising-XY model; (b) the Neveu-Schwarz sector of the Φ_{1,3} integrable perturbation of N = 1 supersymmetric minimal models. Several nonperturbative checks are done, which support the exact results.
Exact Standard Model Structures from Intersecting D5-Branes ; We discuss the appearance of nonsupersymmetric compactifications with exactly the Standard Model (SM) at low energies, in the context of IIB orientifold constructions with D5-branes intersecting at angles on the T^4 tori of the orientifold of T^4 × C/Z_N. We discuss constructions where the Standard Model embedding is considered within four, five and six stacks of D5-branes. The appearance of the three-generation observable Standard Model at low energies is accompanied by a gauged baryon number, thus ensuring automatic proton stability. Also, compatibility with a low scale of order TeV is ensured by having a two-dimensional space transverse to all branes. The present models complete the discussion of some recently constructed four-stack models of D5-branes with the SM at low energy. By embedding the four-, five- and six-stack Standard Model configurations into quiver diagrams, deforming them around the QCD intersection numbers, we find a rich variety of vacua that may have exactly the Standard Model at low energy. Also, by using brane recombination on the U(1)'s, we show that the five- and six-stack vacua flow into their four-stack counterparts. Thus string vacua with five- and six-stack deformations are continuously connected to the four-stack vacua.
Supersymmetric Calogero-Moser-Sutherland models: superintegrability structure and eigenfunctions ; We first review the construction of the supersymmetric extension of the quantum Calogero-Moser-Sutherland (CMS) models. We stress the remarkable fact that this extension is completely captured by the insertion of a fermionic exchange operator in the Hamiltonian: sCMS models (the "s" stands for supersymmetric) are nothing but special exchange-type CMS models. Under the appropriate projection, the conserved charges can thus be formulated in terms of the standard Dunkl operators. This is illustrated in the rational case, where the explicit form of the 4N (N being the number of bosonic variables) conserved charges is presented, together with their full algebra. The existence of 2N commuting bosonic charges settles the question of the integrability of the srCMS model. We then prove its superintegrability by displaying 2N−2 extra independent charges commuting with the Hamiltonian. In the second part, we consider the supersymmetric version of the trigonometric case (the stCMS model) and review the construction of its eigenfunctions, the Jack superpolynomials. This leads to closed-form expressions as determinants of determinants involving supermonomial symmetric functions. Here we focus on the main ideas and the generic aspects of the construction, those applicable to all models, whether supersymmetric or not. Finally, the possible Lie-superalgebraic structure underlying the stCMS model and its eigenfunctions is briefly considered.
Standard-like Models as Type IIB Flux Vacua ; We construct new semirealistic Type IIB flux vacua on Z_2 × Z_2 orientifolds with three and four Standard Model (SM) families and up to three units of quantized flux. The open-string sector is comprised of magnetized D-branes and is T-dual to supersymmetric intersecting D6-brane constructions. The SM sector contains magnetized D9-branes with negative D3-brane charge contribution. There are large classes of such models, and we present explicit constructions for representative ones. In addition to models with one and two units of quantized flux, we also construct the first three- and four-family Standard-like models with supersymmetric fluxes, i.e. comprising three units of quantized flux. Supergravity fluxes are due to the self-dual NSNS and RR three-form field strengths, and they fix the toroidal complex structure moduli and the dilaton. The supersymmetry conditions for the D-brane sector fix in some models all three toroidal Kähler moduli. We also provide examples where toroidal Kähler moduli are fixed by strong gauge dynamics on the "hidden sector" D7-brane. Most of the models possess Higgs doublet pairs with Yukawa couplings that can generate masses for quarks and leptons. The models have mainly right-chiral exotics.
Ghost-free dual vector theories in 2+1 dimensions ; We explore here the issue of duality versus spectrum equivalence in abelian vector theories in 2+1 dimensions. Specifically, we examine a generalized self-dual (GSD) model, where a Maxwell term is added to the self-dual model. A gauge embedding procedure applied to the GSD model leads to a Maxwell-Chern-Simons (MCS) theory with higher derivatives. We show that the latter contains a ghost mode, contrary to the original GSD model. On the other hand, the same embedding procedure can be applied to N_f fermions minimally coupled to the self-dual model. The dual theory corresponds to N_f fermions with an extra Thirring term, coupled to the gauge field via a Pauli-like term. By integrating over the fermions at N_f → ∞ in both matter-coupled theories, we obtain effective quadratic theories for the corresponding vector fields. On one hand, we have a nonlocal type of the GSD model; on the other hand, a nonlocal form of the MCS theory. It turns out that both theories have the same spectrum and are ghost free. By figuring out why we do not have ghosts in this case, we are able to suggest a new master action which takes us from the local GSD model to a nonlocal MCS model with the same spectrum as the original GSD model and no ghosts. Furthermore, there is a dual map between both theories at the classical level which survives quantum correlation functions up to contact terms. The remarks made here may be relevant for other applications of the master action approach.
The primal framework. I ; This is the first of a series of articles dealing with abstract classification theory. The apparatus to assign systems of cardinal invariants to models of a first-order theory, or to determine the impossibility of such an assignment, is developed in Sha. It is natural to try to extend this theory to classes of models which are described in other ways. Work on the classification theory for nonelementary classes Sh88 and for universal classes Sh300 led to the conclusion that an axiomatic approach provided the best setting for developing a theory of wider application. In the first chapter we describe the axioms on which the remainder of the article depends and give some examples and context to justify this level of generality. The study of universal classes takes as a primitive the notion of closing a subset under functions to obtain a model. We replace that concept by the notion of a prime model. We begin the detailed discussion of this idea in Chapter II. One of the important contributions of classification theory is the recognition that large models can often be analyzed by means of a family of small models indexed by a tree of height at most ω. More precisely, the analyzed model is prime over such a tree. Chapter III provides sufficient conditions for prime models over such trees to exist.
Nonequivalent Statistical Equilibrium Ensembles and Refined Stability Theorems for Most Probable Flows ; Statistical equilibrium models of coherent structures in two-dimensional and barotropic quasi-geostrophic turbulence are formulated using canonical and microcanonical ensembles, and the equivalence or nonequivalence of ensembles is investigated for these models. The main results show that models in which the global invariants are treated microcanonically give richer families of equilibria than models in which they are treated canonically. Such global invariants are those conserved quantities for ideal dynamics which depend on the large scales of the motion; they include the total energy and circulation. For each model a variational principle that characterizes its equilibrium states is derived by invoking large-deviations techniques to evaluate the continuum limit of the probabilistic lattice model. An analysis of the two different variational principles resulting from the canonical and microcanonical ensembles reveals that their equilibrium states coincide only when the microcanonical entropy function is concave. These variational principles also furnish Lyapunov functionals from which the nonlinear stability of the mean flows can be deduced. While in the canonical model the well-known Arnold stability theorems are reproduced, in the microcanonical model more refined theorems are obtained which extend known stability criteria when the microcanonical and canonical ensembles are not equivalent. A numerical example pertaining to geostrophic turbulence over topography in a zonal channel is included to illustrate the general results.
Wave energy localization by self-focusing in large molecular structures: a damped stochastic discrete nonlinear Schrödinger equation model ; Wave self-focusing in molecular systems subject to thermal effects, such as thin molecular films and long biomolecules, can be modeled by stochastic versions of the Discrete Self-Trapping equation of Eilbeck, Lomdahl and Scott, and this can be approximated by continuum limits in the form of stochastic nonlinear Schrödinger (SNLS) equations. Previous studies directed at the SNLS approximations have indicated that the self-focusing of wave energy to highly localized states can be inhibited by phase noise (modeling thermal effects) and can be restored by phase damping (modeling heat radiation). We show that the continuum limit is probably ill-posed in the presence of spatially uncorrelated noise, at least with little or no damping, so that discrete models need to be addressed directly. Also, as has been noted by other authors, omission of damping produces highly unphysical results. Numerical results are presented for the first time for the discrete models including the highly nonlinear damping term, and new numerical methods are introduced for this purpose. Previous conjectures are in general confirmed, and the damping is shown to strongly stabilize the highly localized states of the discrete models. It appears that the previously noted inhibition of nonlinear wave phenomena by noise is an artifact of modeling that includes the effects of heat, but not of heat loss.
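The self-trapping at the heart of this abstract can already be seen in the smallest Discrete Self-Trapping system, the two-site dimer. The sketch below is purely illustrative (deterministic, RK4, unit coupling): it shows the localization threshold only, not the authors' stochastic damped model, whose phase-noise and damping terms would be added on top of these equations.

```python
def dst_dimer(gamma, t_end=10.0, dt=0.005):
    """Integrate the two-site Discrete Self-Trapping equations
        i da1/dt = a2 + gamma*|a1|^2 a1   (and 1 <-> 2),
    with RK4, starting with all energy on site 1.
    Returns the minimum population |a1|^2 over the run."""
    def rhs(a):
        a1, a2 = a
        return (-1j * (a2 + gamma * abs(a1) ** 2 * a1),
                -1j * (a1 + gamma * abs(a2) ** 2 * a2))

    a = (1.0 + 0j, 0.0 + 0j)
    min_p1 = 1.0
    for _ in range(int(t_end / dt)):
        k1 = rhs(a)
        k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(a, k1)))
        k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(a, k2)))
        k4 = rhs(tuple(x + dt * k for x, k in zip(a, k3)))
        a = tuple(x + dt / 6 * (p + 2 * q + 2 * r + s)
                  for x, p, q, r, s in zip(a, k1, k2, k3, k4))
        min_p1 = min(min_p1, abs(a[0]) ** 2)
    return min_p1

# Weak nonlinearity: energy sloshes freely between the two sites.
# Strong nonlinearity: energy stays self-trapped (localized) on site 1.
print(f"gamma=0.5: min |a1|^2 = {dst_dimer(0.5):.2f}")
print(f"gamma=10 : min |a1|^2 = {dst_dimer(10.0):.2f}")
```

For unit coupling the dimer self-traps above gamma = 4, which is why the two runs behave so differently.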
Quark Condensates and Momentum-Dependent Quark Masses in a Nonlocal Nambu-Jona-Lasinio Model ; The Nambu-Jona-Lasinio (NJL) model has been extensively studied by many researchers. In previous work we have generalized the NJL model to include a covariant model of confinement. In the present work we consider a further modification of the model so as to reproduce the type of Euclidean-space momentum-dependent quark mass values obtained in lattice simulations of QCD. This may be done by introducing a nonlocal interaction, while preserving the chiral symmetry of the Lagrangian. In other work on nonlocal models, by other researchers, the momentum dependence of the quark self-energy is directly related to the regularization scheme. In contrast, in our work the regularization is independent of the nonlocality we introduce. It is of interest to note that the value of the condensate ratio ⟨s̄s⟩/⟨ūu⟩ is about 1.7 when evaluated using chiral perturbation theory and is only about 1.1 in standard applications of the NJL model. We find that our nonlocal model can reproduce the larger value of the condensate ratio when reasonable values are used for the strength of the 't Hooft interaction. In an earlier study of the η(547) and η′(958) mesons, we found that use of the larger value of the condensate ratio led to a very good fit to the mixing angles and decay constants of these mesons. We also study the density dependence of both the quark condensate and the momentum-dependent quark mass values. Without the addition of new parameters, we reproduce the density dependence of the condensate given by a well-known model-independent expression valid for small baryon density.
Deconfinement, naturalness and the nuclear-quark equation of state ; Baryon-loop vacuum contributions in renormalized models like the linear sigma model and the Walecka model give rise to large, unnatural interaction coefficients, indicating that the quantum vacuum is not adequately described by long-range degrees of freedom. We extend such models into the nonrenormalizable class by introducing an ultraviolet cutoff into the model definition and treat the Dirac sea explicitly. In this way, one can avoid unnaturalness. We calculate the equation of state for symmetric nuclear matter at zero temperature in a modified σ-ω model. We show that the strong attraction originating from the Dirac sea softens the nuclear matter equation of state and generates a vacuum with dynamically broken symmetry. In this model the vector meson is important for the description of normal nuclear matter, but it obstructs the chiral phase transition. We investigate the chiral phase transition in this model by incorporating deconfinement at high density. A first-order quark deconfinement is simulated by changing the active degrees of freedom from nucleons to quarks at high density. We show that the chiral phase transition is first-order when the quarks decouple from the vector meson, and that it coincides with the deconfinement critical density.
Exactly-solvable models of proton and neutron interacting bosons ; We describe a class of exactly-solvable models of interacting bosons based on the algebra SO(3,2). Each copy of the algebra represents a system of neutron and proton bosons in a given bosonic level interacting via a pairing interaction. The model that includes s and d bosons is a specific realization of the IBM-2, restricted to the transition regime between vibrational and γ-soft nuclei. By including additional copies of the algebra, we can generate proton-neutron boson models involving other boson degrees of freedom, while still maintaining exact solvability. In each of these models, we can study not only the states of maximal symmetry, but also those of mixed symmetry, albeit still in the vibrational to γ-soft transition regime. Furthermore, in each of these models we can study some features of F-spin symmetry breaking. We report systematic calculations as a function of the pairing strength for models based on s, d, and g bosons and on s, d, and f bosons. The formalism of exactly-solvable models based on the SO(3,2) algebra is not limited to systems of proton and neutron bosons, however, but can also be applied to other scenarios that involve two species of interacting bosons.
Modeling extracellular field potentials and the frequency-filtering properties of extracellular space ; Extracellular local field potentials (LFPs) are usually modeled as arising from a set of current sources embedded in a homogeneous extracellular medium. Although this formalism can successfully model several properties of LFPs, it does not account for their frequency-dependent attenuation with distance, a property essential to correctly model extracellular spikes. Here we derive expressions for the extracellular potential that include this frequency-dependent attenuation. We first show that, if the extracellular conductivity is nonhomogeneous, nonhomogeneous charge densities are induced, which may result in a low-pass filter. We next derive a simplified model consisting of a point or spherical current source with spherically symmetric conductivity/permittivity gradients around the source. We analyze the effect of different radial profiles of conductivity and permittivity on the frequency-filtering behavior of this model. We show that this simple model generally displays low-pass filtering behavior, in which fast electrical events (such as Na+-mediated action potentials) attenuate very steeply with distance, while slower (K+-mediated) events propagate over larger distances in extracellular space, in qualitative agreement with experimental observations. This simple model can be used to obtain frequency-dependent extracellular field potentials without explicitly taking into account the complex folding of extracellular space.
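In the spirit of the spherically symmetric model described here, the potential of a point source at distance R can be written V(ω) = I(ω)·Z(ω) with an impedance of the form Z(ω) = (1/4π) ∫ dr / (r² (σ(r) + iωε(r))). The toy script below uses made-up, non-physiological units (σ0 = 1, small ε, finite outer radius) purely to exhibit the low-pass effect of a radially decaying conductivity; the parameter values are illustrative assumptions, not the paper's.

```python
import math

def impedance(omega, sigma_profile, eps=1e-3, r_in=1.0, r_out=20.0, n=20000):
    """Z(omega) = (1/4*pi) * integral_{r_in}^{r_out} dr / (r^2 (sigma(r) + i*omega*eps)),
    evaluated with the midpoint rule (toy units, constant permittivity)."""
    dr = (r_out - r_in) / n
    z = 0j
    for k in range(n):
        r = r_in + (k + 0.5) * dr
        z += dr / (r * r * (sigma_profile(r) + 1j * omega * eps))
    return z / (4 * math.pi)

homogeneous = lambda r: 1.0                          # constant conductivity
decaying = lambda r: math.exp(-(r - 1.0) / 0.5)      # conductivity falls with distance

for label, prof in [("homogeneous", homogeneous), ("decaying", decaying)]:
    ratio = abs(impedance(0.1, prof)) / abs(impedance(10.0, prof))
    print(f"{label:12s} |Z(low)|/|Z(high)| = {ratio:.2f}")
```

A homogeneous medium gives a ratio near 1 (no filtering), while the decaying conductivity profile strongly favors low frequencies, i.e. acts as a low-pass filter.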
Intracellular transport by single-headed kinesin KIF1A: effects of single-motor mechanochemistry and steric interactions ; In eukaryotic cells, many motor proteins can move simultaneously on a single microtubule track. This leads to interesting collective phenomena like jamming. Recently we reported [Phys. Rev. Lett. 95, 118101 (2005)] a lattice-gas model which describes traffic of unconventional single-headed kinesins KIF1A. Here we generalize this model, introducing a novel interaction parameter c, to account for an interesting mechanochemical process which has not been considered in any earlier model. We have been able to extract all the parameters of the model, except c, from experimentally measured quantities. In contrast to earlier models of intracellular molecular motor traffic, our model assigns distinct "chemical" (or conformational) states to each kinesin to account for the hydrolysis of ATP, the chemical fuel of the motor. Our model makes experimentally testable theoretical predictions. We determine the phase diagram of the model in planes spanned by experimentally controllable parameters, namely the concentrations of kinesins and ATP. Furthermore, the phase-separated regime is studied in some detail using analytical methods and simulations to determine, e.g., the position of shocks. Comparison of our theoretical predictions with experimental results is expected to elucidate the nature of the mechanochemical process captured by the parameter c.
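A heavily stripped-down cousin of such lattice-gas motor models is the totally asymmetric exclusion process with Langmuir kinetics (attachment/detachment), sketched below on a ring. It omits the KIF1A-specific two-state mechanochemistry and the parameter c entirely; all rates are illustrative. It does, however, capture the basic ingredients the abstract relies on: hard-core motors hopping directionally while exchanging with a reservoir.

```python
import random

random.seed(2)

def simulate(L=100, sweeps=3000, warmup=1000, p_hop=0.5, p_attach=0.05, p_detach=0.05):
    """Random-sequential TASEP with Langmuir kinetics on a ring.
    Returns (mean density, mean current per bond per sweep)."""
    site = [0] * L          # 0 = empty, 1 = motor
    hops = 0
    dens, measured = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L):
            i = random.randrange(L)
            if site[i]:
                if random.random() < p_detach:      # detach to reservoir
                    site[i] = 0
                else:
                    j = (i + 1) % L
                    if not site[j] and random.random() < p_hop:
                        site[i], site[j] = 0, 1     # forward hop (exclusion)
                        if sweep >= warmup:
                            hops += 1
            elif random.random() < p_attach:        # attach from reservoir
                site[i] = 1
        if sweep >= warmup:
            dens += sum(site) / L
            measured += 1
    return dens / measured, hops / (L * (sweeps - warmup))

rho, current = simulate()
print(f"bulk density ~ {rho:.2f}, current ~ {current:.3f}")
```

With equal attachment and detachment rates the Langmuir kinetics pins the density near 1/2, where the exclusion current is maximal; with open boundaries instead of a ring, the same competition produces the shocks mentioned in the abstract.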
A composite model for DNA torsion dynamics ; DNA torsion dynamics is essential in the transcription process; a simple model for it, in reasonable agreement with experimental observations, has been proposed by Yakushevich (Y) and developed by several authors. In the Y model, the DNA subunits made of a nucleoside and the attached nitrogen bases are described by a single degree of freedom. In this paper we propose and investigate, both analytically and numerically, a "composite" version of the Y model, in which the nucleoside and the base are described by separate degrees of freedom. The model proposed here contains the Y model as a particular case and shares with it many features and results, but represents an improvement from both the conceptual and the phenomenological point of view. It provides a more realistic description of DNA and possibly a justification for the use of models which consider the DNA chain as uniform. It shows that the existence of solitons is a generic feature of the underlying nonlinear dynamics and is to a large extent independent of the detailed modelling of DNA. The model we consider supports solitonic solutions, qualitatively and quantitatively very similar to the Y solitons, in a fully realistic range of all the physical parameters characterizing the DNA.
Network-based analysis of stochastic SIR epidemic models with random and proportionate mixing ; In this paper, we outline the theory of epidemic percolation networks and their use in the analysis of stochastic SIR epidemic models on undirected contact networks. We then show how the same theory can be used to analyze stochastic SIR models with random and proportionate mixing. The epidemic percolation networks for these models are purely directed because undirected edges disappear in the limit of a large population. In a series of simulations, we show that epidemic percolation networks accurately predict the mean outbreak size and the probability and final size of an epidemic for a variety of epidemic models in homogeneous and heterogeneous populations. Finally, we show that epidemic percolation networks can be used to rederive classical results from several different areas of infectious disease epidemiology. In an appendix, we show that an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model in a closed population, and prove that the distribution of outbreak sizes given the infection of any given node in the SIR model is identical to the distribution of its out-component sizes in the corresponding probability space of epidemic percolation networks. We conclude that the theory of percolation on semi-directed networks provides a very general framework for the analysis of stochastic SIR models in closed populations.
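The correspondence this abstract describes can be sketched for the simplest randomly mixing case: in a chain-binomial (Reed-Frost) SIR epidemic, both the probability of a major outbreak and its relative final size converge to the root of s = 1 - exp(-R0*s), which is the giant-component prediction of the percolation network. The code below is a generic simulation for illustration, not the authors' implementation; all numbers are illustrative.

```python
import math
import random

def reed_frost_final_size(n, r0, rng):
    """Final size of one chain-binomial (Reed-Frost) SIR epidemic in a
    randomly mixing population of size n, starting from one index case."""
    p = r0 / n                            # per-pair transmission probability
    susceptible, infected, removed = n - 1, 1, 0
    while infected:
        removed += infected
        escape = (1.0 - p) ** infected    # chance to escape all infecteds
        new = sum(1 for _ in range(susceptible) if rng.random() > escape)
        susceptible -= new
        infected = new
    return removed

rng = random.Random(0)
n, r0, trials = 300, 2.0, 1000
finals = [reed_frost_final_size(n, r0, rng) for _ in range(trials)]
major = [f for f in finals if f > 0.25 * n]
p_major = len(major) / trials
mean_rel_size = sum(major) / (len(major) * n)

# percolation prediction: both quantities solve s = 1 - exp(-r0 * s)
s = 0.5
for _ in range(200):
    s = 1.0 - math.exp(-r0 * s)
```

For R0 = 2 the fixed point is s ≈ 0.797, and both the simulated outbreak probability and the mean relative final size of major outbreaks land near it.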
Diameters in preferential attachment models ; In this paper, we investigate the diameter in preferential attachment (PA) models, thus quantifying the statement that these models are small worlds. The models studied here are such that edges are attached to older vertices proportionally to the degree plus a constant, i.e., we consider affine PA models. There is a substantial amount of literature proving that, quite generally, PA graphs possess power-law degree sequences with a power-law exponent tau > 2. We prove that the diameter of the PA model is bounded above by a constant times log t, where t is the size of the graph. When the power-law exponent tau exceeds 3, we prove that log t is the right order, by proving a lower bound of this order, both for the diameter and for the typical distance. This shows that, for tau > 3, distances are of the order log t. For tau in (2,3), we improve the upper bound to a constant times log log t, and prove a lower bound of the same order for the diameter. Unfortunately, this proof does not extend to typical distances. These results do show that the diameter is of order log log t. These bounds partially prove predictions by physicists that the typical distances in PA graphs are similar to those in other scale-free random graphs, such as the configuration model and various inhomogeneous random graph models, where typical distances have been shown to be of order log log t when tau in (2,3), and of order log t when tau > 3.
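The log t scaling can be probed numerically. The sketch below grows a plain preferential attachment graph (the delta = 0 special case of the affine model, so tau = 3) and checks via BFS that distances from the root stay within a small multiple of log t; this illustrates the scaling claim, not the paper's proof machinery.

```python
import math
import random
from collections import deque

def pa_graph(t, m=2, seed=1):
    """Preferential attachment: each new vertex attaches m edges to earlier
    vertices chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    adj = {0: []}
    targets = [0]              # each vertex appears once per unit of degree
    for v in range(1, t):
        adj[v] = []
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(rng.choice(targets))
        for u in chosen:
            adj[v].append(u)
            adj[u].append(v)
            targets.extend((u, v))
    return adj

def bfs_dists(adj, src=0):
    """Breadth-first distances from src; their max bounds the diameter
    from below and half the diameter from above."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

t = 2000
dist = bfs_dists(pa_graph(t))
```

With t = 2000 and m = 2 the eccentricity of the root is far below 3 log t, consistent with logarithmic (or smaller) distances.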
Suppression of Higgsino-mediated proton decay by cancellations in GUTs and strings ; A mechanism for the enhancement of the proton lifetime in supersymmetric/supergravity (SUSY/SUGRA) grand unified theories (GUTs) and in string theory models is discussed, where Higgsino-mediated proton decay arising from color triplets/antitriplets with charges ±1/3 and ±4/3 is suppressed by an internal cancellation among contributions from different sources. We exhibit the mechanism for an SU(5) model with 45_H + bar(45)_H Higgs multiplets in addition to the usual Higgs structure of the minimal model. This model contains both the ±1/3 and ±4/3 Higgs color triplets/antitriplets, and simple constraints allow for a complete suppression of Higgsino-mediated proton decay. Suppression of proton decay in an SU(5) model with Planck-scale contributions is also considered. The suppression mechanism is then exhibited for an SO(10) model with a unified Higgs structure involving 144_H + bar(144)_H representations. The SU(5) decomposition of 144_H + bar(144)_H contains 5_H + bar(5)_H and 45_H + bar(45)_H, and the cancellation mechanism arises among these contributions, which mirror the SU(5) case. The cancellation mechanism appears to be more generally valid for a larger class of unification models. Specifically, it may play a role in string model constructions to suppress proton decay from dimension-five operators. The mechanism allows for a suppression of proton decay consistent with current data, while allowing for the possibility that proton decay may be visible in the next round of nucleon stability experiments.
Contributions to Random Energy Models ; In this thesis, we consider several Random Energy Models. These include Derrida's Random Energy Model (REM) and Generalized Random Energy Model (GREM), and a non-hierarchical version (BK-GREM) due to Bolthausen and Kistler. The limiting free energy in all these models, along with the Word GREM, a model proposed by us, turns out to be a cute consequence of a large deviation principle (LDP). This LDP argument allows us to consider non-Gaussian driving distributions as well as an external field. We can also consider random trees as the underlying tree structure in the GREM. In all these models, as expected, the limiting free energy is not 'universal', unlike in the SK model; it is, however, 'rate specific'. Considering non-Gaussian driving distributions, as well as different driving distributions at the different levels of the underlying trees in the GREM, leads to interesting phenomena. For example, in the REM, if the Hamiltonian is Binomial with parameters N and p, then the existence of a phase transition depends on the parameter p; more precisely, a phase transition takes place only when p < 1/2. For another example, consider a 2-level GREM with an exponential driving distribution at the first level and a Gaussian at the second, with equal weights at both levels. Then even if the limiting ratio of second-level particles, p2, is as small as 0.00001, the system reduces to a Gaussian REM. On the other hand, if we consider a 2-level GREM with a Gaussian driving distribution at the first level and an exponential at the second, the system never reduces to a Gaussian REM. In either case, the system never reduces to an exponential REM.
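The LDP prediction for the Gaussian REM is easy to check at small size: with 2^n i.i.d. N(0, n) energies, (1/n) log Z approaches log 2 + beta^2/2 above the freezing temperature and falls below the annealed value past beta_c = sqrt(2 log 2). The conventions and size below are a generic illustration, not the thesis's setup.

```python
import math
import random

def rem_free_energy(n, beta, rng):
    """Finite-size (1/n) log Z for Derrida's REM: 2**n i.i.d. Gaussian
    energies of variance n, summed via a stable log-sum-exp."""
    scale = beta * math.sqrt(n)
    xs = [scale * rng.gauss(0.0, 1.0) for _ in range(2 ** n)]
    m = max(xs)
    return (m + math.log(sum(math.exp(x - m) for x in xs))) / n

def rem_limit(beta):
    """LDP prediction: quadratic above the freezing point beta_c,
    linear (frozen) below it."""
    beta_c = math.sqrt(2.0 * math.log(2.0))
    return math.log(2.0) + 0.5 * beta * beta if beta <= beta_c else beta * beta_c

rng = random.Random(0)
high_t = rem_free_energy(16, 0.5, rng)  # above freezing: close to the limit
low_t = rem_free_energy(16, 2.0, rng)   # frozen phase: converges slowly in n
```

At beta = 2 the finite-size value already sits well below the annealed bound log 2 + beta^2/2, the signature of the freezing transition.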
Multi-mass schemes for collisionless N-body simulations ; We present a general scheme for constructing Monte Carlo realizations of equilibrium, collisionless galaxy models with known distribution function (DF) f0. Our method uses importance sampling to find the sampling DF fs that minimizes the mean-square formal errors in a given set of projections of the DF f0. The result is a multi-mass N-body realization of the galaxy model in which "interesting" regions of phase-space are densely populated by many low-mass particles, increasing the effective N there, and less interesting regions by fewer, higher-mass particles. As a simple application, we consider the case of minimizing the shot noise in estimates of the acceleration field for an N-body model of a spherical Hernquist model. Models constructed using our scheme easily yield a factor of ~100 reduction in the variance of the central acceleration field when compared to a traditional equal-mass model with the same number of particles. When evolving both models with a real N-body code, the diffusion coefficients in our model are reduced by a similar factor. Therefore, for certain types of problems, our scheme is a practical method for reducing two-body relaxation effects, thereby bringing N-body simulations closer to the collisionless ideal.
Computer model validation with functional output ; A key question in the evaluation of computer models is: does the computer model adequately represent reality? A six-step process for computer model validation is set out in Bayarri et al. (Technometrics 49 (2007) 138-154), and briefly summarized below, based on comparison of computer model runs with field data of the process being modeled. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different, but related, scenarios. Two complications that frequently arise in practice are the need to deal with highly irregular functional data and the need to acknowledge and incorporate uncertainty in the inputs. We develop methodology to deal with both complications. A key part of the approach utilizes a wavelet representation of the functional data, applies a hierarchical version of the scalar validation methodology to the wavelet coefficients, and transforms back, to ultimately compare computer model output with field output. The generality of the methodology is limited only by the capability of a combination of computational tools and the appropriateness of decompositions of the sort (wavelets) employed here. The methods and analyses we present are illustrated with a test bed dynamic stress analysis for a particular engineering system.
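The wavelet step can be illustrated with a plain Haar transform: functional model output and field data are mapped to wavelet coefficients, the comparison is made coefficient-wise, and the transform is inverted to return to the data scale. This is a minimal stand-in for the hierarchical methodology of the paper, using made-up signals.

```python
import math

def haar(xs):
    """Full Haar DWT of a length-2^k sequence: the overall average first,
    then detail coefficients from coarsest to finest."""
    xs = list(xs)
    details = []
    while len(xs) > 1:
        avgs = [(a + b) / 2.0 for a, b in zip(xs[0::2], xs[1::2])]
        difs = [(a - b) / 2.0 for a, b in zip(xs[0::2], xs[1::2])]
        details = difs + details
        xs = avgs
    return xs + details

def inverse_haar(cs):
    """Invert haar(): rebuild pairs (avg + dif, avg - dif) level by level."""
    xs = cs[:1]
    k = 1
    while k < len(cs):
        difs = cs[k:2 * k]
        xs = [v for a, d in zip(xs, difs) for v in (a + d, a - d)]
        k *= 2
    return xs

# hypothetical functional outputs: the field data sit 0.05 above the model
model_run = [math.sin(0.1 * i) for i in range(64)]
field_obs = [math.sin(0.1 * i) + 0.05 for i in range(64)]
discrepancy = [m - f for m, f in zip(haar(model_run), haar(field_obs))]
```

A constant bias shows up entirely in the coarse (mean) coefficient while the detail coefficients agree, which is exactly the kind of structure the coefficient-wise validation exploits.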
A delay differential model of ENSO variability: Parametric instability and the distribution of extremes ; We consider a delay differential equation (DDE) model for El Niño Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. We perform stability analyses of the model in the three-dimensional space of its physically relevant parameters. Our results illustrate the role of these three parameters: the strength of seasonal forcing b, the atmosphere-ocean coupling kappa, and the propagation period tau of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are separated by a sharp neutral curve in the (b, tau) plane at constant kappa. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling kappa increases. In the unstable regime, spontaneous transitions occur in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil's bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some of the features of this behavior have been documented in general circulation models, as well as in observations. We expect, therefore, similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
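A minimal numerical stand-in for such a DDE (the exact functional form and parameter values in the paper differ) combines the two mechanisms named above: delayed negative feedback through a saturating tanh, plus periodic seasonal forcing.

```python
import math

def enso_dde(kappa=2.0, b=1.0, tau=0.6, dt=0.001, t_end=80.0):
    """Euler integration of  h'(t) = -tanh(kappa*h(t - tau)) + b*cos(2*pi*t)
    with a constant initial history; returns the whole trajectory.
    Parameter values are illustrative, not the paper's."""
    n_delay = int(round(tau / dt))
    h = [0.1] * (n_delay + 1)                # constant history on [-tau, 0]
    for step in range(int(round(t_end / dt))):
        t = step * dt
        dh = -math.tanh(kappa * h[-1 - n_delay]) + b * math.cos(2.0 * math.pi * t)
        h.append(h[-1] + dt * dh)
    return h

traj = enso_dde()
tail = traj[-20000:]   # discard the initial transient
```

The delayed restoring term keeps the trajectory bounded while the seasonal forcing sustains the oscillation; sweeping (b, tau, kappa) in such a loop is the numerical counterpart of the stability analysis described above.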
Constrained semianalytical models of galactic outflows ; We present semianalytic models of galactic outflows, constrained by available observations of high-redshift star formation and reionization. Galactic outflows are modeled in a manner akin to models of stellar wind blown bubbles. Large-scale outflows can generically escape from low-mass halos (M < 10^9 Msun) for a wide range of model parameters, but not from high-mass halos (M > 10^11 Msun). The gas-phase metallicity of the outflow and within the galaxy is computed. Ionization states of different metal species are calculated and used to examine the detectability of metal lines from the outflows. The global influence of galactic outflows is also investigated. Models with only atomic-cooled halos significantly fill the IGM at z ~ 3 with metals, with 10^-3.7 < Z/Zsun < 10^-2.5, the actual extent depending on the efficiency of winds, the IMF, the fractional mass that goes through star formation, and the reionization history of the universe. In these models, a large fraction of outflows at z ~ 3 are supersonic, hot (T > 10^5 K) and of low density, making metal lines difficult to detect. They may also result in significant perturbations of the IGM gas on scales probed by the Lyman-alpha forest. On the contrary, models including molecular-cooled halos with a normal mode of star formation can potentially volume-fill the universe at z ~ 8 without drastic dynamical effects on the IGM, thereby setting up a possible metallicity floor of 10^-4.0 < Z/Zsun < 10^-3.6. Interestingly, molecular-cooled halos with a "top-heavy" mode of star formation are not very successful in establishing the metallicity floor, because of the additional radiative feedback that they induce. (Abridged)
Time series analysis via mechanistic models ; The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae.
Quasi-realistic heterotic-string models with vanishing one-loop cosmological constant and perturbatively broken supersymmetry ; Quasi-realistic string models in the free fermionic formulation typically contain an anomalous U(1), which gives rise to a Fayet-Iliopoulos D-term that breaks supersymmetry at the one-loop level in string perturbation theory. Supersymmetry is traditionally restored by imposing F- and D-flatness on the vacuum. By employing the standard analysis of flat directions, we present a quasi-realistic three-generation string model in which stringent F- and D-flat solutions do not appear to exist to all orders in the superpotential. We speculate that this result is indicative of the non-existence of supersymmetric flat F and D solutions in this model. We provide some arguments in support of this scenario and discuss its potential implications. Bose-Fermi degeneracy of the string spectrum implies that the one-loop partition function, and hence the one-loop cosmological constant, vanishes in the model. If our assertion is correct, this model may represent the first known example with vanishing cosmological constant and perturbatively broken supersymmetry. We discuss the distinctive properties of the internal free fermion boundary conditions that may correspond to a larger set of models sharing these properties. The geometrical moduli in this class of models are fixed by the asymmetric boundary conditions, whereas the absence of supersymmetric flat directions would imply that the supersymmetric moduli are fixed as well; the dilaton may then be fixed by hidden-sector non-perturbative effects.
An Extended Model for the Evolution of Prebiotic Homochirality: A Bottom-Up Approach to the Origin of Life ; A generalized autocatalytic model for chiral polymerization is investigated in detail. Apart from enantiomeric cross-inhibition, the model allows for the autogenic (non-catalytic) formation of left- and right-handed monomers from a substrate with reaction rates epsilon_L and epsilon_R, respectively. The spatiotemporal evolution of the net chiral asymmetry is studied for models with several values of the maximum polymer length, N. For N = 2, we study the validity of the adiabatic approximation often cited in the literature. We show that the approximation obtains the correct equilibrium values of the net chirality, but fails to reproduce the short-time behavior. We show also that the autogenic term in the full N = 2 model behaves as a control parameter in a chiral symmetry breaking phase transition leading to full homochirality from racemic initial conditions. We study the dynamics of the N -> infinity model with symmetric (epsilon_L = epsilon_R) autogenic formation, showing that it only achieves homochirality for epsilon < epsilon_c, where epsilon_c is an N-dependent critical value. For epsilon <= epsilon_c we investigate the behavior of models with several values of N, showing that the net chiral asymmetry grows as tanh(N). We show that for a given symmetric autogenic reaction rate, the net chirality and the concentrations of chirally pure polymers increase with the maximum polymer length in the model. We briefly discuss the consequences of our results for the development of homochirality on the prebiotic Earth and possible experimental verification of our findings.
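The chiral symmetry breaking transition can be illustrated with a Frank-type toy model (autocatalytic growth of each enantiomer on a shared substrate, plus enantiomeric cross-inhibition); this is a hypothetical stand-in for the paper's full polymerization network, with arbitrary rate constants.

```python
def frank_toy(eta0=0.01, dt=0.01, steps=40000):
    """Net chiral asymmetry eta = (L - R)/(L + R) after evolving a
    Frank-type model: autocatalytic growth limited by a shared substrate,
    plus enantiomeric cross-inhibition. Rates are illustrative."""
    left = 0.01 * (1.0 + eta0)
    right = 0.01 * (1.0 - eta0)
    for _ in range(steps):
        substrate = 1.0 - left - right
        cross = 2.0 * left * right        # enantiomeric cross-inhibition
        left += dt * (left * substrate - cross)
        right += dt * (right * substrate - cross)
    return (left - right) / (left + right)
```

The racemic state is a fixed point but an unstable one: a 1% initial excess of either enantiomer is amplified to near-complete homochirality, which is the qualitative mechanism behind the phase transition discussed above.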
Analysis of polysilicon micro beam buckling with temperature-dependent properties ; Suspended electrothermal polysilicon micro beams generate displacements and forces by thermal buckling effects. In previous electrothermal and thermoelastic models of suspended polysilicon micro beams, the thermomechanical properties of polysilicon were considered constant over a wide range of temperature (20-900 °C). In reality, the thermomechanical properties of polysilicon depend on temperature and change significantly at high temperatures. This paper describes the development and validation of theoretical and finite element models (FEM) that include the temperature dependence of polysilicon properties such as the thermal expansion coefficient and Young's modulus. In the theoretical part, an elastic deflection model and a thermoelastic model of micro beam buckling have been established and simulated. In addition, the temperature-dependent buckling of a polysilicon micro beam under high temperature has been modeled by finite element analysis (FEA). Analytical results and numerical results using FEA are compared with experimental data available in the literature; their reasonable agreement validates the analytical model and the FEM. This validation indicates the importance of including the temperature dependence of polysilicon thermomechanical properties, such as the coefficient of thermal expansion (CTE), in such models.
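For a clamped-clamped beam, the critical temperature rise follows from equating the thermal axial force E*A*alpha*dT with the Euler load 4*pi^2*E*I/L^2 (Young's modulus cancels, leaving dT_cr = 4*pi^2*I/(A*L^2*alpha)); with a temperature-dependent alpha(T) the relation becomes implicit. The sketch below solves it by fixed-point iteration; the geometry and property values are illustrative, not the paper's.

```python
import math

def critical_delta_t(alpha_fn, length=200e-6, thickness=2e-6,
                     t_ambient=20.0, tol=1e-6):
    """Critical buckling temperature rise dT_cr = 4*pi^2*I/(A*L^2*alpha),
    with alpha evaluated at the (unknown) buckling temperature, solved
    by fixed-point iteration."""
    i_over_a = thickness ** 2 / 12.0        # I/A for a rectangular section
    const = 4.0 * math.pi ** 2 * i_over_a / length ** 2
    dt = const / alpha_fn(t_ambient)        # constant-alpha first guess
    while True:
        new = const / alpha_fn(t_ambient + dt)
        if abs(new - dt) < tol:
            return new
        dt = new

def alpha_const(temp):                      # 1/K, constant-property model
    return 2.5e-6

def alpha_var(temp):                        # alpha grows with temperature
    return 2.5e-6 * (1.0 + 1.5e-3 * temp)

dt_const = critical_delta_t(alpha_const)
dt_var = critical_delta_t(alpha_var)
```

Because alpha increases with temperature, the temperature-dependent model buckles at a lower dT than the constant-property one, which is the direction of the correction the paper argues for.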
Two-component galaxies with flat rotation curve ; Dynamical properties of two-component galaxy models, whose stellar density distribution is described by a gamma-model while the total density distribution has a pure r^-2 profile, are presented. The orbital structure of the stellar component is described by Osipkov-Merritt anisotropy, while the dark matter halo is isotropic. After a description of minimum-halo models, the positivity of the phase-space density (the model consistency) is investigated, and necessary and sufficient conditions for consistency are obtained analytically as a function of the stellar inner density slope gamma and of the anisotropy radius. The explicit phase-space distribution function is recovered for integer values of gamma, and it is shown that while models with gamma >= 4/17 are consistent when the anisotropy radius is larger than a critical value dependent on gamma, the gamma = 0 models are unphysical even in the fully isotropic case. The Jeans equations for the stellar component are then solved analytically; in addition, the projected velocity dispersion at the center and at large radii is also obtained analytically for generic values of the anisotropy radius, and it is found to be given by remarkably simple expressions. The presented models, even though highly idealized, can be useful as a starting point for more advanced modeling of the mass distribution of elliptical galaxies in studies combining stellar dynamics and gravitational lensing.
Modeling Spatial and Temporal Dependencies of User Mobility in Wireless Mobile Networks ; Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise communities that lead to skewed location visiting preferences, and time periods that allow us to model time-dependent behaviors and periodic reappearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
Factorization of numbers into prime numbers viewed as the decay of a particle into elementary particles conserving energy ; Number theory is considered by proposing quantum mechanical models and string-like models, at zero and finite temperatures, in which the factorization of a number into prime numbers is viewed as the decay of a particle into elementary particles conserving energy. In these models, the energy of a particle labeled by an integer n is assumed (or derived) to be proportional to ln n. The one-loop vacuum amplitudes, the free energies and the partition functions at finite temperature of the string-like models are estimated and compared with the zeta functions. The SL(2,Z) modular symmetry, manifest in the free energies, is broken down to the additive symmetry of the integers, Z, after interactions are turned on. In the dynamical model behind the zeta function, the fields are labeled by prime numbers; the fields in our models, by contrast, are labeled not by prime numbers but by integers. Nevertheless, we can tell whether a number is prime or not by the decay rate, namely by whether the corresponding particle can or cannot decay through interactions conserving energy. Among the models proposed, the supersymmetric string-like model has the merit that the zero-point energies cancel and the energy levels may be stable against radiative corrections.
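The particle/number dictionary is easy to state concretely: with E(n) = ln n, a decay n -> (a, b) with a*b = n conserves energy automatically, since ln(a*b) = ln a + ln b, and a "particle" is elementary exactly when no such decay exists, i.e. when n is prime. A minimal sketch:

```python
import math

def decay_channels(n):
    """Two-body decay channels of 'particle' n: pairs (a, b) with a*b = n
    and 1 < a <= b. Energy E = ln(n) is conserved in every channel."""
    return [(a, n // a) for a in range(2, math.isqrt(n) + 1) if n % a == 0]

def is_elementary(n):
    """A particle is elementary (prime) iff it has no allowed decay."""
    return n >= 2 and not decay_channels(n)
```

Primality thus becomes a stability statement: 13 is stable, while 12 can decay, e.g. to (3, 4), with ln 12 = ln 3 + ln 4 exactly.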
On the complete classification of unitary N = 2 minimal superconformal field theories ; Aiming at a complete classification of unitary N = 2 minimal models, where the assumption of space-time supersymmetry has been dropped, it is shown that each modular invariant candidate partition function of such a theory is indeed the partition function of a minimal model. A family of models constructed via orbifoldings of either the diagonal model or of the space-time supersymmetric exceptional models demonstrates that there exists a unitary N = 2 minimal model for every one of the allowed partition functions in the list obtained from Gannon's work. Kreuzer and Schellekens' conjecture, that all simple current invariants can be obtained as orbifolds of the diagonal model even when the extra assumption of higher-genus modular invariance is dropped, is confirmed in the case of the unitary N = 2 minimal models by simple counting arguments.
Is Quantum Logic a Logic? ; It is shown that quantum logic is a logic in the very same way that classical logic is a logic. Soundness and completeness of both quantum and classical logics have been proved for novel lattice models that are not orthomodular (and therefore cannot be distributive either), as opposed to the standard lattice models, which are orthomodular and distributive for the respective logics. Hence, we cannot attribute orthomodularity to quantum logic itself, and we cannot attribute distributivity to classical logic itself. The valuations of the logics with respect to the novel models turn out to be non-numerical, and therefore truth values and truth tables cannot in general be ascribed to the propositions of the logics themselves, but only to the variables of some of their models (for example, the two-valued Boolean algebra). Logics are, first of all, axiomatic deductive systems, and if we stop short of considering their semantics (models, valuations, etc.), then quantum and classical logics have a completely equal footing, in the sense of being two deductive systems that differ from each other in a few axioms and nothing else. There is no ground for considering either of the two logics more proper than the other. The semantics of these logics belong to their models, and we show that there are bigger differences between the two aforementioned classical models than between corresponding quantum and classical models.
Updated Pre-Main Sequence tracks at low metallicities for 0.1 < M/Mo < 1.5 ; Young populations at Z < Zo are being examined to understand the role of metallicity in the first phases of stellar evolution. For this analysis it is necessary to assign masses and ages to Pre-Main Sequence (PMS) stars. While it is well known that the mass and age determination of PMS stars is strongly affected by the treatment of convection, extending any calibration to metallicities different from solar is very artificial in the absence of calibrators for the convective parameters. For solar abundance, Mixing Length Theory models have been calibrated using the results of 2D radiative-hydrodynamical models (MLT-alpha2D), which turn out to be very similar to those computed with non-grey ATLAS9 atmosphere boundary conditions and the full spectrum of turbulence (FST) convection model both in the atmosphere and in the interior (NEMO-FST models). While MLT-alpha2D models are not available for lower metallicities, we extend the NEMO-FST models to lower Z, in the educated guess that in this way we also simulate the MLT-alpha2D results at smaller Z. We present PMS models for low-mass stars from 0.1 to 1.5 Mo for metallicities [Fe/H] = -0.5, -1.0 and -2.0. The calculations include the most recent interior physics and the latest generation of non-grey atmosphere models. These evolutionary tracks and isochrones are available in electronic form at http://www.mporzio.astro.it/~tsa
Scaling factors for ab initio vibrational frequencies: comparison of uncertainty models for quantified prediction ; Bayesian model calibration is used to revisit the problem of scaling-factor calibration for the semi-empirical correction of ab initio calculations. Particular attention is devoted to the evaluation of uncertainty for scaling factors, and to its effect on the prediction of observables involving scaled properties. We argue that the linear models used for the calibration of scaling factors are generally not statistically valid, in the sense that they are not able to fit calibration data within their uncertainty limits. Uncertainty evaluation and uncertainty propagation by statistical methods from such invalid models are doomed to failure. To relieve this problem, a stochastic function is included in the model to account for model inadequacy, following the Bayesian model calibration approach. In this framework, we demonstrate that the standard calibration summary statistics, i.e. the optimal scaling factor and the root mean square, can be safely used for uncertainty propagation only when large calibration sets of precise data are used. For small datasets containing a few dozen data points, a more accurate formula is provided, which involves the scaling-factor calibration uncertainty. For measurement uncertainties larger than the model inadequacy, the problem reduces to a weighted least-squares analysis. For intermediate cases, no analytical estimators were found, and numerical Bayesian estimation of the parameters has to be used.
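The calibration-plus-inadequacy idea can be made concrete with a generic least-squares sketch: an optimal scaling factor, an rms residual standing in for model inadequacy, and a prediction uncertainty that combines both. The formulas below are the textbook ones applied to synthetic data; the paper's own estimators may differ.

```python
import math
import random

def calibrate_scale(nu_calc, nu_expt):
    """Least-squares optimal scaling factor c for nu_expt ~ c * nu_calc,
    with the rms residual (an estimate of model inadequacy) and the
    calibration uncertainty u_c of the factor itself."""
    sxx = sum(x * x for x in nu_calc)
    sxy = sum(x * y for x, y in zip(nu_calc, nu_expt))
    c = sxy / sxx
    residuals = [y - c * x for x, y in zip(nu_calc, nu_expt)]
    rms = math.sqrt(sum(r * r for r in residuals) / (len(residuals) - 1))
    u_c = rms / math.sqrt(sxx)
    return c, rms, u_c

def predict(c, rms, u_c, nu):
    """Scaled prediction with uncertainty: model inadequacy (rms) added in
    quadrature to the propagated scaling-factor uncertainty."""
    return c * nu, math.sqrt((nu * u_c) ** 2 + rms ** 2)

# synthetic calibration set: true factor 0.96, 'inadequacy' noise sd = 8
rng = random.Random(3)
calc = [500.0 + 30.0 * i for i in range(40)]
expt = [0.96 * x + rng.gauss(0.0, 8.0) for x in calc]
c_opt, rms, u_c = calibrate_scale(calc, expt)
```

The key point mirrored from the abstract: the prediction uncertainty is dominated by the rms (inadequacy) term, not by the tiny uncertainty of the scaling factor itself, so quoting c alone badly understates the error.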
Comparison of Relativistic Iron Line Models ; The analysis of broad iron line profiles in the X-ray spectra of active galactic nuclei and black hole X-ray binaries allows us to constrain the spin parameter of the black hole. We compare the constraints on the spin value for two X-ray sources with a broad iron line, MCG-6-30-15 and GX 339-4, using the present relativistic line models in XSPEC, LAOR and KYRLINE. The LAOR model has the spin value fixed at the extremal value a = 0.9982, while the KYRLINE model enables direct fitting of the spin parameter. The spin value is constrained mainly by the lower boundary of the broad line, which depends on the inner boundary of the disc emission, where the gravitational redshift is maximal. The position of the inner disc boundary is usually identified with the marginally stable orbit, which is related to the spin value; in this way the LAOR model can be used to estimate the spin value. We investigate the consistency of the LAOR and KYRLINE models. We find that the spin values evaluated by the two models agree within the general uncertainties when applied to the current data. However, the results are clearly distinguishable for higher-quality data, such as those simulated for the International X-ray Observatory (IXO) mission. We find that the LAOR model tends to overestimate the spin value and, furthermore, that it has insufficient resolution, which affects the correct determination of the high-energy edge of the broad line.
Cosmic age, Statefinder and Om diagnostics in the decaying vacuum cosmology ; As an extension of LambdaCDM, the decaying vacuum (DV) model describes the dark energy as a varying vacuum whose energy density decays linearly with the Hubble parameter at late times, rho_Lambda(t) proportional to H(t), producing the matter component. We examine the high-z cosmic age problem in the DV model, and compare it with LambdaCDM and the Yang-Mills condensate (YMC) dark energy model. Without employing a dynamical scalar field for dark energy, these three models share a similar late-time evolution. It is found that the DV model, like YMC, can accommodate the high-z quasar APM 08279+5255, and thus greatly alleviates the high-z cosmic age problem. We also calculate the Statefinder (r, s) and Om diagnostics in the model. It is found that the evolutionary trajectories of r(z) and s(z) in the DV model are similar to those in the kinessence model, but are distinguished from those in LambdaCDM and YMC. Om(z) in the DV model has a negative slope, and its height depends on the matter fraction, while YMC has a rather flat Om(z), whose magnitude depends sensitively on the coupling.
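The Om diagnostic has a simple closed form, Om(z) = (E^2(z) - 1)/((1+z)^3 - 1) with E = H/H0, and its usefulness comes from being exactly constant (equal to Omega_m) for LambdaCDM, so any z-dependence signals non-Lambda dark energy. A short sketch with an illustrative constant-w model (w = -0.9, not the DV model itself):

```python
def om_diagnostic(z, e2):
    """Om(z) = (E^2(z) - 1) / ((1 + z)^3 - 1), with e2(z) = (H/H0)^2."""
    return (e2(z) - 1.0) / ((1.0 + z) ** 3 - 1.0)

OM0 = 0.3

def e2_lcdm(z):
    """Flat LambdaCDM expansion rate squared."""
    return OM0 * (1.0 + z) ** 3 + (1.0 - OM0)

def e2_w(z):
    """Flat model with constant dark energy equation of state w = -0.9,
    so the dark energy density scales as (1+z)^(3*(1+w)) = (1+z)^0.3."""
    return OM0 * (1.0 + z) ** 3 + (1.0 - OM0) * (1.0 + z) ** 0.3
```

For LambdaCDM the diagnostic returns Omega_m at every redshift; the w = -0.9 model sits above Omega_m with a negative slope in z, the same qualitative signature attributed to the DV model above.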
Induction of Word and Phrase Alignments for Automatic Document Summarization ; Current research in automatic single-document summarization is dominated by two effective, yet naive, approaches: summarization by sentence extraction, and headline generation via bag-of-words models. While successful in some tasks, neither of these models is able to adequately capture the large set of linguistic devices utilized by humans when they produce summaries. One possible explanation for the widespread use of these models is that good techniques have been developed to extract appropriate training data for them from existing document-abstract and document-headline corpora. We believe that future progress in automatic summarization will be driven both by the development of more sophisticated, linguistically informed models, and by more effective leveraging of document-abstract corpora. In order to open the doors to simultaneously achieving both of these goals, we have developed techniques for automatically producing word-to-word and phrase-to-phrase alignments between documents and their human-written abstracts. These alignments make explicit the correspondences that exist in such document-abstract pairs, and create a potentially rich data source from which complex summarization algorithms may learn. This paper describes experiments we have carried out to analyze the ability of humans to perform such alignments, and, based on these analyses, we describe experiments for creating them automatically. Our model for the alignment task is based on an extension of the standard hidden Markov model, and learns to create alignments in a completely unsupervised fashion. We describe our model in detail and present experimental results showing that it is able to learn to reliably identify word- and phrase-level alignments in a corpus of (document, abstract) pairs.
Disorder chaos and multiple valleys in spin glasses ; We prove that the Sherrington-Kirkpatrick model of spin glasses is chaotic under small perturbations of the couplings at any temperature in the absence of an external field. The result is proved for two kinds of perturbations: (a) distorting the couplings via Ornstein-Uhlenbeck flows, and (b) replacing a small fraction of the couplings by independent copies. We further prove that the SK model exhibits multiple valleys in its energy landscape, in the weak sense that there are many states with near-minimal energy that are mutually nearly orthogonal. We show that the variance of the free energy of the SK model is unusually small at any temperature. By 'unusually small' we mean that it is much smaller than the number of sites; in other words, it beats the classical Gaussian concentration inequality, a phenomenon that we call 'superconcentration'. We prove that the bond overlap in the Edwards-Anderson model of spin glasses is not chaotic under perturbations of the couplings, even large perturbations. Lastly, we obtain sharp lower bounds on the variance of the free energy in the EA model on any bounded-degree graph, generalizing a result of Wehr and Aizenman and establishing the absence of superconcentration in this class of models. Our techniques apply to the p-spin models and the Random Field Ising Model as well, although we do not work out the details in these cases.
The dispersion of growth of matter perturbations in f(R) gravity ; We study the growth of matter density perturbations $\delta_m$ for a number of viable f(R) gravity models that satisfy both cosmological and local gravity constraints, where the Lagrangian density $f$ is a function of the Ricci scalar $R$. If the parameter $m \equiv R f_{,RR}/f_{,R}$ today is larger than the order of $10^{-6}$, linear perturbations relevant to the matter power spectrum evolve with a growth rate $s \equiv d\ln\delta_m/d\ln a$ ($a$ is the scale factor) that is larger than in the LCDM model. We find the window in the free parameter space of our models for which spatial dispersion of the growth index $\gamma_0 \equiv \gamma(z=0)$ ($z$ is the redshift) appears in the range of values $0.40 \lesssim \gamma_0 \lesssim 0.55$, as well as the region in parameter space for which there is essentially no dispersion and $\gamma_0$ converges to values around $0.40 \lesssim \gamma_0 \lesssim 0.43$. These latter values are much lower than in the LCDM model. We show that these unusual dispersed or converged spectra are present in most of the viable f(R) models with $m(z=0)$ larger than the order of $10^{-6}$. These properties will be essential in the quest for f(R) modified gravity models using future high-precision observations, and they confirm the possibility to distinguish clearly most of these models from the LCDM model.
Improved parametrization of the growth index for dark energy and DGP models ; We propose two improved parameterized forms for the growth index of the linear matter perturbations: (I) $\gamma(z) = \gamma_0 + (\gamma_\infty - \gamma_0)\frac{z}{z+1}$ and (II) $\gamma(z) = \gamma_0 + \gamma_1 \frac{z}{z+1} + (\gamma_\infty - \gamma_1 - \gamma_0)\left(\frac{z}{z+1}\right)^\alpha$. With these forms of $\gamma(z)$, we analyze the accuracy of the approximation of the growth factor $f$ by $\Omega_m^{\gamma(z)}$ for both the $w$CDM model and the DGP model. For the first improved parameterized form, we find that the approximation accuracy is enhanced at high redshifts for both kinds of models, but not at low redshifts. For the second improved parameterized form, it is found that $\Omega_m^{\gamma(z)}$ approximates the growth factor $f$ very well at all redshifts. For chosen $\alpha$, the relative error is below 0.003 for the LambdaCDM model and 0.028 for the DGP model when $\Omega_m = 0.27$. Thus, the second improved parameterized form of $\gamma(z)$ should be useful for high-precision constraints on the growth index of different models with the observational data. Moreover, we also show that $\alpha$ depends on the equation of state $w$ and the fractional energy density of matter $\Omega_{m0}$, which may help us learn more information about dark energy and DGP models.
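A short self-contained sketch (our own illustration, not the authors' code) of the approximation being assessed above: the growth rate $f = d\ln\delta_m/d\ln a$ obtained by integrating the standard flat-LambdaCDM growth equation is compared at $z=0$ against $\Omega_m^{\gamma}$ with the familiar constant $\gamma \approx 0.55$; the improved $z$-dependent parametrizations are designed to push the residual error below this baseline.

```python
import math

# Sketch (illustrative): integrate the flat-LCDM growth equation for
# f = dln(delta_m)/dln(a),
#   df/dlna = 1.5*Om(a) - f^2 - (2 - 1.5*Om(a))*f,
# and compare f(z=0) with the approximation Omega_m^gamma, gamma ~ 0.55.

def omega_m(a, om0):
    """Matter fraction Omega_m(a) in flat LambdaCDM."""
    e2 = om0 * a ** -3 + (1.0 - om0)
    return om0 * a ** -3 / e2

def growth_rate_numerical(om0, a_start=1e-3, steps=20000):
    """Euler-integrate f from deep matter domination (f = 1) to a = 1."""
    x, h = math.log(a_start), -math.log(a_start) / steps
    f = 1.0
    for _ in range(steps):
        om = omega_m(math.exp(x), om0)
        f += h * (1.5 * om - f * f - (2.0 - 1.5 * om) * f)
        x += h
    return f

om0 = 0.27
f_num = growth_rate_numerical(om0)
f_approx = omega_m(1.0, om0) ** 0.55
print(f_num, f_approx)  # the two agree at roughly the percent level
```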
Unparticle dark energy ; We examine a dark energy model where a scalar unparticle degree of freedom plays the role of quintessence. In particular, we study a model where the unparticle degree of freedom has a standard kinetic term and a simple mass potential, the evolution is slowly rolling, and the field value is of the order of the unparticle energy scale $\lambda_u$. We study how the evolution of $w$ depends on the parameters $B$ (a function of the unparticle scaling dimension $d_u$), the initial value of the field $\phi_i$ (or equivalently, $\lambda_u$) and the present matter density $\Omega_{m0}$. We use observational data from Type Ia supernovae, BAO and the CMB to constrain the model parameters and find that these models are not ruled out by the observational data. From a theoretical point of view, an unparticle dark energy model is very attractive, since unparticles, being bound states of fundamental fermions, are protected from radiative corrections. Further, the coupling of unparticles to the standard model fields can be arbitrarily suppressed by raising the fundamental energy scale $M_F$, making the unparticle dark energy model free of most of the problems that plague conventional scalar field quintessence models.
Modeling the light curve of the transient SCP06F6 ; We consider simple models based on core-collapse or pair-formation supernovae to account for the light curve of the transient SCP06F6. A radioactive-decay diffusion model provides estimates of the mass of the required radioactive nickel and the ejecta as functions of the unknown redshift. An opacity change, such as by dust formation or a recombination front, may account for the rapid decline from maximum. We particularly investigate two specific redshifts: $z=0.143$, for which Gaensicke et al. (2008) have proposed that the unidentified broad absorption features in the spectrum of SCP06F6 are C$_2$ Swan bands, and $z=0.57$, based on a crude agreement with the Ca H&K and UV iron-peak absorption features that are characteristic of supernovae of various types. The ejected masses and kinetic energies are smaller for a more tightly constrained model invoking envelope recombination. We also discuss the possibilities of circumstellar matter (CSM) shell diffusion and shock interaction models. In general, optically-thick CSM diffusion models can fit the data with the underlying energy coming from an energetic buried supernova. Models in which the CSM is of lower density, so that the shock energy is both rapidly thermalized and radiated, tend not to be self-consistent. We suggest that a model of SCP06F6 worth further exploration is one in which the redshift is $\sim 0.57$, the spectral features are Ca and iron-peak elements, and the light curve is powered by the diffusive release of a substantial amount of energy from nickel decay or from an energetic supernova buried in the ejecta of an LBV-like event.
The dynamics of pulsar glitches: Contrasting phenomenology with numerical evolutions ; In this paper we consider a simple two-fluid model for pulsar glitches. We derive the basic equations that govern the spin evolution of the system from two-fluid hydrodynamics, accounting for the vortex-mediated mutual friction force that determines the glitch rise. This leads to a simple bulk model that can be used to describe the main properties of a glitch event resulting from vortex unpinning. In order to model the long-term relaxation following the glitch, our model would require additional assumptions regarding the repinning of vortices, an issue that we only touch upon briefly. Instead, we focus on comparing the phenomenological model to results obtained from time evolutions of the linearised two-fluid equations, i.e. a hydrodynamic model for glitches. This allows us to study, for the first time, dynamics that were averaged over in the bulk model, i.e. to consider the various neutron star oscillation modes that are excited during a glitch. The hydro results are of some relevance for efforts to detect gravitational waves from glitching pulsars, although the conclusions drawn from our rather simple model are pessimistic as far as the detectability of these events is concerned.
Clustering Phase Transitions and Hysteresis: Pitfalls in Constructing Network Ensembles ; Ensembles of networks are used as null models in many applications. However, simple null models often show much less clustering than their real-world counterparts. In this paper, we study a model where clustering is enhanced by means of a fugacity term, as in the Strauss or triangle model, but where the degree sequence is strictly preserved, thus maintaining the quenched heterogeneity of nodes found in the original degree sequence. Similar models had been proposed previously in R. Milo et al., Science 298, 824 (2002). We find that our model exhibits phase transitions as the fugacity is changed. For regular graphs (identical degrees for all nodes) with degree $k > 2$ we find a single first-order transition. For all non-regular networks that we studied (including Erdos-Renyi and scale-free networks) we find multiple jumps resembling first-order transitions, together with strong hysteresis. The latter transitions are driven by the sudden emergence of "cluster cores": groups of highly interconnected nodes with higher-than-average degrees. To study these cluster cores visually, we introduce q-clique adjacency plots. We find that these cluster cores constitute distinct communities which emerge spontaneously from the triangle-generating process. Finally, we point out that cluster cores produce pitfalls when using the present and similar models as null models for strongly clustered networks, due to the very strong hysteresis, which effectively leads to broken ergodicity on realistic time scales.
An Efficient Explicit-time Description Method for Timed Model Checking ; Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard untimed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with preemptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
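A minimal sketch of the multi-unit leap idea described above (names and structure invented for illustration; real explicit-time models are written in the input language of a model checker, not in Python): instead of advancing time one unit per Tick step, leap straight to the nearest pending deadline, collapsing long idle stretches into a single transition.

```python
# Hypothetical illustration of tick leaping: timers map event names to
# absolute expiry times; each Tick jumps to the nearest deadline instead of
# incrementing time by one unit.

def run(timers, horizon):
    """timers: dict name -> absolute expiry time. Returns a (time, event) log."""
    now, log = 0, []
    while timers:
        nxt = min(timers.values())       # nearest deadline: the largest safe leap
        if nxt > horizon:
            break
        now = nxt                        # one Tick advances (nxt - now) units
        for name in [n for n, t in timers.items() if t == now]:
            log.append((now, name))
            del timers[name]
    return log

print(run({"t1": 5, "t2": 12, "t3": 12}, horizon=100))
# -> [(5, 't1'), (12, 't2'), (12, 't3')]: three events in two ticks, not 12 unit ticks
```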
Strain in Semiconductor Core-Shell Nanowires ; We compute strain distributions in core-shell nanowires of zinc blende structure. We use both continuum elasticity theory and an atomistic model, and consider both finite and infinite wires. The atomistic valence force-field (VFF) model makes only a few assumptions, but it is less computationally efficient than the finite-element (FEM) continuum elasticity model. The generic properties of the strain distributions in core-shell nanowires obtained from the two models agree well. This agreement indicates that although the calculations based on the VFF model are computationally feasible in many cases, the continuum elasticity theory suffices to describe the strain distributions in large core-shell nanowire structures. We find that the obtained strain distributions for infinite wires are excellent approximations to the strain distributions in finite wires, except in the regions close to the ends. Thus, our most computationally efficient model, the finite-element continuum elasticity model developed for infinite wires, is sufficient, unless edge effects are important. We give a comprehensive discussion of strain profiles. We find that the hydrostatic strain in the core is dominated by the axial strain component, $\varepsilon_{zz}$. We also find that although the individual strain components have a complex structure, the hydrostatic strain shows a much simpler structure. All in-plane strain components are of similar magnitude. The non-planar off-diagonal strain components $\varepsilon_{xz}$ and $\varepsilon_{yz}$ are small but non-vanishing. Thus the material is not only stretched and compressed but also warped. The models used can be extended to study wurtzite nanowire structures, as well as nanowires with multiple shells.
Relating toy models of quantum computation: comprehension, complementarity and dagger mix autonomous categories ; Toy models have been used to separate important features of quantum computation from the rich background of the standard Hilbert space model. Category theory, on the other hand, is a general tool to separate components of mathematical structures, and to analyze one layer at a time. It seems natural to combine the two approaches, and several authors have already pursued this idea. We explore the categorical comprehension construction as a tool for adding features to toy models. We use it to comprehend quantum propositions and probabilities within the basic model of finite-dimensional Hilbert spaces. We also analyze complementary quantum observables over the category of sets and relations. This leads into the realm of test spaces, a well-studied model. We present one of many possible extensions of this model, enabled by the comprehension construction. Conspicuously, all models obtained in this way carry the same categorical structure, extending the familiar dagger compact framework with the complementation operations. We call the obtained structure dagger mix autonomous, because it extends mix autonomous categories, popular in computer science, in a similar way as dagger compact structure extends compact categories. Dagger mix autonomous categories seem to arise quite naturally in quantum computation, as soon as complementarity is viewed as a part of the global structure.
Modelling Herschel observations of infrared-dark clouds in the Hi-GAL survey ; We demonstrate the use of the 3D Monte Carlo radiative transfer code PHAETHON to model infrared-dark clouds (IRDCs) that are externally illuminated by the interstellar radiation field (ISRF). These clouds are believed to be the earliest observed phase of high-mass star formation, and may be the high-mass equivalent of lower-mass prestellar cores. We model three different cases as examples of the use of the code, in which we vary the mass, density, radius, morphology and internal velocity field of the IRDC. We show the predicted output of the models at different wavelengths chosen to match the observing wavebands of Herschel and Spitzer. For the wavebands of the long-wavelength SPIRE photometer on Herschel, we also pass the model output through the SPIRE simulator to generate output images that are as close as possible to the ones that would be seen using SPIRE. We then analyse the images as if they were real observations, and compare the results of this analysis with the results of the radiative transfer models. We find that detailed radiative transfer modelling is necessary to accurately determine the physical parameters of IRDCs (e.g. dust temperature, density profile). This method is applied to study G29.55+00.18, an IRDC observed by the Herschel Infrared Galactic Plane survey (Hi-GAL), and in the future it will be used to model a larger sample of IRDCs from the same survey.
Confronting Dark Energy Models using Galaxy Cluster Number Counts ; The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat LambdaCDM model. The comparison with LambdaCDM is performed by a two-step process. Firstly, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using SNe type Ia, the CMB shift parameter and BAO. Apart from a braneworld-inspired cosmology, it is found that the derived Hubble relations of the remaining models reproduce the LambdaCDM results approximately with the same degree of statistical confidence. Secondly, in order to attempt to distinguish the different dark energy models from the expectations of LambdaCDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. As a result, we find that the predictions of 8 out of 12 dark energy models can be clearly distinguished from the LambdaCDM cosmology, while the predictions of 4 models are statistically equivalent to those of the LambdaCDM model, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that such a technique appears to be very competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.
Pluralistic Modeling of Complex Systems ; The modeling of complex systems such as ecological or socioeconomic systems can be very challenging. Although various modeling approaches exist, they are generally not compatible and mutually consistent, and empirical data often do not allow one to decide which model is the right one, the best one, or the most appropriate one. Moreover, as the recent financial and economic crisis shows, relying on a single, idealized model can be very costly. This contribution tries to shed new light on the problems that arise when complex systems are modeled. While the arguments can be transferred to many different systems, the related scientific challenges are illustrated for social, economic, and traffic systems. The contribution discusses issues that are sometimes overlooked and tries to overcome some frequent misunderstandings and controversies of the past. At the same time, it is highlighted how some longstanding scientific puzzles may be solved by considering nonlinear models of heterogeneous agents with spatiotemporal interactions. As a result of the analysis, it is concluded that a paradigm shift towards a pluralistic or possibilistic modeling approach, which integrates multiple world views, is overdue. In this connection, it is argued that it can be useful to combine many different approaches to obtain a good picture of reality, even though they may be inconsistent. Finally, profitable areas of collaboration between the socioeconomic, natural, and engineering sciences are identified.
Metabifurcation analysis of a mean field model of the cortex ; Mean field models (MFMs) of cortical tissue incorporate salient features of neural masses to model activity at the population level. One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes relevant to brain dynamics. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two families. After investigating and characterizing these, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli, distribution of model parameters, and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They emerge when the parameter space is partitioned according to bifurcation responses. This partitioning cannot be achieved by the investigation of only a small number of parameter sets, but is the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
Top-Down Multilevel Simulation of Tumor Response to Treatment in the Context of In Silico Oncology ; The aim of this chapter is to provide a brief introduction into the basics of a top-down multilevel tumor dynamics modeling method primarily based on discrete entity consideration and manipulation. The method is clinically oriented, one of its major goals being to support patient-individualized treatment optimization through experimentation in silico (on the computer). Therefore, modeling of the treatment response of clinical tumors lies at the epicenter of the approach. Macroscopic data, including i.a. anatomic and metabolic tomographic images of the tumor, provide the framework for the integration of data and mechanisms pertaining to lower and lower biocomplexity levels, such as clinically approved cellular and molecular biomarkers. The method also provides a powerful framework for the investigation of multilevel (multiscale) tumor biology in the generic investigational context. The Oncosimulator, a multiscale physics and biomedical engineering concept and construct tightly associated with the method and currently undergoing clinical adaptation, optimization and validation, is also sketched. A brief outline of the approach is provided in natural language. Two specific models of tumor response to chemotherapeutic and radiotherapeutic schemes are briefly outlined, and indicative results are presented in order to exemplify the application potential of the method. The chapter concludes with a discussion of several important aspects of the method including i.a. numerical analysis aspects, technological issues, model extensions and validation within the framework of actual running clinicogenomic trials. Future perspectives and challenges are also addressed.
A three-dimensional stochastic Model for Claim Reserving ; Within the Solvency II framework, the insurance industry requires a realistic modelling of the risk processes relevant for its business. Every insurance company should be capable of running a holistic risk management process to meet this challenge. For property and casualty (P&C) insurance companies the risk-adequate modelling of the claim reserves is a very important topic, as these liabilities determine up to 70 percent of the balance sum. We propose a three-dimensional (3D) stochastic model for claim reserving. It delivers consistently the reserve's distribution function as well as the distributions of all parts of it that are needed for accounting and controlling. The calibration methods for the model are well known from data analysis and they are applicable in a practitioner environment. We evaluate the model numerically with the help of Monte Carlo (MC) simulation. Classical actuarial reserve models are two-dimensional (2D). They lead to an estimation algorithm that is applied on a 2D matrix, the run-off triangle. Those methods (for instance the Chain Ladder or the Bornhuetter-Ferguson method) are widely used in practice nowadays and give rise to several problems: they estimate the reserves' expectation and, some of them under very restrictive assumptions, the variance. They provide no information about the tail of the reserve's distribution, which would be most important for risk calculation and for assessing the insurance company's financial stability and economic situation. Additionally, due to the projection of the claim process into a two-dimensional space, the results are very often distorted and dependent on the kind of projection. Therefore we extend the classical 2D models to a 3D space, because we find inconsistencies generated by inadequate projections into the 2D spaces.
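For contrast with the 3D stochastic model described above, a minimal chain-ladder sketch (all figures invented for illustration) shows how little the classical 2D method delivers: a single point estimate of the reserve per accident year, with no distributional information.

```python
# Illustrative chain-ladder sketch. Triangle rows are accident years, columns
# are cumulative development years; rows must be ordered from most developed
# (longest) to least developed. Figures are made up, not from the text.

def chain_ladder(triangle):
    """Complete a run-off triangle with volume-weighted development factors
    and return the point-estimate reserve per accident year."""
    n = len(triangle)
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    full = [list(row) for row in triangle]
    for row in full:
        for j in range(len(row) - 1, n - 1):
            row.append(row[j] * factors[j])   # project to ultimate
    # reserve = projected ultimate minus latest known cumulative claims
    return [row[-1] - triangle[i][-1] for i, row in enumerate(full)]

tri = [[100, 150, 165],
       [110, 168],
       [120]]
print(chain_ladder(tri))  # first accident year is fully developed: reserve 0
```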
Hunting Down the Best Model of Inflation with Bayesian Evidence ; We present the first calculation of the Bayesian evidence for different prototypical single-field inflationary scenarios, including representative classes of small-field and large-field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current best model of inflation. The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small-field models are currently preferred, while large-field models having a self-interacting potential of power $p \geq 4$ are strongly disfavoured. The class of small-field models as a whole has posterior odds of approximately 3:1 when compared with the large-field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data, such as the Planck data, and the techniques introduced here should allow us to identify conclusively the best inflationary model.
Cluster morphologies and model-independent $Y_{SZ}$ estimates from Bolocam Sunyaev-Zel'dovich images ; We present initial results from our ongoing program to image the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters at 143 GHz using Bolocam; five clusters and one blank field are described in this manuscript. The images have a resolution of 58 arcsec and a radius of 6-7 arcmin, which is approximately $r_{500}$-$2r_{500}$ for these clusters. The beam-smoothed RMS is 10 $\mu K_{CMB}$ in these images; with this sensitivity we are able to detect SZ signal to beyond $r_{500}$ in binned radial profiles. We have fit our images to beta and Nagai models, fixing spherical symmetry or allowing for ellipticity in the plane of the sky, and we find that the best-fit parameter values are in general consistent with those obtained from other X-ray and SZ data. Our data show no clear preference for the Nagai model or the beta model due to the limited spatial dynamic range of our images. However, our data show a definitive preference for elliptical models over spherical models. The weighted mean ellipticity of the five clusters is 0.27 +/- 0.03, consistent with results from X-ray data. Additionally, we obtain model-independent estimates of $Y_{500}$, the integrated SZ y-parameter over the cluster face to a radius of $r_{500}$, with systematics-dominated uncertainties of 10%. Our $Y_{500}$ values, which are free from the biases associated with model-derived $Y_{500}$ values, scale with cluster mass in a way that is consistent with both self-similar predictions and expectations of a 10% intrinsic scatter.
Interacting model of new agegraphic dark energy: observational constraints and age problem ; Many dark energy models fail to pass the cosmic age test because of the old quasar APM 08279+5255 at redshift $z=3.91$; the LambdaCDM model and holographic dark energy models are no exception. In this paper, we focus on the age problem in the new agegraphic dark energy (NADE) model. We determine the age of the universe in the NADE model by fitting the observational data, including type Ia supernovae (SNIa), baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB). We find that the NADE model also faces the challenge of the age problem caused by the old quasar APM 08279+5255. In order to overcome such a difficulty, we consider a possible interaction between dark energy and dark matter. We show that this quasar can be successfully accommodated in the interacting new agegraphic dark energy (INADE) model at the $2\sigma$ level under the current observational constraints.
Lattice Models of Nonequilibrium Bacterial Dynamics ; We study a model of self-propelled particles exhibiting run-and-tumble dynamics on a lattice. This non-Brownian diffusion is characterised by a random walk with a finite persistence length between changes of direction, and is inspired by the motion of bacteria such as E. coli. By defining a class of models with multiple species of particle and transmutation between species we can recreate such dynamics. These models admit exact analytical results whilst also forming a counterpart to previous continuum models of run-and-tumble dynamics. We solve the externally driven non-interacting and zero-range versions of the model exactly, and utilise a field-theoretic approach to derive the continuum fluctuating hydrodynamics for more general interactions. We make contact with prior approaches to run-and-tumble dynamics off lattice and determine the steady state and linear stability for a class of crowding interactions, where the jump rate decreases as density increases. In addition to its interest from the perspective of nonequilibrium statistical mechanics, this lattice model constitutes an efficient tool to simulate a class of interacting run-and-tumble models relevant to bacterial motion, so long as certain conditions that we derive are met.
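An illustrative simulation sketch (ours, not the paper's exact multi-species formulation) of run-and-tumble dynamics on a 1D periodic lattice: each particle carries a direction, hops one site in that direction per move, and "tumbles" (redraws its direction) with probability alpha per update, giving a mean persistence length of order 1/alpha between changes of direction.

```python
import random

# Toy run-and-tumble walkers on a periodic 1D lattice (random-sequential
# updates). All parameter choices are for illustration only.

def simulate(L=50, N=20, alpha=0.1, steps=10000, seed=0):
    rng = random.Random(seed)
    pos = [rng.randrange(L) for _ in range(N)]          # particle positions
    direc = [rng.choice((-1, 1)) for _ in range(N)]     # +1 right, -1 left
    for _ in range(steps):
        i = rng.randrange(N)                            # pick a random particle
        if rng.random() < alpha:
            direc[i] = rng.choice((-1, 1))              # tumble: new direction
        else:
            pos[i] = (pos[i] + direc[i]) % L            # run: hop one site
    return pos, direc

pos, direc = simulate()
print(pos[:5], direc[:5])
```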
Formation of Gyrs-old black holes in the center of galaxies within the Lemaitre-Tolman model ; In this article we present a model of the formation of a galaxy with a black hole in the center. It is based on the Lemaitre-Tolman solution and is a refinement of an earlier model. The most important improvement is the choice of the interior geometry of the black hole, allowing for the formation of Gyrs-old black holes. Other refinements are the use of an arbitrary Friedmann model as the background unperturbed initial state and the adaptation of the model to an arbitrary density profile of the galaxy. Our main interest was the M87 galaxy (NGC 4486), which hosts a supermassive black hole of mass $3.2 \times 10^9 M_\odot$. It is shown that for this particular galaxy, within the framework of our model and for the initial state being a perturbation of the LambdaCDM model, the age of the black hole can be up to 12.7 Gyrs. The dependence of the model on the chosen parameters at the time of last scattering was also studied. The maximal age of the black hole as a function of the $\Omega_m$ and $\Omega_\Lambda$ parameters for the M87 galaxy can be 3.717 or 12.708 Gyr.
Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation ; Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares-based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on weighted Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they can be recovered by far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel weighted adaptive CS algorithms for sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis.
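To make the sparsity idea concrete, here is a hedged stand-in: a plain Lasso fit of a degree-2 polynomial model via iterative soft-thresholding (ISTA). This is NOT the paper's weighted adaptive RLS-type algorithm, only a minimal illustration of how an l1 penalty concentrates the estimate on the few active polynomial terms.

```python
import random

# Sparse polynomial regression via ISTA: minimize 0.5*||y - Xw||^2 + lam*||w||_1.
# The true model below uses only the x1*x2 term, so the fit should be sparse.

def features(x):
    x1, x2 = x
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def ista_lasso(X, y, lam=0.1, step=0.01, iters=5000):
    p = len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # residuals and full gradient of the least-squares term
        r = [sum(Xi[j] * w[j] for j in range(p)) - yi for Xi, yi in zip(X, y)]
        g = [sum(X[i][j] * r[i] for i in range(len(X))) for j in range(p)]
        w = [wj - step * gj for wj, gj in zip(w, g)]           # gradient step
        w = [max(abs(wj) - step * lam, 0.0) * (1.0 if wj > 0 else -1.0)
             for wj in w]                                       # soft threshold
    return w

rng = random.Random(1)
X = [features((rng.uniform(-1, 1), rng.uniform(-1, 1))) for _ in range(60)]
y = [2.0 * row[4] for row in X]      # sparse truth: y = 2 * x1 * x2
w = ista_lasso(X, y)
print([round(wj, 2) for wj in w])    # weight mass concentrates on the x1*x2 term
```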
Bayesian design of synthetic biological systems ; Here we introduce a new design framework for synthetic biology that exploits the advantages of Bayesian model selection. We will argue that the difference between inference and design is that in the former we try to reconstruct the system that has given rise to the data that we observe, while in the latter we seek to construct the system that produces the data that we would like to observe, i.e. the desired behavior. Our approach allows us to exploit methods from Bayesian statistics, including efficient exploration of model spaces and high-dimensional parameter spaces, and the ability to rank models with respect to their ability to generate certain types of data. Bayesian model selection furthermore automatically strikes a balance between complexity and predictive or explanatory performance of mathematical models. In order to deal with the complexities of molecular systems we employ an approximate Bayesian computation scheme which only requires us to simulate from the different competing models in order to arrive at rational criteria for choosing between them. We illustrate the advantages resulting from combining the design and modeling or in-silico prototyping stages, currently seen as separate in synthetic biology, by reference to deterministic and stochastic model systems exhibiting adaptive and switch-like behavior, as well as bacterial two-component signaling systems.
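The design-by-ABC idea described above can be sketched in a few lines: specify the desired behavior as target summary statistics, draw (model, parameter) pairs from the priors, simulate, and keep draws whose summaries land close to the target. The two "designs" below are toy stand-ins, not actual biochemical circuit models, and all priors and tolerances are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Desired behaviour, encoded as target summary statistics (mean, std) of the
# system output; the values are illustrative.
target = np.array([1.0, 0.05])

def simulate(model, theta, n=200):
    """Toy stand-ins for two competing circuit designs."""
    if model == 0:
        out = rng.normal(theta, 0.5, size=n)    # graded, noisy response
    else:
        out = rng.normal(theta, 0.05, size=n)   # tight, switch-like response
    return np.array([out.mean(), out.std()])

# ABC rejection over (model, parameter) pairs: draw from the priors, simulate,
# and keep draws whose summaries land within eps of the desired behaviour.
n_draws, eps = 20000, 0.1
accepted = []
for _ in range(n_draws):
    m = int(rng.integers(2))             # uniform prior over the two designs
    theta = rng.uniform(0.0, 2.0)        # uniform prior over the parameter
    if np.linalg.norm(simulate(m, theta) - target) < eps:
        accepted.append(m)

# Approximate posterior probability that design 1 yields the desired data.
p_model1 = np.mean(np.array(accepted) == 1)
```

The posterior model probability here plays the role of the design criterion: the design most likely to produce the desired data wins, mirroring the inference-versus-design distinction the abstract draws.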
Optimal allocation patterns and optimal seed mass of a perennial plant ; We present a novel optimal allocation model for perennial plants, in which assimilates are not allocated directly to vegetative or reproductive parts but instead go first to a storage compartment from which they are then optimally redistributed. We do not restrict consideration purely to periods favourable for photosynthesis, as was done in published models of perennial species, but analyse the whole life period of a perennial plant. As a result, we obtain a general scheme of perennial plant development, of which annual and monocarpic strategies are special cases. We not only rederive predictions from several previous optimal allocation models, but also obtain more information about plants' strategies during transitions between favourable and unfavourable seasons. One of the model's predictions is that a plant can begin to re-establish vegetative tissues from storage some time before the beginning of favourable conditions, which in turn allows for better production potential when conditions improve. By means of numerical examples we show that annual plants with single or multiple reproduction periods, monocarps, evergreen perennials and polycarpic perennials can be studied successfully with the help of our unified model. Finally, we build a bridge between optimal allocation models and models describing trade-offs between the size and number of seeds: the modelled plant can control not only the distribution of allocated carbohydrates but also seed size. We provide sufficient conditions for the optimality of producing the smallest and largest seeds possible.
Global Self-Similar Protostellar Disk-Wind Models ; The magnetocentrifugal disk wind mechanism is the leading candidate for producing the large-scale, bipolar jets commonly seen in protostellar systems. I present a detailed formulation of a global, radially self-similar model for a non-ideal disk that launches a magnetocentrifugal wind. This formulation generalizes the conductivity tensor formalism previously used in radially localized disk models. The model involves matching a solution of the equations of non-ideal MHD describing matter in the disk to a solution of the equations of ideal MHD describing a cold wind. The disk solution must pass smoothly through the sonic point, the wind solution must pass smoothly through the Alfvén point, and the two solutions must match at the disk-wind interface. This model includes for the first time a self-consistent treatment of the evolution of magnetic flux threading the disk, which can change on the disk accretion timescale. The formulation presented here also allows a realistic conductivity profile for the disk to be used in a global disk-wind model for the first time. The physical constraints on the model solutions fix the distribution of the magnetic field threading the disk, the midplane accretion speed, and the midplane migration speed of flux surfaces. I present a representative solution that corresponds to a disk in the ambipolar conductivity regime with a nominal neutral-matter-magnetic-field coupling parameter that is constant along field lines, matched to a wind solution. I conclude with a brief discussion of the importance of self-similar disk-wind models in studying global processes such as dust evolution in protostellar systems.
A recursive approach to the O(n) model on random maps via nested loops ; We consider the O(n) loop model on tetravalent maps and show how to rephrase it as a model of bipartite maps without loops. This follows from a combinatorial decomposition that consists in cutting the O(n) model configurations along their loops so that each elementary piece is a map that may have arbitrary even face degrees. In the induced statistics, these maps are drawn according to a Boltzmann distribution whose parameters (the face weights) are determined by a fixed point condition. In particular, we show that the dense and dilute critical points of the O(n) model correspond to bipartite maps with large faces, i.e. whose degree distribution has a fat tail. The re-expression of the fixed point condition in terms of linear integral equations allows us to explore the phase diagram of the model. In particular, we determine this phase diagram exactly for the simplest version of the model, where the loops are rigid. Several generalizations of the model are discussed.
Probing EWSB Naturalness in Unified SUSY Models with Dark Matter ; We have studied Electroweak Symmetry Breaking EWSB fine-tuning in the context of two unified Supersymmetry scenarios, the Constrained Minimal Supersymmetric Model CMSSM and models with Non-Universal Higgs Masses NUHM, in light of current and upcoming direct detection dark matter experiments. We consider both those models that satisfy a one-sided bound on the relic density of neutralinos, Ω_χ h^2 < 0.12, and also the subset that satisfy the two-sided bound in which the relic density is within the 2σ best fit of WMAP7 + BAO + H0 data. We find that current direct detection searches for dark matter probe the least fine-tuned regions of parameter space, or equivalently those of lowest Higgs mass parameter μ, and will tend to probe progressively more and more fine-tuned models, though the trend is more pronounced in the CMSSM than in the NUHM. Additionally, we examine several subsets of model points, categorized by common mass hierarchies: M_χ0 ~ M_χ±, M_χ0 ~ M_stau, M_χ0 ~ M_stop1, the light and heavy Higgs poles, and any additional models classified as "other"; the relevance of these mass hierarchies is their connection to the preferred neutralino annihilation channel that determines the relic abundance. For each of these subsets of models we investigated the degree of fine-tuning and discoverability in current and next-generation direct detection experiments.
Automated analysis of quantitative image data using isomorphic functional mixed models, with application to proteomics data ; Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate.
Coulomb Gas and Sine-Gordon Model in Arbitrary Dimension ; The sine-Gordon SG, i.e. periodic scalar field theory is known to play an important role in d = 2 dimensions. A paradigmatic example is the topological phase transition of the vortex dynamics in superfluid films and layered superconductors, which are described by SG-type models. Periodic scalar potentials find applications in d = 4 dimensions, too. Higgs, inflaton and axion physics are examples where scalar fields naturally appear; thus, the SG model can be used instead of the usual polynomial one. The SG quantum field theory can be mapped onto the neutral Coulomb gas CG in arbitrary dimension, and the renormalization group RG study of the d-dimensional CG model was obtained in the dilute gas approximation. It signals a single phase for d > 2; however, it was shown recently that a suitable generalization of the SG model can possess a topological phase transition in d = 4 dimensions. Our goals in this work are (i) to map out the phase structure of the original SG and the equivalent neutral CG models by the functional RG method in arbitrary dimension, (ii) to compare the 3-dimensional SG and isotropic XY spin models and show that they belong to different universality classes, (iii) to study the consequences of the findings for Higgs, inflaton and axion models and for the topological phase transition in higher dimensions.
Three-dimensional radiative transfer modeling of AGN dusty tori as a clumpy two-phase medium ; We investigate the emission of active galactic nuclei AGN dusty tori in the infrared domain. Following theoretical predictions coming from hydrodynamical simulations, we model the dusty torus as a 3D two-phase medium with high-density clumps and a low-density medium filling the space between the clumps. Spectral energy distributions SED and images of the torus at different wavelengths are obtained using the 3D Monte Carlo radiative transfer code SKIRT. Our approach to generating the clumpy structure allows us to model tori with single clumps, complex structures of merged clumps, or an interconnected sponge-like structure. A corresponding set of clumps-only models and models with smooth dust distribution is calculated for comparison. We found that dust distribution, optical depth, clump size and their actual arrangement in the innermost region all have an impact on the shape of the near- and mid-infrared SED. The 10 micron silicate feature can be suppressed for some parameters, but models with smooth dust distribution are also able to produce a wide range of silicate feature strengths. Finally, we find that having the dust distributed in a two-phase medium might offer a natural solution to the lack of emission in the near-infrared, compared to observed data, which affects clumpy models currently available in the literature.
Gossip Learning with Linear Models on Fully Distributed Data ; Machine learning over fully distributed data poses an important problem in peer-to-peer P2P applications. In this model we have one data record at each network node, but without the possibility to move raw data due to privacy considerations. For example, user profiles, ratings, history, or sensor readings can represent this case. This problem is difficult, because there is no possibility to learn local models, the system model offers almost no guarantees for reliability, yet the communication cost needs to be kept low. Here we propose gossip learning, a generic approach that is based on multiple models taking random walks over the network in parallel, while applying an online learning algorithm to improve themselves, and getting combined via ensemble learning methods. We present an instantiation of this approach for the case of classification with linear models. Our main contribution is an ensemble learning method which, through the continuous combination of the models in the network, implements a virtual weighted voting mechanism over an exponential number of models at practically no extra cost as compared to independent random walks. We prove the convergence of the method theoretically, and perform extensive experiments on benchmark datasets. Our experimental analysis demonstrates the performance and robustness of the proposed approach.
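The scheme described above, models random-walking over nodes that each hold one record, improving via online updates and combining via averaging, can be sketched in a small simulation. This is a simplified stand-in, not the paper's exact protocol: the merge rule is a plain average with a per-node model cache, and the network, data, and learning rate are assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fully distributed data: each of the 200 nodes holds exactly one record.
n_nodes = 200
X = rng.standard_normal((n_nodes, 2))
w_ref = np.array([2.0, -1.0])           # hidden separator used to label data
y = np.where(X @ w_ref >= 0, 1.0, -1.0)

def online_update(w, x, yi, lr=0.1):
    """One stochastic gradient step on the logistic loss for one record."""
    return w + lr * yi * x / (1.0 + np.exp(yi * (x @ w)))

# Gossip learning: several models walk the network at random; at each hop a
# model is averaged with the model cached at the node (a cheap stand-in for
# the paper's ensemble/voting mechanism) and updated on the local record.
n_models, n_hops = 10, 3000
walkers = [np.zeros(2) for _ in range(n_models)]
cache = [np.zeros(2) for _ in range(n_nodes)]

for _ in range(n_hops):
    for k in range(n_models):
        i = int(rng.integers(n_nodes))          # random-walk hop to node i
        w = online_update(0.5 * (walkers[k] + cache[i]), X[i], y[i])
        walkers[k] = w
        cache[i] = w.copy()

w_avg = np.mean(walkers, axis=0)
accuracy = float(np.mean(np.where(X @ w_avg >= 0, 1.0, -1.0) == y))
```

No raw record ever leaves its node; only model parameters travel, which is the privacy property that motivates the setting.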
Constraints on Λ(t)CDM models as holographic and agegraphic dark energy with the observational Hubble parameter data ; The newly released observational H(z) data OHD are used to constrain Λ(t)CDM models as holographic and agegraphic dark energy. By the use of the length scale and time scale as the IR cutoff, including the Hubble horizon HH, future event horizon FEH, age of the universe AU, and conformal time CT, we obtain four different Λ(t)CDM models which can each describe the present cosmological acceleration. In order to compare such Λ(t)CDM models with the standard ΛCDM model, we use the information criteria IC, the Om(z) diagnostic, and the statefinder diagnostic to measure the deviations. Furthermore, by simulating a larger Hubble parameter data sample in the redshift range 0.1 < z < 2.0, we get improved constraints and a more sufficient comparison. We show that OHD is not only able to play almost the same role in constraining cosmological parameters as SNe Ia does but also provides an effective measurement of the deviation of the DE models from the standard ΛCDM model. In the holographic and agegraphic scenarios, the results indicate that the FEH is preferable to the HH scenario. However, both time scenarios show better approximations to the ΛCDM model than the length scenarios.
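The basic mechanics of constraining an expansion history with H(z) data, as used above, is a chi-square fit. A minimal sketch for the standard flat ΛCDM case (not the paper's Λ(t)CDM variants) with mock data: the fiducial values, per-point uncertainty, and grid are all assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(8)

H0, Om_true = 70.0, 0.27        # assumed fiducial values (km/s/Mpc and Omega_m)

def hubble(z, Om):
    """Flat LambdaCDM expansion rate H(z)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

# Mock OHD sample in the redshift range used for the forecast, 0.1 < z < 2.0.
z_obs = np.linspace(0.1, 2.0, 20)
sigma = 5.0                     # per-point uncertainty (assumed)
H_obs = hubble(z_obs, Om_true) + sigma * rng.standard_normal(z_obs.size)

# Grid-based chi-square minimization over Omega_m (H0 held fixed for brevity).
grid = np.linspace(0.05, 0.6, 1101)
chi2 = np.array([np.sum(((H_obs - hubble(z_obs, Om)) / sigma) ** 2) for Om in grid])
Om_best = float(grid[np.argmin(chi2)])
```

Replacing `hubble` with each model's H(z) and comparing the minimum chi-square values (plus a complexity penalty) is what the information criteria in the abstract formalize.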
Agent-Based Modeling of Intracellular Transport ; We develop an agent-based model of the motion and pattern formation of vesicles. These intracellular particles can be found in four different modes of undirected and directed motion and can fuse with other vesicles. While the size of vesicles follows a log-normal distribution that changes over time due to fusion processes, their spatial distribution gives rise to distinct patterns. Their occurrence depends on the concentration of proteins which are synthesized based on the transcriptional activities of some genes. Hence, differences in these spatio-temporal vesicle patterns allow indirect conclusions about the unknown impact of these genes. By means of agent-based computer simulations we are able to reproduce such patterns on real temporal and spatial scales. Our modeling approach is based on Brownian agents with an internal degree of freedom, θ, that represents the different modes of motion. Conditions inside the cell are modeled by an effective potential that differs for agents depending on their value of θ. The agents' motion in this effective potential is modeled by an overdamped Langevin equation, changes of θ are modeled as stochastic transitions with values obtained from experiments, and fusion events are modeled as space-dependent stochastic transitions. Our results for the spatio-temporal vesicle patterns can be used for a statistical comparison with experiments. We also derive hypotheses of how the silencing of some genes may affect intracellular transport, and point to generalizations of the model.
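The modeling ingredients named above, overdamped Langevin motion plus stochastic transitions of an internal mode variable θ, can be illustrated with a stripped-down two-mode version (the paper uses four modes, an effective potential, and fusion; those are omitted here). All rates and coefficients are assumed demo values, not experimental ones.

```python
import numpy as np

rng = np.random.default_rng(3)

# Brownian agents with an internal degree of freedom theta in {0, 1}:
# theta = 0 is undirected diffusion, theta = 1 is directed transport along +x.
n_agents, n_steps, dt = 500, 1000, 0.01
D = 0.1                  # diffusion coefficient (assumed)
v_drift = 1.0            # transport speed in the directed mode (assumed)
k_on, k_off = 0.5, 0.5   # stochastic mode-switching rates (assumed)

pos = np.zeros((n_agents, 2))
theta = np.zeros(n_agents, dtype=int)

for _ in range(n_steps):
    # Overdamped Langevin step: deterministic drift only in the directed
    # mode, plus Gaussian noise in both modes.
    drift = np.zeros((n_agents, 2))
    drift[theta == 1, 0] = v_drift
    pos += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal((n_agents, 2))
    # Changes of theta are stochastic transitions with the rates above.
    to_on = (theta == 0) & (rng.random(n_agents) < k_on * dt)
    to_off = (theta == 1) & (rng.random(n_agents) < k_off * dt)
    theta[to_on] = 1
    theta[to_off] = 0

mean_x = float(pos[:, 0].mean())   # net transport builds up along +x
mean_y = float(pos[:, 1].mean())   # no net transport perpendicular to it
```

The ensemble drifts along the transport direction at roughly v_drift times the fraction of time spent in the directed mode, while the perpendicular coordinate stays diffusive, the kind of statistic one would compare against vesicle-tracking data.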
Classical and quantum mechanics of the nonrelativistic Snyder model in curved space ; The Snyder-de Sitter SdS model is a generalization of the Snyder model to a spacetime background of constant curvature. It is an example of noncommutative spacetime admitting two fundamental scales besides the speed of light, and is invariant under the action of the de Sitter group. Here we consider its nonrelativistic counterpart, i.e. the Snyder model restricted to a three-dimensional sphere, and the related model obtained by considering the anti-Snyder model on a pseudosphere, which we call anti-Snyder-de Sitter aSdS. By means of a nonlinear transformation relating the SdS phase space variables to canonical ones, we are able to investigate the classical and quantum mechanics of a free particle and of an oscillator in this framework. As in their flat-space limit, the SdS and aSdS models exhibit rather different properties. In the SdS case, a lower bound on the localization in position and momentum space arises, which is not present in the aSdS model. In the aSdS case, instead, a specific combination of position and momentum coordinates cannot exceed a constant value. We explicitly solve the classical and quantum equations for the motion of the free particle and of the harmonic oscillator. In both the SdS and aSdS cases, the frequency of the harmonic oscillator acquires a dependence on the energy.
The 3-Dimensional Distribution of Dust in NGC 891 ; We produce three-dimensional Monte Carlo radiative transfer models of the edge-on spiral galaxy NGC 891, a fast-rotating galaxy thought to be an analogue of the Milky Way. The models contain realistic spiral arms and a fractal distribution of clumpy dust. We fit our models to Hubble Space Telescope images corresponding to the B and I bands, using shapelet analysis and a genetic algorithm to generate 30 statistically best-fitting models. These models have a strong preference for spirality and clumpiness, with average face-on attenuation decreasing from 0.24±0.16 to 0.03±0.03 mag in the BI band between 0.5 and 2 radial scale lengths. Most of the attenuation comes from small high-density clumps with low (~10%) filling factors. The fraction of dust in clumps is broadly consistent with results from fitting NGC 891's spectral energy distribution. Because of scattering effects and the intermixed nature of the dust and starlight, attenuation is smaller and less wavelength-dependent than the integrated dust column density. Our clumpy models typically have higher attenuation at low inclinations than previous radiative transfer models using smooth distributions of stars and dust, but similar attenuation at inclinations above 70 degrees. At all inclinations most clumpy models have less attenuation than expected from previous estimates based on minimizing scatter in the Tully-Fisher relation. Mass-to-light ratios are higher and the intrinsic scatter in the Tully-Fisher relation is larger than previously expected for galaxies similar to NGC 891. The attenuation curve changes as a function of inclination, with R(B, B−I) = A_B/E(B−I) increasing by 0.75 from face-on to near-edge-on orientations.
Hints of Standard Model Higgs Boson at the LHC and Light Dark Matter Searches ; The most recent results of searches at the LHC for the Higgs boson h have turned up possible hints of such a particle with mass m_h about 125 GeV, consistent with standard model SM expectations. This has many potential implications for the SM and beyond. We consider some of them in the context of a simple Higgs-portal dark matter DM model, the SM plus a real gauge-singlet scalar field D as the DM candidate, and a couple of its variations. In the simplest model with one Higgs doublet and three or four generations of fermions, for D mass m_D < m_h/2 the invisible decay h → DD tends to have a substantial branching ratio. If future LHC data confirm the preliminary Higgs indications, m_D will have to exceed m_h/2. To keep the DM lighter than m_h/2, one will need to extend the model and also satisfy constraints from DM direct searches. The latter can be accommodated if the model provides sizable isospin violation in the DM-nucleon interactions. We explore this in a two-Higgs-doublet model combined with the scalar field D. This model can offer a 125 GeV SM-like Higgs and a light DM candidate having isospin-violating interactions with nucleons at roughly the required level, albeit with some degree of fine-tuning.
Outlook on the Higgs particles, masses and physical bounds in the Two-Higgs-Doublet Model ; The Higgs sector of models beyond the standard model requires special attention and study since, through it, a natural explanation can be offered for current questions such as the big differences in the values of the masses of the quarks hierarchy of masses, the possible generation of flavor-changing neutral currents inspired by the evidence of neutrino oscillations, and the possibility that some models, with more complicated symmetries than those of the standard model, have a non-standard low-energy limit. The simplest extension of the standard model, known as the two-Higgs-doublet model 2HDM, involves a second Higgs doublet. The 2HDM predicts the existence of five scalar particles: three neutral A0, h0, H0 and two charged H±. The purpose of this work is to determine in a natural and easy way the mass eigenstates and masses of these five particles, in terms of the parameters λ_i introduced in the minimal extended Higgs sector potential that preserves the CP symmetry. We discuss several cases of Higgs mixings and the one in which two neutral states are degenerate. As the values of the quartic interactions between the scalar doublets are not theoretically determined, it is of great interest to explore and constrain their values; therefore we analyze the stability and triviality bounds using the Lagrange multipliers method and by numerically solving the renormalization group equations. Through these results one can establish the region of validity of the model under several circumstances considered in the literature.
Lasso-type estimators for Semiparametric Nonlinear Mixed-Effects Models Estimation ; Parametric nonlinear mixed-effects models NLMEs are now widely used in biometrical studies, especially in pharmacokinetics research and HIV dynamics models, due to, among other aspects, the computational advances achieved during the last years. However, this kind of model may not be flexible enough for complex longitudinal data analysis. Semiparametric NLMEs SNMMs have been proposed by Ke and Wang 2001. These models are a good compromise and retain nice features of both parametric and nonparametric models, resulting in more flexible models than standard parametric NLMEs. However, SNMMs are complex models for which estimation still remains a challenge. The estimation procedure proposed by Ke and Wang 2001 is based on a combination of log-likelihood approximation methods for parametric estimation and smoothing splines techniques for nonparametric estimation. In this work, we propose new estimation strategies for SNMMs. On the one hand, we use the Stochastic Approximation version of the EM algorithm Delyon et al., 1999 to obtain exact ML and REML estimates of the fixed effects and variance components. On the other hand, we propose a Lasso-type method to estimate the unknown nonlinear function. We derive oracle inequalities for this nonparametric estimator. We combine the two approaches in a general estimation procedure that we illustrate with simulated and real data.
Proceedings 7th Workshop on Model-Based Testing ; This volume contains the proceedings of the Seventh Workshop on Model-Based Testing MBT 2012, which was held on 25 March 2012 in Tallinn, Estonia, as a satellite event of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012. The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but it is also capable of finding errors in the models themselves. The first MBT workshop was held in 2004 in Barcelona. At that time MBT had already become a hot topic, but the MBT workshop was the first event devoted mostly to this topic. Since then the area has generated enormous scientific interest, and today there are several specialized workshops and broader conferences on software and hardware design and quality assurance covering model-based testing. MBT has become one of the most powerful system analysis tools, and one of the latest related hot topics is applying MBT to security analysis and testing. The MBT workshop tries to keep up with current trends. In 2012 an industrial paper category was added to the program, and two industrial papers were accepted by the program committee.
Residual analysis methods for space-time point processes with applications to earthquake forecast models in California ; Modern, powerful techniques for the residual analysis of spatial-temporal point process models are reviewed and compared. These methods are applied to California earthquake forecast models used in the Collaboratory for the Study of Earthquake Predictability CSEP. Assessments of these earthquake forecasting models have previously been performed using simple, low-power means such as the L-test and N-test. We instead propose residual methods based on rescaling, thinning, superposition, weighted K-functions and deviance residuals. Rescaled residuals can be useful for assessing the overall fit of a model, but as with thinning and superposition, rescaling is generally impractical when the conditional intensity λ is volatile. While residual thinning and superposition may be useful for identifying spatial locations where a model fits poorly, these methods have limited power when the modeled conditional intensity assumes extremely low or high values somewhere in the observation region, and this is commonly the case for earthquake forecasting models. A recently proposed hybrid method of thinning and superposition, called super-thinning, is a more powerful alternative.
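The thinning residuals discussed above work as follows: retain each observed point with probability λ_min/λ(point); if the fitted intensity is correct, the surviving points form a homogeneous Poisson process. A minimal spatial sketch with a mild (non-volatile) intensity, where the method behaves well; the intensity function and domain are assumed demo choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an inhomogeneous Poisson process on the unit square with intensity
# lam(x, y) = 100 (1 + x), via thinning of a dominating homogeneous process.
lam_max = 200.0
n_dom = rng.poisson(lam_max)
dom = rng.random((n_dom, 2))
keep = rng.random(n_dom) < 100.0 * (1.0 + dom[:, 0]) / lam_max
pts = dom[keep]

# Residual thinning under the fitted (here: correct) model: retain each
# observed point with probability lam_min / lam(point). If the model fits,
# the thinned points form a homogeneous Poisson process.
lam_fit = 100.0 * (1.0 + pts[:, 0])
lam_min = lam_fit.min()
thinned = pts[rng.random(len(pts)) < lam_min / lam_fit]

# Crude homogeneity check: point counts in the left and right half-squares
# should be comparable after thinning (they are not before it).
left = int(np.sum(thinned[:, 0] < 0.5))
right = len(thinned) - left
```

The abstract's caveat is visible in the formula: when λ ranges over orders of magnitude, as in earthquake intensities, λ_min/λ is tiny almost everywhere, leaving too few residual points to test, which is why super-thinning was introduced.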
Critical behavior of the geometrical spin clusters and interfaces in the two-dimensional thermalized bond Ising model ; The fractal dimensions and the percolation exponents of the geometrical spin clusters of like sign at criticality are obtained numerically for an Ising model with temperature-dependent annealed bond dilution, also known as the thermalized bond Ising model TBIM, in two dimensions. For this purpose, a modified Wolff single-cluster Monte Carlo simulation is used to generate equilibrium spin configurations on square lattices in the critical region. A tie-breaking rule is employed to identify non-intersecting spin cluster boundaries along the edges of the dual lattice. The values obtained for the fractal dimensions of the spanning geometrical clusters, D_c, and their interfaces, D_I, are in perfect agreement with those reported for the standard two-dimensional ferromagnetic Ising model. Furthermore, the variance of the winding angles results in a diffusivity κ = 3 for the two-dimensional thermalized bond Ising model, thus placing it in the universality class of the regular Ising model. A finite-size scaling analysis of the largest geometrical clusters results in a reliable estimation of the critical percolation exponents for the geometrical clusters in the limit of an infinite lattice size. The percolation exponents thus obtained are also found to be consistent with those reported for the regular Ising model. These consistencies are explained in terms of the Fisher renormalization relations, which express the thermodynamic critical exponents of systems with annealed bond dilution in terms of those of the regular model system.
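For readers unfamiliar with the simulation method named above, here is a minimal Wolff single-cluster update for the standard 2D ferromagnetic Ising model at its exact critical temperature (the paper's modified version additionally handles the thermalized bond dilution, which this sketch omits). Lattice size and step count are assumed demo values.

```python
import numpy as np

rng = np.random.default_rng(5)

side = 16                                      # linear lattice size
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))         # exact critical temperature (J = 1)
p_add = 1.0 - np.exp(-2.0 / T_c)               # Wolff bond-activation probability

spins = rng.choice(np.array([-1, 1]), size=(side, side))

def wolff_step(spins):
    """Grow one Wolff cluster from a random seed, flip it, return its size."""
    n = spins.shape[0]
    i, j = int(rng.integers(n)), int(rng.integers(n))
    seed = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % n, (y + dy) % n   # periodic boundaries
            if (nx, ny) not in cluster and spins[nx, ny] == seed \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] = -spins[x, y]
    return len(cluster)

sizes = [wolff_step(spins) for _ in range(2000)]
mean_cluster = float(np.mean(sizes))
```

Flipping whole clusters rather than single spins defeats the critical slowing down that plagues local updates near T_c, which is why cluster algorithms are the standard choice for measuring fractal dimensions of critical clusters.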
Ghost dark energy in f(R) model of gravity ; We study a correspondence between the f(R) model of gravity and a phenomenological kind of dark energy DE known as QCD ghost dark energy. Since this kind of dark energy is not stable in the context of Einsteinian gravity and the Brans-Dicke model of gravity, we consider two kinds of correspondence between modified gravity and DE. By studying the dynamical evolution of the model and finding relevant quantities such as the equation of state parameter, deceleration parameter, and dimensionless density parameter, we show that the model can describe the present Universe and that the EoS parameter can cross the phantom divide line without the need for any kinetic energy with negative sign. Furthermore, by obtaining the adiabatic squared sound speed of the model for different cases of interaction, we show that this model is stable. Finally, we fit this model to supernova observational data in the non-interacting case and find the best values of the parameters at the 1σ confidence interval as f_0 = 0.958 (+0.07, −0.25), β = 0.256 (+0.2, −0.1), and Ω_m0 = 0.23 (+0.3, −0.15). These best-fit values show that the dark energy equation of state parameter, ω_d0, can cross the phantom divide line at the present time.
Modelling the emergence of spatial patterns of economic activity ; Understanding how spatial configurations of economic activity emerge is important when formulating spatial planning and economic policy. A simple model was proposed by Simon, who assumed that firms grow at a rate proportional to their size, and that new divisions of firms with certain probabilities relocate to other firms or to new centres of economic activity. Simon's model produces realistic results in the sense that the sizes of economic centres follow a Zipf distribution, which is also observed in reality. It lacks realism in the sense that mechanisms such as cluster formation, congestion defined as an overly high density of the same activities, and dependence on the spatial distribution of external parties clients, labour markets are ignored. The present paper proposes an extension of the Simon model that includes both centripetal and centrifugal forces. Centripetal forces are included in the sense that firm divisions are more likely to settle in locations that offer higher accessibility to other firms. Centrifugal forces are represented by an aversion to a too-high density of activities in the potential location. The model is implemented as an agent-based simulation model in a simplified spatial setting. By running both the Simon model and the extended model, comparisons are made with respect to their effects on spatial configurations. To this end a series of metrics are used, including the rank-size distribution and indices of the degree of clustering and concentration.
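The baseline Simon model described above is easy to simulate and to check against the Zipf claim: each new division founds a new centre with probability alpha, otherwise it joins an existing centre with probability proportional to its size. A minimal sketch (the spatial centripetal/centrifugal extension of the paper is not included); alpha and the number of steps are assumed demo values.

```python
import numpy as np

rng = np.random.default_rng(6)

alpha = 0.05                 # probability that a division founds a new centre
centre_sizes = [1]           # start with a single centre holding one division
unit_owner = [0]             # unit_owner[i] = index of the centre of unit i

for _ in range(50000):
    if rng.random() < alpha:
        centre_sizes.append(1)                  # new centre of activity
        unit_owner.append(len(centre_sizes) - 1)
    else:
        # Picking a uniformly random existing unit and joining its centre is
        # equivalent to choosing a centre proportionally to its size.
        k = unit_owner[int(rng.integers(len(unit_owner)))]
        centre_sizes[k] += 1
        unit_owner.append(k)

sizes = np.sort(np.array(centre_sizes))[::-1]
ranks = np.arange(1, sizes.size + 1)
# Zipf's law predicts log(size) vs log(rank) is roughly linear, slope near -1.
slope = float(np.polyfit(np.log(ranks[:100]), np.log(sizes[:100]), 1)[0])
```

The `unit_owner` trick keeps each step O(1): size-proportional attachment is implemented by sampling a past division uniformly, so no size-weighted draw over all centres is needed.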
JAK-STAT signalling: an executable model assembled from molecule-centred modules, demonstrating a module-oriented database concept for systems and synthetic biology ; We describe a molecule-oriented modelling approach based on a collection of Petri net models organized in the form of modules into a prototype database accessible through a web interface. The JAK-STAT signalling pathway, with the extensive crosstalk of its components, is selected as a case study. Each Petri net module represents the reactions of an individual protein with its specific interaction partners. These Petri net modules are graphically displayed, can be executed individually, and allow the automatic composition into coherent models containing an arbitrary number of molecular species chosen ad hoc by the user. Each module contains metadata for documentation purposes and can be extended to a wiki-like mini-review. The database can manage multiple versions of each module. It supports the curation, documentation, version control, and update of individual modules and the subsequent automatic composition of complex models, without requiring mathematical skills. Modules can be semi-automatically recombined according to user-defined scenarios, e.g. gene expression patterns in given cell types, under certain physiological conditions, or states of disease. Adding a localisation component to the module database would allow the simulation of models with spatial resolution in the form of coloured Petri nets. As a synthetic biology application we propose the fully automated generation of synthetic or synthetically rewired network models by composition of metadata-guided, automatically modified modules representing altered protein binding sites. Petri nets composed from modules can be executed as ODE systems, stochastic, hybrid, or merely qualitative models and exported in SBML format.
Z2 Gauge Neural Network and its Phase Structure ; We study the general phase structure of neural-network models that have Z2 local gauge symmetry. The Z2 spin variable S_i = ±1 on the ith site describes a neuron state, as in the Hopfield model, and the Z2 gauge variable J_ij = ±1 describes the state of the synaptic connection between the jth and ith neurons. The gauge symmetry allows for a self-coupling energy among the J_ij's, such as J_ij J_jk J_ki, which describes reverberation of signals. Explicitly, we consider three models: (I) an annealed model with full and partial connections of J_ij; (II) a quenched model with full connections, where J_ij is treated as a slow quenched variable; and (III) a quenched three-dimensional lattice model with nearest-neighbor connections. By numerical simulations, we examine their phase structures, paying attention to the effect of the reverberation term, and compare them with each other and with the annealed 3D lattice model studied previously. By noting the dependence of thermodynamic quantities upon the total number of sites and the connectivity among sites, we obtain a coherent interpretation of these results. Among other things, we find that the Higgs phase of the annealed model is separated into two stable spin-glass phases in the quenched cases (II) and (III).
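A minimal sketch of the kind of energy function described above, with illustrative couplings beta and gamma (the normalisations and fully connected geometry are our assumptions, not the paper's exact definitions): a Hopfield-like signal term over pairs plus the gauge-invariant reverberation term over triangles. The transformation at the end exercises the Z2 gauge symmetry S_i -> eps_i S_i, J_ij -> eps_i eps_j J_ij under which the energy is unchanged:

```python
import itertools

def gauge_energy(S, J, beta, gamma):
    """Illustrative Z2 gauge neural-network energy: a Hopfield-like
    term -beta * J_ij S_i S_j over pairs, plus the 'reverberation'
    term -gamma * J_ij J_jk J_ki over triangles (J symmetric)."""
    n = len(S)
    e = 0.0
    for i, j in itertools.combinations(range(n), 2):
        e -= beta * J[i][j] * S[i] * S[j]
    for i, j, k in itertools.combinations(range(n), 3):
        e -= gamma * J[i][j] * J[j][k] * J[i][k]
    return e

def gauge_transform(S, J, eps):
    """Apply the local Z2 gauge transformation with signs eps_i = +-1."""
    n = len(S)
    S2 = [eps[i] * S[i] for i in range(n)]
    J2 = [[eps[i] * eps[j] * J[i][j] for j in range(n)] for i in range(n)]
    return S2, J2
```

Because every eps_i enters each energy term an even number of times, the energy computed before and after the transformation agrees exactly.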
Active Model Selection ; Classical learning assumes the learner is given a labeled data sample, from which it learns a model. The field of Active Learning deals with the situation where the learner begins not with a training sample, but instead with resources that it can use to obtain information to help identify the optimal model. To better understand this task, this paper presents and analyses the simplified budgeted active model selection version, which captures the pure exploration aspect of many active learning problems in a clean and simple problem formulation. Here the learner can use a fixed budget of model probes (where each probe evaluates the specified model on a random indistinguishable instance) to identify which of a given set of possible models has the highest expected accuracy. Our goal is a policy that sequentially determines which model to probe next, based on the information observed so far. We present a formal description of this task, and show that it is NP-hard in general. We then investigate a number of algorithms for this task, including several existing ones (e.g., Round-Robin, Interval Estimation, Gittins) as well as some novel ones (e.g., Biased-Robin), describing first their approximation properties and then their empirical performance on various problem instances. We observe empirically that the simple Biased-Robin algorithm significantly outperforms the other algorithms in the case of identical costs and priors.
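The Biased-Robin policy mentioned above can be sketched roughly as follows: keep probing the incumbent model while probes succeed, and advance to the next model on a failure. The toy oracle and the accuracy values are our own stand-ins for real model probes, not the paper's experimental setup:

```python
import random

def biased_robin(n_models, budget, oracle, seed=0):
    """Biased-Robin sketch: probe the incumbent model; on a failed
    probe, move on to the next model (cyclically). After the budget
    is spent, report the model with the best empirical accuracy.
    oracle(i, rng) -> True iff model i answers a fresh probe correctly."""
    rng = random.Random(seed)
    wins = [0] * n_models
    trials = [0] * n_models
    i = 0
    for _ in range(budget):
        trials[i] += 1
        if oracle(i, rng):
            wins[i] += 1
        else:
            i = (i + 1) % n_models   # failure: advance to the next model
    return max(range(n_models),
               key=lambda j: wins[j] / trials[j] if trials[j] else 0.0)

# toy setup: model j succeeds with (hidden) probability accs[j]
accs = [0.6, 0.9, 0.7]
oracle = lambda j, rng: rng.random() < accs[j]
best = biased_robin(len(accs), budget=3000, oracle=oracle)
```

The policy naturally concentrates most of its probes on the model that fails least often, which is why its empirical-accuracy ranking tends to identify the best model.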
Finite-sample equivalence in statistical models for presence-only data ; Statistical modeling of presence-only data has attracted much recent attention in the ecological literature, leading to a proliferation of methods, including the inhomogeneous Poisson process (IPP) model, maximum entropy (Maxent) modeling of species distributions, and logistic regression models. Several recent articles have shown the close relationships between these methods. We explain why the IPP intensity function is a more natural object of inference in presence-only studies than occurrence probability (which is only defined with reference to quadrat size), and why presence-only data allow estimation of only relative, and not absolute, intensity of species occurrence. All three of the above techniques amount to parametric density estimation under the same exponential family model (in the case of the IPP, the fitted density is multiplied by the number of presence records to obtain a fitted intensity). We show that IPP and Maxent give exactly the same estimate for this density, but logistic regression in general yields a different estimate in finite samples. When the model is misspecified, as it practically always is, logistic regression and the IPP may have substantially different asymptotic limits with large data sets. We propose "infinitely weighted logistic regression", which is exactly equivalent to the IPP in finite samples. Consequently, many already-implemented methods extending logistic regression can also extend the Maxent and IPP models in directly analogous ways using this technique.
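The objects compared above can be sketched in formulas; the notation (log-linear intensity with parameters alpha and beta, background points z_j, weight W) is ours and is only meant to convey the construction, not the paper's exact derivation:

```latex
% Log-linear IPP intensity over a domain D, fit from presence points x_1,...,x_n:
\lambda(x) = e^{\alpha + \beta^\top x}, \qquad
\ell_{\mathrm{IPP}}(\alpha, \beta)
  = \sum_{i=1}^{n} \log \lambda(x_i) \;-\; \int_{D} \lambda(x)\, dx ,
% with the integral approximated numerically by a sum over background
% points z_1,...,z_m.
%
% Infinitely weighted logistic regression: label the presence records y = 1
% and the background points y = 0, assign each background point a case
% weight W, and let W -> infinity; the slope estimate \hat\beta of the
% weighted logistic fit then coincides with the IPP maximum-likelihood
% estimate in the finite sample.
```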
String Derived Exophobic SU(6)xSU(2) GUTs ; With the apparent discovery of the Higgs boson, the Standard Model has been confirmed as the theory accounting for all subatomic phenomena. This observation lends further credence to perturbative unification in Grand Unified Theories (GUTs) and string theories. The free fermionic formalism has provided fertile ground for the construction of quasi-realistic heterotic-string models, which correspond to toroidal Z2xZ2 orbifold compactifications. In this paper we study a new class of heterotic-string models in which the GUT group is SU(6)xSU(2) at the string level. We use our recently developed fishing algorithm to extract an example of a three-generation SU(6)xSU(2) GUT model. We explore the phenomenology of the model and show that it contains the required symmetry-breaking Higgs representations. We show that the model admits flat directions that produce a Yukawa coupling for a single family. The novel feature of the SU(6)xSU(2) string GUT models is that they produce an additional family-universal, anomaly-free U(1) symmetry that may remain unbroken below the string scale. The massless spectrum of the model is free of exotic states.
An Analytic Radiative-Convective Model for Planetary Atmospheres ; We present an analytic 1D radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including Hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to those of more complex models.
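As a flavour of the kind of analytic expression such gray two-stream models yield, the classic Eddington radiative-equilibrium profile below is a simplified stand-in for (not a reproduction of) the paper's expressions: T(tau)^4 = (3/4) T_eff^4 (tau + 2/3), so temperature grows monotonically with optical depth and equals the effective temperature at tau = 2/3:

```python
def gray_temperature(tau, T_eff):
    """Gray, two-stream (Eddington) radiative-equilibrium temperature
    profile: T(tau)^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
    return T_eff * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

# Temperature increases monotonically with optical depth; the "skin"
# temperature at tau = 0 is T_eff / 2**0.25.
profile = [gray_temperature(t, 255.0) for t in (0.0, 2.0 / 3.0, 5.0, 50.0)]
```

In a radiative-convective model of the kind described above, one would switch from this radiative profile to an adiabat below the depth where the radiative lapse rate would exceed the adiabatic one.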
Astrophysical Model Selection in Gravitational Wave Astronomy ; Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and the population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission able to resolve 5000 of the shortest-period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius, to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20 percent.
Perturbative unitarity of Higgs derivative interactions ; We study the perturbative unitarity bound given by dimension-six derivative interactions consisting of Higgs doublets. These operators emerge from the kinetic terms of composite Higgs models, or from integrating out heavy particles that interact with Higgs doublets. They lead to new phenomena beyond the Standard Model. One characteristic contribution of derivative interactions appears in vector boson scattering processes. The longitudinal modes of massive vector bosons can be regarded, via the equivalence theorem, as the Nambu-Goldstone bosons eaten by each vector field. Since their effects grow as the collision energy of the vector bosons increases, vector boson scattering processes become important in the high energy region around the TeV scale. On the other hand, in such a high energy region, we have to take the unitarity of amplitudes into account. We obtain the unitarity condition in terms of the parameters included in the effective Lagrangian for one-Higgs-doublet models. Applying it to several models, we find that the contributions of derivative interactions are not large enough to clearly discriminate them from those of the Standard Model. We also study the bound in two-Higgs-doublet models. Because the general effective Lagrangian is too complex for the bound to be obtained, we calculate it in explicit models. These analyses show that the perturbative unitarity bounds are highly model dependent.
Learning using Local Membership Queries ; We introduce a new model of membership query (MQ) learning, where the learning algorithm is restricted to query points that are close to random examples drawn from the underlying distribution. The learning model is intermediate between the PAC model (Valiant, 1984) and the PAC-MQ model, where the queries are allowed to be arbitrary points. Membership query algorithms are not popular among machine learning practitioners. Apart from the obvious difficulty of adaptively querying labelers, it has also been observed that querying unnatural points leads to increased noise from human labelers (Lang and Baum, 1992). This motivates our study of learning algorithms that make queries that are close to examples generated from the data distribution. We restrict our attention to functions defined on the n-dimensional Boolean hypercube and say that a membership query is local if its Hamming distance from some example in the random training data is at most O(log n). We show the following results in this model: (i) The class of sparse polynomials (with coefficients in R) over {0,1}^n is polynomial-time learnable under a large class of locally smooth distributions using O(log n)-local queries. This class also includes the class of O(log n)-depth decision trees. (ii) The class of polynomial-sized decision trees is polynomial-time learnable under product distributions using O(log n)-local queries. (iii) The class of polynomial-size DNF formulas is learnable under the uniform distribution using O(log n)-local queries in time n^{O(log log n)}. (iv) In addition we prove a number of results relating the proposed model to the traditional PAC model and the PAC-MQ model.
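The locality constraint described above is easy to state operationally; in the sketch below the concrete radius ceil(log2 n) is an illustrative choice of the O(log n) bound, not the paper's exact constant:

```python
import math

def hamming(x, y):
    """Hamming distance between two equal-length Boolean tuples."""
    return sum(a != b for a, b in zip(x, y))

def is_local_query(q, sample, n):
    """A membership query q on the n-dimensional Boolean hypercube is
    'local' if its Hamming distance to some example in the random
    training sample is at most O(log n); here we use ceil(log2 n)."""
    radius = math.ceil(math.log2(n))
    return any(hamming(q, x) <= radius for x in sample)
```

A learner in this model may only submit queries q for which is_local_query returns True, which keeps the queried points "natural" relative to the data distribution.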
Phase-amplitude descriptions of neural oscillator models ; Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance-based Morris-Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to one that allows one to track the evolution of distance from the cycle as well as phase on the cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli which are not necessarily weak. We consider a number of examples of neural oscillator models, ranging from planar through to high-dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
A framework for coupling flow and deformation of the porous solid ; In this paper, we consider the flow of an incompressible fluid in a deformable porous solid. We present a mathematical model using the framework offered by the theory of interacting continua. In its most general form, this framework provides a mechanism for capturing multiphase flow, deformation, chemical reactions and thermal processes, as well as interactions between the various physics, in a conveniently implemented fashion. To simplify the presentation of the framework, results are presented for a particular model that can be seen as an extension of Darcy's equation (which assumes that the porous solid is rigid) to take into account elastic deformation of the porous solid. The model also considers the effect of deformation on porosity. We show that using this model one can recover results identical to those of the frameworks proposed by Biot and Terzaghi. Some salient features of the framework are as follows: (a) it is a consistent mixture theory model, and adheres to the laws and principles of continuum thermodynamics; (b) the model is capable of simulating various important phenomena like consolidation and surface subsidence; and (c) the model is amenable to several extensions. We also present numerical coupling algorithms to obtain the coupled flow-deformation response. Several representative numerical examples are presented to illustrate the capability of the mathematical model and the performance of the computational framework.