Scale Dependence of the Halo Bias in General Local-Type Non-Gaussian Models I: Analytical Predictions and Consistency Relations ; We investigate the clustering of halos in cosmological models starting with general local-type non-Gaussian primordial fluctuations. We employ multiple Gaussian fields and add local-type non-Gaussian corrections at arbitrary order to cover a class of models described by the frequently discussed $f_{\rm NL}$, $g_{\rm NL}$ and $\tau_{\rm NL}$ parameterization. We derive a general formula for the halo power spectrum based on the peak-background split formalism. The resultant spectrum is characterized by only two parameters responsible for the scale-dependent bias at large scales arising from the primordial non-Gaussianities, in addition to the Gaussian bias factor. We introduce a new inequality for testing non-Gaussianities originating from multiple fields, which is directly accessible from the observed power spectrum. We show that this inequality is a generalization of the Suyama-Yamaguchi inequality between $f_{\rm NL}$ and $\tau_{\rm NL}$ to primordial non-Gaussianities at arbitrary order. We also show that the amplitude of the scale-dependent bias is useful to distinguish the simplest quadratic non-Gaussianities (i.e., the $f_{\rm NL}$ type) from higher-order ones ($g_{\rm NL}$ and higher), if one measures it from multiple species of galaxies or clusters of galaxies. We discuss the validity and limitations of our analytic results by comparison with numerical simulations in an accompanying paper.
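For orientation, the two best-known special cases of the relations discussed above can be written explicitly; the normalization conventions below ($\delta_c \simeq 1.686$, transfer function $T(k)$, growth factor $D(z)$) are the standard ones and are an assumption here, not taken from the paper:

    \tau_{\rm NL} \ge \left(\frac{6}{5} f_{\rm NL}\right)^2 ,
    \qquad
    \Delta b(k) \simeq \frac{3 f_{\rm NL}\,(b_{\rm G}-1)\,\delta_c\,\Omega_m H_0^2}{c^2 k^2\, T(k)\, D(z)} .

The first is the Suyama-Yamaguchi inequality that the paper generalizes to arbitrary order; the second is the standard quadratic ($f_{\rm NL}$-type) scale-dependent bias, whose $1/k^2$ growth at small $k$ is the large-scale signature referred to above.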
Efficient Protocols for Distributed Classification and Optimization ; In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daume III et al., 2012) proposes a general model that bounds the communication required for learning classifiers while allowing for $\epsilon$ training error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses $O(d^2 \log(1/\epsilon))$ words of communication to classify distributed data in arbitrary dimension $d$, $\epsilon$-optimally. This readily extends to classification over $k$ nodes with $O(k d^2 \log(1/\epsilon))$ words of communication. Our proposed protocol is simple to implement and is considerably more efficient than the baselines compared, as demonstrated by our empirical results. In addition, we illustrate general algorithm-design paradigms for doing efficient learning over distributed data. We show how to solve fixed-dimensional and high-dimensional linear programming efficiently in a distributed setting where constraints may be distributed across nodes. Since many learning problems can be viewed as convex optimization problems where constraints are generated by individual points, this models many typical distributed learning scenarios. Our techniques make use of a novel connection from multi-pass streaming, as well as adapting the multiplicative-weight-update framework more generally to a distributed setting. As a consequence, our methods extend to the wide range of problems solvable using these techniques.
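As a concrete illustration of the multiplicative-weight-update idea underlying the protocol (a minimal single-machine Winnow-style learner, not the two-party communication scheme itself; the learning rate, threshold and stopping rule are assumptions):

    import numpy as np

    def mwu_classifier(X, y, eta=0.5, epochs=50):
        """Multiplicative-weight-update (Winnow-style) linear classifier.

        X: (n, d) array with entries in [0, 1]; y: labels in {-1, +1}.
        Positive coordinate weights are updated multiplicatively on mistakes.
        """
        n, d = X.shape
        w = np.ones(d)
        theta = d / 2.0                      # fixed threshold (assumption)
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                pred = 1 if w @ xi > theta else -1
                if pred != yi:               # update only on mistakes
                    w *= np.exp(eta * yi * xi)
                    mistakes += 1
            if mistakes == 0:
                break
        return w, theta

    # toy usage: label is +1 iff feature 0 exceeds 0.5
    rng = np.random.default_rng(0)
    X = rng.random((200, 10))
    y = np.where(X[:, 0] > 0.5, 1, -1)
    w, theta = mwu_classifier(X, y)
    print("training accuracy:",
          np.mean(np.where(X @ w > theta, 1, -1) == y))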
Deriving an Accurate Formula of Scale-dependent Bias with Primordial Non-Gaussianity: An Application of the Integrated Perturbation Theory ; We apply the integrated perturbation theory (Matsubara 2011, PRD 83, 083518) to evaluate the scale-dependent bias in the presence of primordial non-Gaussianity. The integrated perturbation theory is a general framework of nonlinear perturbation theory in which a broad class of bias models can be incorporated into perturbative evaluations of the biased power spectrum and higher-order polyspectra. Approximations such as the high-peak limit or the peak-background split are not necessary to derive the scale-dependent bias in this framework. Applying the halo approach, previously known formulas are re-derived as limiting cases of a general formula in this work, and it is implied that modifications should be made in general situations. Effects of redshift-space distortions are straightforwardly incorporated. It is found that the slope of the scale-dependent bias on large scales is determined only by the behavior of the primordial bispectrum in the squeezed limit, and is not sensitive to bias models in general. It is the amplitude of the scale-dependent bias that is sensitive to the bias models. The effects of redshift-space distortions turn out to be quite small for the monopole component of the power spectrum, while the quadrupole component is proportional to the monopole component on large scales, and thus is also sensitive to the primordial non-Gaussianity.
Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data ; We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks. Such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms, including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small, with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games.
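To make the generative object concrete: in an LIG each player has influence weights and a threshold, and a joint action is a PSNE when no player gains by flipping. A minimal sketch of this membership test (the payoff convention is the one used in the LIG literature and is an assumption here):

    import numpy as np

    def is_psne(x, W, b):
        """Check whether joint action x in {-1,+1}^n is a pure-strategy Nash
        equilibrium of a linear influence game with weight matrix W (zero
        diagonal) and thresholds b: player i best-responds iff
        x_i * (sum_j W_ij x_j - b_i) >= 0."""
        return np.all(x * (W @ x - b) >= 0)

    def all_psne(W, b):
        """Enumerate all PSNE by brute force (exponential; fine for small n)."""
        n = len(b)
        eqs = []
        for bits in range(2 ** n):
            x = np.array([1 if (bits >> i) & 1 else -1 for i in range(n)])
            if is_psne(x, W, b):
                eqs.append(x)
        return eqs

    # toy 3-player game with mutual positive influence
    W = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
    b = np.zeros(3)
    print(all_psne(W, b))  # the consensus profiles (+1,+1,+1) and (-1,-1,-1)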
Kinetic Theory of Collisionless Self-Gravitating Gases II: Relativistic Corrections in Galactic Dynamics ; In this paper we study the kinetic theory of many-particle astrophysical systems, imposing axial symmetry and extending our previous analysis in Phys. Rev. D 83, 123007 (2011). Starting from a Newtonian model describing a collisionless self-gravitating gas, we develop a framework to include systematically the first general relativistic corrections to the matter distribution and gravitational potentials for general stationary systems. Then, we use our method to obtain particular solutions for the case of the Morgan-Morgan disks. The models obtained are fully analytical and correspond to the post-Newtonian generalizations of classical ones. We explore some properties of the models in order to estimate the importance of post-Newtonian corrections, and we find that, contrary to expectations, the main modifications appear far from the galaxy cores. As a byproduct of this investigation we derive the corrected version of the tensor virial theorem. For stationary systems we recover the same result as in the Newtonian theory. However, for time-dependent backgrounds we find that there is an extra piece that contributes to the variation of the inertia tensor.
One-loop divergences in the Galileon model ; The investigation of UV divergences is a relevant step toward a better understanding of a new theory. In this work the one-loop divergences in the free-field sector are obtained for the popular Galileon model. The calculations are performed by the generalized Schwinger-DeWitt technique and also by means of Feynman diagrams. The first method can be directly generalized to curved space, but here we deal only with the flat-space limit. We show that the UV completion of the theory includes the $\pi \Box^4 \pi$ term. According to our previous analysis in the case of quantum gravity, this means that the theory can be modified to become super-renormalizable, but then its physical spectrum includes two massive ghosts and one massive scalar with positive kinetic energy. The effective approach in this theory can be perfectly successful, exactly as in higher-derivative quantum gravity, and in this case the non-renormalization theorem for Galileons remains valid in the low-energy region.
Spontaneous Generation of a Crystalline Ground State in a Higher Derivative Theory ; The possibility of spontaneous symmetry breaking in momentum space in a generic Lifshitz scalar model (a non-relativistic scalar field theory with higher spatial derivative terms) has been studied. We show that the minimum-energy state, the ground state, has a lattice structure, where the translation invariance of the continuum theory is reduced to a discrete translation symmetry. The scale of translation symmetry breaking (or induced lattice spacing) is proportional to the inverse of the momentum of the condensate particle. The crystalline ground state is stable under excitations below a certain critical velocity. The small fluctuations above the ground state can have a phonon-like dispersion under a suitable choice of parameters. At the beginning we discuss the effects of next-to-nearest-neighbour interaction terms in a model of a linear triatomic molecule, depicted as a linear system of three particles of the same mass connected by identical springs. This model is relevant since, in the continuum limit, the next-to-nearest-neighbour interaction terms generate a higher spatial derivative wave equation, the main topic of this paper.
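The mechanism can be seen schematically in the dispersion relation of a Lifshitz-type scalar with higher spatial derivatives (the signs and coefficients below are an illustrative assumption, not the paper's specific model):

    \omega^2(k) = m^2 - c_2\, k^2 + c_4\, k^4, \qquad c_2, c_4 > 0 ,

which is minimized not at $k = 0$ but at $k_*^2 = c_2/(2 c_4)$; condensation of modes at $|k| = k_*$ then selects a lattice of spacing of order $2\pi/k_*$, breaking the continuous translation symmetry down to a discrete subgroup, exactly as described above.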
Unified Field Equations Coupling Four Forces and Principle of Interaction Dynamics ; The main objective of this article is to postulate a principle of interaction dynamics (PID) and to derive unified field equations coupling the four fundamental interactions based on first principles. PID is a least action principle subject to div$(A)$-free constraints for the variational element, with $A$ being gauge potentials. The Lagrangian action is uniquely determined by (1) the principle of general relativity, (2) the $U(1)$, $SU(2)$ and $SU(3)$ gauge invariances, (3) the Lorentz invariance, and (4) the principle of representation invariance (PRI), introduced in [11]. The unified field equations are then derived using PID. The unified field model spontaneously breaks the gauge symmetries and gives rise to a new mechanism for energy and mass generation. The unified field model introduces a natural duality between the mediators and their dual mediators, and can easily be decoupled to study each individual interaction when the other interactions are negligible. The unified field model, together with PRI and PID applied to individual interactions, provides clear explanations and solutions to a number of outstanding challenges in physics and cosmology, including, e.g., the dark energy and dark matter phenomena, quark confinement, asymptotic freedom, the short-range nature of both strong and weak interactions, the decay mechanism of subatomic particles, baryon asymmetry, and the solar neutrino problem.
A family of well behaved charge analogues of Durgapal's perfect fluid exact solution in general relativity ; This paper presents a new family of interior solutions of the Einstein-Maxwell field equations in general relativity for a static, spherically symmetric distribution of a charged perfect fluid with a particular form of charge distribution. This solution gives a wide range of the parameter $K$ for which the solution is well behaved and hence suitable for modeling a superdense star. For this solution the gravitational mass of a star is maximized, with all degrees of suitability, by assuming the surface density equal to normal nuclear density, $2.5\times10^{17}\,{\rm kg\,m^{-3}}$. With this model we obtain a Crab pulsar mass $M_{\rm Crab} = 1.3679\,M_\odot$ with radius 13.21 km, constraining the moment of inertia to $1.61\times10^{38}\,{\rm kg\,m^2}$ for the conservative Crab nebula mass estimate of $2\,M_\odot$; and $M_{\rm Crab} = 1.9645\,M_\odot$ with radius 14.38 km, constraining the moment of inertia to $3.04\times10^{38}\,{\rm kg\,m^2}$ for the newest estimate of the Crab nebula mass, $4.6\,M_\odot$. These results agree quite well with the possible values of the mass and radius of the Crab pulsar. Besides this, our model yields moments of inertia for PSR J0737-3039A and PSR J0737-3039B of $I_A = 1.4285\times10^{38}\,{\rm kg\,m^2}$ and $I_B = 1.3647\times10^{38}\,{\rm kg\,m^2}$, respectively. It has been observed that under well behaved conditions this class of solutions gives an overall maximum gravitational mass of a superdense object of $M_{\rm max} = 4.7487\,M_\odot$ with radius $R(M_{\rm max}) = 15.24$ km, surface redshift 0.9878, charge $7.91\times10^{20}$ C, and central density 4.31 times nuclear density.
Supervised Learning with Similarity Functions ; We address the problem of general supervised learning when data can only be accessed through an (indefinite) similarity function between data points. Existing work on learning with indefinite kernels has concentrated solely on binary/multi-class classification problems. We propose a model that is generic enough to handle any supervised learning task and also subsumes the model previously proposed for classification. We give a goodness criterion for similarity functions w.r.t. a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using good similarity functions. We demonstrate the effectiveness of our model on three important supervised learning problems: (a) real-valued regression, (b) ordinal regression and (c) ranking, where we show that our method guarantees bounded generalization error. Furthermore, for the case of real-valued regression, we give a natural goodness definition that, when used in conjunction with a recent result in sparse vector recovery, guarantees a sparse predictor with bounded generalization error. Finally, we report results of our learning algorithms on regression and ordinal regression tasks using non-PSD similarity functions and demonstrate the effectiveness of our algorithms, especially that of the sparse landmark selection algorithm, which achieves significantly higher accuracies than the baseline methods while offering reduced computational costs.
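A minimal sketch of the landmarking construction for regression with an indefinite similarity function (the choice of similarity, the number of landmarks, and the ridge solver are illustrative assumptions, not the paper's exact algorithm):

    import numpy as np

    def landmark_features(X, landmarks, sim):
        """Map each point x to the vector (sim(x, l_1), ..., sim(x, l_L))."""
        return np.array([[sim(x, l) for l in landmarks] for x in X])

    def train_landmark_regressor(X, y, n_landmarks=20, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        landmarks = X[rng.choice(len(X), n_landmarks, replace=False)]
        # an indefinite (non-PSD) similarity, e.g. a sigmoid of the inner product
        sim = lambda a, b: np.tanh(a @ b)
        Phi = landmark_features(X, landmarks, sim)
        # ridge regression in the landmark feature space
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_landmarks), Phi.T @ y)
        return landmarks, sim, w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
    landmarks, sim, w = train_landmark_regressor(X, y)
    pred = landmark_features(X, landmarks, sim) @ w
    print("train MSE:", np.mean((pred - y) ** 2))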
A probabilistic numerical method for optimal multiple switching problem and application to investments in electricity generation ; In this paper, we present a probabilistic numerical algorithm combining dynamic programming, Monte Carlo simulations and local basis regressions to solve non-stationary optimal multiple switching problems on an infinite horizon. We provide the rate of convergence of the method in terms of the time step used to discretize the problem, of the size of the local hypercubes involved in the regressions, and of the truncating time horizon. To make the method viable for problems in high dimension and with a long time horizon, we extend a memory reduction method to the general Euler scheme, so that, when performing the numerical resolution, the storage of the Monte Carlo simulation paths is not needed. Then, we apply this algorithm to a model of optimal investment in power plants. This model takes into account electricity demand, cointegrated fuel prices, carbon price and random outages of power plants. It computes the optimal level of investment in each generation technology, considered as a whole, w.r.t. the electricity spot price. This electricity price is itself built according to a new extended structural model. In particular, it is a function of several factors, among which the installed capacities. The evolution of the optimal generation mix is illustrated on a realistic numerical problem in dimension eight, i.e., with two different technologies and six random factors.
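A minimal sketch of the regression-based dynamic-programming step for a two-mode switching problem (Ornstein-Uhlenbeck price, a global polynomial basis instead of the paper's local hypercube bases, and a finite truncation of the horizon are all simplifying assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps, dt = 5000, 50, 0.1
    kappa, mu, sigma = 1.0, 1.0, 0.5         # OU price parameters (assumed)
    switch_cost = 0.2
    payoff = lambda p, m: m * (p - mu)       # mode 1 earns the spread, mode 0 earns 0

    # simulate OU price paths
    P = np.empty((n_paths, n_steps + 1)); P[:, 0] = mu
    for t in range(n_steps):
        P[:, t+1] = P[:, t] + kappa*(mu - P[:, t])*dt \
                    + sigma*np.sqrt(dt)*rng.normal(size=n_paths)

    basis = lambda p: np.column_stack([np.ones_like(p), p, p**2])
    V = np.zeros((2, n_paths))               # value in mode 0 (off) and 1 (on)
    for t in range(n_steps - 1, -1, -1):     # backward induction
        B = basis(P[:, t])
        cont = np.empty((2, n_paths))        # regressed continuation values
        for m in (0, 1):
            coef, *_ = np.linalg.lstsq(B, V[m], rcond=None)
            cont[m] = B @ coef
        for m in (0, 1):                     # Bellman step: stay or switch
            stay = payoff(P[:, t], m)*dt + cont[m]
            move = payoff(P[:, t], 1 - m)*dt + cont[1 - m] - switch_cost
            V[m] = np.maximum(stay, move)

    print("value of starting in mode 'off':", V[0].mean())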
Spontaneous symmetry breaking in active droplets provides a generic route to motility ; We explore a generic mechanism whereby a droplet of active matter acquires motility by the spontaneous breakdown of a discrete symmetry. The model we study offers a simple representation of a cell extract comprising, e.g., a droplet of actomyosin solution. (Such extracts are used experimentally to model the cytoskeleton.) Actomyosin is an active gel whose polarity describes the mean sense of alignment of actin fibres. In the absence of polymerization and depolymerization processes ('treadmilling'), the gel's dynamics arises solely from the contractile motion of myosin motors; this should be unchanged when polarity is inverted. Our results suggest that motility can arise in the absence of treadmilling, by spontaneous symmetry breaking (SSB) of polarity inversion symmetry. Adapting our model to wall-bound cells in two dimensions, we find that as wall friction is reduced, treadmilling-induced motility falls but SSB-mediated motility rises. The latter might therefore be crucial in three dimensions, where frictional forces are likely to be modest. At a supracellular level, the same generic mechanism can impart motility to aggregates of non-motile but active bacteria; we show that SSB in this extensile case leads generically to rotational as well as translational motion.
IRIS: A Generic Three-Dimensional Radiative Transfer Code ; We present IRIS, a new generic three-dimensional (3D) spectral radiative transfer code that generates synthetic spectra, or images. It can be used as a diagnostic tool for comparison with astrophysical observations or laboratory astrophysics experiments. We have developed a 3D short-characteristic solver that works with a 3D non-uniform Cartesian grid. We have implemented a piecewise cubic, locally monotonic interpolation technique that dramatically reduces the numerical diffusion effect. The code takes into account the velocity gradient effect, resulting in gradual Doppler shifts of photon frequencies and subsequent alterations of spectral line profiles. It can also handle periodic boundary conditions. This first version of the code assumes local thermodynamic equilibrium (LTE) and no scattering. The opacities and source functions are specified by the user. In the near future, the capabilities of IRIS will be extended to allow for non-LTE and scattering modeling. IRIS has been validated through a number of tests. We provide the results for the most relevant ones, in particular a searchlight beam test, a comparison with a 1D plane-parallel model, and a test of the velocity gradient effect. IRIS is a generic code to address a wide variety of astrophysical issues applied to different objects or structures, such as accretion shocks, jets in young stellar objects, stellar atmospheres, exoplanet atmospheres, accretion disks, rotating stellar winds, and cosmological structures. It can also be applied to model laboratory astrophysics experiments, such as radiative shocks produced with high power lasers.
The Bessel-Plancherel theorem and applications ; Let $G$ be a simple Lie group with finite center, and let $K \subset G$ be a maximal compact subgroup. We say that $G$ is a Lie group of tube type if $G/K$ is a hermitian symmetric space of tube type. For such a Lie group $G$, we can find a parabolic subgroup $P = MAN$, with given Langlands decomposition, such that $N$ is abelian and $N$ admits a generic character with compact stabilizer. We call any parabolic subgroup $P$ satisfying these properties a Siegel parabolic. Let $(\pi, V)$ be an admissible, smooth, Fréchet representation of a Lie group of tube type $G$, and let $P \subset G$ be a Siegel parabolic subgroup. If $\chi$ is a generic character of $N$, let ${\rm Wh}_\chi(V) = \{\lambda : V \longrightarrow \mathbb{C} \mid \lambda(\pi(n)v) = \chi(n)\lambda(v)\}$ be the space of Bessel models of $V$. After describing the classification of all the simple Lie groups of tube type, we give a characterization of the space of Bessel models of an induced representation. As a corollary of this characterization we obtain a local multiplicity one theorem for the space of Bessel models of an irreducible representation of $G$. As an application of these results we calculate the Bessel-Plancherel measure of a Lie group of tube type, i.e., the Plancherel measure of $L^2(N \backslash G; \chi)$, where $\chi$ is a generic character of $N$. Then we use Howe's theory of dual pairs to show that the Plancherel measure of the space $L^2(O(p-r, q-s) \backslash O(p,q))$ is the pullback, under the Theta lift, of the Bessel-Plancherel measure of $L^2(N \backslash Sp(m, \mathbb{R}); \chi)$, where $m = r + s$ and $\chi$ is a generic character that depends on $r$ and $s$.
Robust Adaptive Beamforming for General-Rank Signal Model with Positive Semi-Definite Constraint via POTDC ; The robust adaptive beamforming (RAB) problem for a general-rank signal model with an additional positive semi-definite constraint is considered. Using the principle of worst-case performance optimization, such a RAB problem leads to a difference-of-convex functions (DC) optimization problem. The existing approaches for solving the resulting non-convex DC problem are based on approximations and find only suboptimal solutions. Here we solve the non-convex DC problem rigorously and give arguments suggesting that the solution is globally optimal. In particular, we rewrite the problem as the minimization of a one-dimensional optimal value function whose corresponding optimization problem is non-convex. Then, the optimal value function is replaced with an equivalent one, for which the corresponding optimization problem is convex. The new one-dimensional optimal value function is minimized iteratively via the polynomial-time DC (POTDC) algorithm. We show that our solution satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions, and there is strong evidence that such a solution is also globally optimal. Towards this conclusion, we conjecture that the new optimal value function is a convex function. The new RAB method shows superior performance compared to the other state-of-the-art general-rank RAB methods.
Gamma-ray lines and One-Loop Continuum from s-channel Dark Matter Annihilations ; The era of indirect detection searches for dark matter has begun, with the sensitivities of gamma-ray detectors now approaching the parameter space relevant for weakly interacting massive particles. In particular, gamma-ray lines would be smoking-gun signatures of dark matter annihilation, although they are typically suppressed compared to the continuum. In this paper, we pay particular attention to the one-loop continuum generated together with the gamma-ray lines and investigate under which conditions a dark matter model can naturally lead to a line signal that is relatively enhanced. We study generic classes of models in which DM is a fermion that annihilates through an s-channel mediator, which is either a vector or a scalar, and identify the coupling and mass conditions under which large line signals occur. We focus on the 'forbidden channel' mechanism advocated a few years ago in the 'Higgs in space' scenario, for which tree-level annihilation is kinematically forbidden today. Detailed calculations of all one-loop annihilation channels are provided. We single out very simple models with a large line-over-continuum ratio and present general predictions for a large range of WIMP masses that are relevant not only for Fermi and H.E.S.S. II but also for the next generation of telescopes such as CTA and Gamma-400. Constraints from the relic abundance, direct detection and collider bounds are also discussed.
Average Rate of Downlink Heterogeneous Cellular Networks over Generalized Fading Channels: A Stochastic Geometry Approach ; In this paper, we introduce an analytical framework to compute the average rate of downlink heterogeneous cellular networks. The framework leverages the recent application of stochastic geometry to other-cell interference modeling and analysis. The heterogeneous cellular network is modeled as the superposition of many tiers of base stations (BSs) having different transmit power, density, path-loss exponent, fading parameters and distribution, and unequal biasing for flexible tier association. A long-term averaged maximum biased-received-power tier association is considered. The positions of the BSs in each tier are modeled as points of an independent Poisson point process (PPP). Under these assumptions, we introduce a new analytical methodology to evaluate the average rate, which avoids the computation of the coverage probability ($P_{\rm cov}$) and needs only the moment generating function (MGF) of the aggregate interference at the probe mobile terminal. The distinguishing characteristic of our analytical methodology is that it provides a tractable and numerically efficient framework applicable to general fading distributions, including composite fading channels with small- and mid-scale fluctuations. In addition, our method can efficiently handle correlated log-normal shadowing with little increase in computational complexity. The proposed MGF-based approach needs the computation of either a single or a two-fold numerical integral, thus reducing the complexity of $P_{\rm cov}$-based frameworks, which require, for general fading distributions, the computation of a four-fold integral.
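As a cross-check of what the framework computes, the same average rate can be estimated by brute-force Monte Carlo for a single-tier PPP with Rayleigh fading and nearest-BS association (a simplification of the biased-received-power association above; all parameter values below are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    lam   = 1e-5          # BS density per m^2
    alpha = 4.0           # path-loss exponent
    noise = 1e-12         # noise power (linear units)
    R     = 20_000.0      # simulation disc radius around the typical user
    rates = []
    for _ in range(5000):
        n = rng.poisson(lam * np.pi * R**2)
        if n == 0:
            continue
        r = R * np.sqrt(rng.random(n))        # PPP distances from the origin
        h = rng.exponential(1.0, n)           # Rayleigh power fading
        rx = h * r**(-alpha)                  # received powers (unit Tx power)
        k = np.argmin(r)                      # nearest-BS association
        sinr = rx[k] / (rx.sum() - rx[k] + noise)
        rates.append(np.log2(1.0 + sinr))
    print("average rate [bit/s/Hz]:", np.mean(rates))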
Simplifying Generalized Belief Propagation on Redundant Region Graphs ; The cluster variation method has been developed into a general theoretical framework for treating short-range correlations in many-body systems since it was first proposed by Kikuchi in 1951. On the numerical side, a message-passing approach called generalized belief propagation (GBP) was proposed by Yedidia, Freeman and Weiss about a decade ago as a way of computing the minimal value of the cluster variational free energy and the marginal distributions of clusters of variables. However, the GBP equations are often redundant, and it is quite a nontrivial task to make the GBP iteration converge to a fixed point. These drawbacks hinder the application of the GBP approach to finite-dimensional frustrated and disordered systems. In this work we report an alternative and simple derivation of the GBP equations starting from the partition function expression. Based on this derivation we propose a natural and systematic way of removing the redundancy of the GBP equations. We apply the simplified generalized belief propagation (SGBP) equations to the two-dimensional and three-dimensional ferromagnetic Ising model and Edwards-Anderson spin glass model. The numerical results confirm that the SGBP message-passing approach is able to achieve satisfactory performance on these model systems. We also suggest that a subset of the SGBP equations can be neglected in the numerical iteration process without affecting the final results.
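For contrast with the region-graph machinery, ordinary belief propagation, which GBP reduces to when every region is a single edge, can be written in a few lines for an Ising chain, where it is exact (couplings, field and temperature below are assumptions):

    import numpy as np

    N, beta, J, h = 8, 0.7, 1.0, 0.2          # chain length, inverse temperature
    pair = np.array([[np.exp(beta*J),  np.exp(-beta*J)],
                     [np.exp(-beta*J), np.exp(beta*J)]])   # psi(s_i, s_{i+1})
    node = np.array([np.exp(beta*h), np.exp(-beta*h)])     # phi(s_i), s in {+1,-1}

    # forward and backward messages along the chain
    fwd = [np.ones(2) for _ in range(N)]
    bwd = [np.ones(2) for _ in range(N)]
    for i in range(1, N):
        m = pair.T @ (node * fwd[i-1]); fwd[i] = m / m.sum()
    for i in range(N-2, -1, -1):
        m = pair @ (node * bwd[i+1]); bwd[i] = m / m.sum()

    # single-site marginals: product of local factor and incoming messages
    marg = [node * fwd[i] * bwd[i] for i in range(N)]
    marg = [b / b.sum() for b in marg]
    print("P(s_0 = +1) =", marg[0][0])        # exact on a tree/chain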
Light top partners and precision physics ; We analyze the corrections to the precision EW observables in minimal composite Higgs models by using a general effective parametrization which also includes the lightest fermionic resonances. A new, possibly large, logarithmically divergent contribution to $S$ is identified, which comes purely from the strong dynamics. It can be interpreted as a running of $S$ induced by the non-renormalizable Higgs interactions due to the non-linear sigma-model structure. As expected, the corrections to the $T$ parameter coming from fermion loops are finite and dominated by the contributions of the lightest composite states. The fit of the oblique parameters suggests a rather stringent lower bound on the sigma-model scale, $f \gtrsim 750$ GeV. The corrections to the $Z b_L \bar{b}_L$ vertex coming from the lowest-order operators in the effective Lagrangian are finite and somewhat correlated to the corrections to $T$. Large additional contributions are generated by contact interactions with four composite fermions. In this case a logarithmic divergence can be generated and the correlation with $T$ is removed. We also analyze the tree-level corrections to the top couplings, which are expected to be large due to the sizable degree of compositeness of the third-generation quarks. We find that for a moderate amount of tuning the deviation in $V_{tb}$ can be of order 5%, while the distortion of the $Z t_L \bar{t}_L$ vertex can be 10%.
Implementation of a simplified approach to radiative transfer in general relativity ; We describe in detail the implementation of a simplified approach to radiative transfer in general relativity by means of the well-known neutrino leakage scheme (NLS). In particular, we carry out an extensive investigation of the properties and limitations of the NLS for isolated relativistic stars to a level of detail that has not been discussed before in a general-relativistic context. Although the numerous tests considered here are rather idealized, they provide a well-controlled environment in which to understand the relationship between the matter dynamics and the neutrino emission, which is important in order to model the neutrino signals from more complicated scenarios, such as binary neutron-star mergers. When considering non-rotating hot neutron stars we confirm earlier results of one-dimensional simulations, but we also present novel results on the equilibrium properties and on how the cooling affects the stability of these configurations. In our idealized but controlled setup, we can then show that deviations from thermal and weak-interaction equilibrium affect the stability of these models to radial perturbations, leading models that are stable in the absence of radiative losses to collapse gravitationally to a black hole when neutrinos are instead radiated.
Universe Models with Negative Bulk Viscosity ; The concept of negative temperatures has occasionally been used in connection with quantum systems. A recent example of this sort is reported in the paper of S. Braun et al. (Science 339, 52 (2013)), where an attractively interacting ensemble of ultracold atoms is investigated experimentally and found to correspond to a negative-temperature system, since the entropy decreases with increasing energy at the high end of the energy spectrum. As the authors suggest, it would be of interest to investigate whether a suitable generalization of standard cosmological theory could be helpful in order to elucidate the observed accelerated expansion of the universe, usually explained in terms of a positive tensile stress (negative pressure). In the present note we take up this basic idea and investigate a generalization of the standard viscous cosmological theory, not by admitting negative temperatures but instead by letting the bulk viscosity take negative values. Evidently, such an approach breaks standard thermodynamics, but it may be regarded as leading to the same kind of bizarre consequences as the standard approach of admitting the equation-of-state parameter $w$ to be less than $-1$. In universe models dominated by negative viscosity we find that the fluid's entropy decreases with time, as one would expect. Moreover, we find that the fluid's transition from the quintessence region into the phantom region (thus passing the phantom divide $w = -1$) can actually be reversed. Also in generalizations of the $\Lambda$CDM universe models with a fluid having negative bulk viscosity, we find that the viscosity decreases the expansion of the universe.
On the Brittleness of Bayesian Inference ; With the advent of high-performance computing, Bayesian methods are increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question, for which there currently exist positive and negative results. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large) amount of data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, and raises the question of a missing stability condition for using Bayesian inference in a continuous world under finite information.
Exact Space-Time Gauge Symmetry of Gravity, Its Couplings and Approximate Internal Symmetries in a Total-Unified Model ; The gravitational field is the manifestation of space-time translational ($T_4$) gauge symmetry, which enables gravitational interaction to be unified with the strong and the electroweak interactions. Such a total-unified model is based on a generalized Yang-Mills framework in flat space-time. Following the idea of Glashow-Salam-Ward-Weinberg, we gauge the groups $T_4 \times SU(3)_{\rm color} \times SU(2) \times U(1) \times U(1)_b$ on equal footing, so that we have the total-unified gauge covariant derivative ${\bf d}_\mu = p_\mu + ig\,\phi_\mu{}^\nu p_\nu + ig_s G_\mu^a \lambda^a/2 + if W_\mu^k t^k + if' U_\mu t_o + ig_b B_\mu$. The generators of the external $T_4$ group have the representation $p_\mu = i\partial_\mu$, which differs from the generators of all internal groups, which have constant matrix representations. Consequently, the total-unified model leads to the following new results: (a) all internal $SU(3)_{\rm color}$, $SU(2)$, $U(1)$ and baryonic $U(1)_b$ gauge symmetries have extremely small violations due to the gravitational interaction; (b) the $T_4$ gauge symmetry remains exact and dictates the universal coupling of gravitons; (c) such a gravitational violation of internal gauge symmetries leads to modified eikonal and Hamilton-Jacobi type equations, which are obtained in the geometric-optics limit and involve effective Riemann metric tensors; (d) the rules for Feynman diagrams involving new couplings of photon-graviton, gluon-graviton and quark-graviton are obtained.
Anisotropic power-law k-inflation ; It is known that power-law k-inflation can be realized for the Lagrangian $P = X g(Y)$, where $X = -(\partial\phi)^2/2$ is the kinetic energy of a scalar field $\phi$ and $g$ is an arbitrary function of $Y = X e^{\lambda\phi/M_{\rm pl}}$ ($\lambda$ is a constant and $M_{\rm pl}$ is the reduced Planck mass). In the presence of a vector field coupled to the inflaton with an exponential coupling $f(\phi) \propto e^{\mu\phi/M_{\rm pl}}$, we show that the models with the Lagrangian $P = X g(Y)$ generally give rise to anisotropic inflationary solutions with $\Sigma/H = $ constant, where $\Sigma$ is an anisotropic shear and $H$ is an isotropic expansion rate. Provided these anisotropic solutions exist in the regime where the ratio $\Sigma/H$ is much smaller than 1, they are stable attractors irrespective of the form of $g(Y)$. We apply our results to concrete models of k-inflation such as the generalized dilatonic ghost condensate and the DBI model, and we numerically show that solutions with different initial conditions converge to the anisotropic power-law inflationary attractors. Even in the de Sitter limit ($\lambda \to 0$) such solutions can exist, but in this case the null energy condition is generally violated. The latter property is consistent with Wald's cosmic no-hair conjecture, which states that anisotropic hair does not survive on a de Sitter background in the presence of matter respecting the dominant and strong energy conditions.
A Complete Method of Comparative Statics for Optimization Problems (Unabbreviated Version) ; A new method of deriving comparative statics information using generalized compensated derivatives is presented, which yields constraint-free semidefiniteness results for any differentiable, constrained optimization problem. More generally, it applies to any differentiable system governed by an extremum principle, be it a physical system subject to the minimum action principle, the equilibrium point of a game-theoretic problem expressible as an extremum, or a problem of decision theory with incomplete information treated by the maximum entropy principle. The method of generalized compensated derivatives is natural and powerful, and its underlying structure has a simple and intuitively appealing geometric interpretation. Several extensions of the main theorem, such as envelope relations, symmetry properties and invariance conditions, transformations of decision variables and parameters, degrees of arbitrariness in the choice of comparative statics results, and rank relations and inequalities, are developed. The relationship of the new method to existing formulations is established, thereby providing a unification of the main differential comparative statics methods currently in use. A second theorem is also established, which yields exhaustive, constraint-free comparative statics results for a general, constrained optimization problem. This theorem subsumes all other comparative statics formulations. The method is illustrated with a variety of models, some well known, such as profit and utility maximization, where several novel extensions and results are derived, and some new, such as the principal-agent problem, the efficient portfolio problem, a model of a consumer with market power, and a cost-constrained profit maximization model.
Radiative Generation of the Lepton Mass ; We propose a new mechanism in which both Dirac masses for the charged leptons and Majorana masses for neutrinos are generated at the quantum level. The charged-lepton masses are given by the vacuum expectation value (VEV) of the Higgs doublet field and that of a triplet field. On the other hand, neutrino masses are generated by two VEVs of triplet Higgs fields. As a result, the hierarchy between the masses of charged leptons and neutrinos can be explained by the triplet VEVs, which have to be much smaller than the doublet VEV due to the constraint from the electroweak $\rho$ parameter. We construct a concrete model realizing this mechanism with discrete $\mathbb{Z}_2$ and $\mathbb{Z}_4$ symmetries, in which masses for neutrinos and those for the muon and electron are generated at the one-loop level. As a bonus in our model, the deviation of the measured muon $g-2$ from the standard model prediction can be explained by contributions from extra particle loops. Besides, the lightest $\mathbb{Z}_2$-odd neutral particle can be a dark matter candidate. The collider phenomenology is also discussed, especially focusing on the doubly-charged scalar bosons that must be introduced for our mechanism to work.
Cosmological perturbations and structure formation in nonlocal infrared modifications of general relativity ; We study the cosmological consequences of a recently proposed nonlocal modification of general relativity, obtained by adding a term $m^2 R\,\Box^{-2} R$ to the Einstein-Hilbert action. The model has the same number of parameters as $\Lambda$CDM, with $m$ replacing $\Omega_\Lambda$, and is very predictive. At the background level, after fixing $m$ so as to reproduce the observed value of $\Omega_M$, we get a pure prediction for the equation of state of dark energy as a function of redshift, $w_{\rm DE}(z)$, with $w_{\rm DE}(0)$ in the range $[-1.165, -1.135]$ as $\Omega_M$ varies over the broad range $\Omega_M \in [0.20, 0.36]$. We find that the cosmological perturbations are well-behaved, and the model fully fixes the dark energy perturbations as a function of redshift $z$ and wavenumber $k$. The nonlocal model provides a good fit to supernova data and predicts deviations from general relativity in structure formation and in weak lensing at the level of 3-4%, therefore consistent with existing data but readily detectable by future surveys. For the logarithmic growth factor we obtain $\gamma \simeq 0.53$, to be compared with $\gamma \simeq 0.55$ in $\Lambda$CDM. For the Newtonian potential on sub-horizon scales our results are well fitted by $\Psi(a;k) = [1 + \mu\, a^s]\,\Psi_{\rm GR}(a;k)$ with a scale-independent $\mu \simeq 0.09$ and $s \simeq 2$, while the anisotropic stress is negligibly small.
Performance of Multi-antenna Linear MMSE Receivers in Doubly Stochastic Networks ; A technique is presented to characterize the signal-to-interference-plus-noise ratio (SINR) of a representative link with a multi-antenna linear minimum-mean-square-error (MMSE) receiver in a wireless network with transmitting nodes distributed according to a doubly stochastic process, which is a generalization of the Poisson point process. The cumulative distribution function of the SINR of the representative link is derived assuming independent Rayleigh fading between antennas. Several representative spatial node distributions are considered, including networks with both deterministic and random clusters, strip networks (used to model roadways, e.g.), hard-core networks, and networks with generalized path-loss models. In addition, it is shown that if the number of antennas at the representative receiver is increased linearly with the nominal node density, the signal-to-interference ratio converges in distribution to a random variable that is non-zero in general, and a positive constant in certain cases. This result indicates that, to the extent that the system assumptions hold, it is possible to scale such networks by increasing the number of receiver antennas linearly with the node density. The results presented here are useful in characterizing the performance of multi-antenna wireless networks in more general network models than what is currently available.
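The per-link quantity being characterized has a standard closed form: with desired channel vector $h_0$, interferer channels $h_k$ with powers $p_k$, and noise variance $\sigma^2$, the linear MMSE SINR is $p_0\, h_0^\dagger R^{-1} h_0$ with $R = \sum_k p_k h_k h_k^\dagger + \sigma^2 I$. A minimal Monte Carlo sketch under i.i.d. Rayleigh fading (fixed interferer powers rather than the paper's doubly stochastic node placement; all values are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_ant, n_interf, trials = 4, 6, 10_000
    sigma2 = 0.1
    sinrs = []
    for _ in range(trials):
        # i.i.d. Rayleigh fading: circularly symmetric complex Gaussian channels
        h0 = (rng.normal(size=n_ant) + 1j*rng.normal(size=n_ant)) / np.sqrt(2)
        Hk = (rng.normal(size=(n_ant, n_interf))
              + 1j*rng.normal(size=(n_ant, n_interf))) / np.sqrt(2)
        p = 0.5 * np.ones(n_interf)            # interferer powers (assumed)
        R = (Hk * p) @ Hk.conj().T + sigma2 * np.eye(n_ant)
        sinrs.append(np.real(h0.conj() @ np.linalg.solve(R, h0)))  # p0 = 1
    print("mean MMSE SINR:", np.mean(sinrs))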
Five-dimensional generalized f(R) gravity with curvature-matter coupling ; Generalized $f(R)$ gravity with curvature-matter coupling in five-dimensional (5D) spacetime can be established by assuming a hypersurface-orthogonal space-like Killing vector field of the 5D spacetime, and it can be reduced to the 4D formalism of the FRW universe. This theory is quite general and yields the corresponding results for Einstein gravity and for $f(R)$ gravity with both no coupling and non-minimal coupling in 5D spacetime as special cases; that is, we obtain some new results besides the previous ones given in Ref. [60]. Furthermore, in order to get some insight into the effects of this theory on the 4D spacetime, by considering a specific type of model with $f_1(R) = f_2(R) = \alpha R^m$ and $B(L_m) = L_m = \rho$, we not only discuss the constraints on the model parameters $m$, $n$, but also illustrate the evolutionary trajectories of the scale factor $a(t)$, the deceleration parameter $q(t)$ and the scalar fields $\epsilon(t)$, $\phi(t)$ in the reduced 4D spacetime. The results show that this type of $f(R)$ gravity model could explain the current accelerated expansion of our universe without introducing dark energy.
Multiple Populations in Globular Clusters and the Origin of the Oosterhoff Period Groups ; The presence of multiple populations is now well-established in most globular clusters in the Milky Way. In light of this progress, here we suggest a new model explaining the origin of the Sandage period-shift and the difference in mean period of type ab RR Lyrae variables between the two Oosterhoff groups. In our models, the instability strip in the metal-poor group II clusters, such as M15, is populated by second-generation stars (G2) with enhanced helium and CNO abundances, while the RR Lyraes in the relatively metal-rich group I clusters like M3 are mostly produced by first-generation stars (G1) without these enhancements. This population shift within the instability strip with metallicity can create the observed period-shift between the two groups, since both helium and CNO abundances play a role in increasing the period of RR Lyrae variables. The presence of more metal-rich clusters having Oosterhoff-intermediate characteristics, such as NGC 1851, as well as of most metal-rich clusters having RR Lyraes with the longest periods (group III), can also be reproduced, as more helium-rich third and later generations of stars (G3) penetrate into the instability strip with further increase in metallicity. Therefore, for the most general cases, our models predict that the RR Lyraes are produced mostly by G1, G2, and G3, respectively, for the Oosterhoff groups I, II, and III.
Radiative Generation of Lepton Masses with the U(1)' Gauge Symmetry ; We revisit our previous model proposed in Ref. [Okada:2013iba], in which lepton masses except the tau mass are generated at the one-loop level by TeV-scale physics. Although in the previous work rather large Yukawa coupling constants (greater than about 3) are required to reproduce the muon mass, here we do not need to introduce such large couplings, only $\mathcal{O}(1)$ ones. In our model, masses for neutrinos (charged leptons) are generated by a dimension-five effective operator with two isospin triplet (singlet and doublet) scalar fields. Thus, the mass hierarchy between neutrinos and charged leptons can be naturally described by the difference in the number of vacuum expectation values (VEVs) of the triplet fields, which must be much smaller than the VEV of the doublet field due to the constraint from the electroweak $\rho$ parameter. Furthermore, the discrepancy of the measured muon anomalous magnetic moment ($g-2$) from the prediction in the standard model is explained by one-loop contributions from vector-like extra charged leptons, which are necessary for the radiative generation of the lepton masses. We study the decay properties of the extra leptons, taking into account the masses of the muon and neutrinos, the muon $g-2$, and dark matter physics. We find that the extra leptons can mainly decay into a single muon plus dark matter, with or without $Z$ bosons, in the favored parameter regions.
MGF Approach to the Analysis of Generalized Two-Ray Fading Models ; We analyze a class of generalized two-ray (GTR) fading channels that consist of two line-of-sight (LOS) components with random phase plus a diffuse component. We derive a closed-form expression for the moment generating function (MGF) of the signal-to-noise ratio (SNR) for this model, which greatly simplifies its analysis. This expression arises from the observation that the GTR fading model can be expressed in terms of a conditional underlying Rician distribution. We illustrate the approach by deriving simple expressions for statistics and performance metrics of interest, such as the amount of fading, the level crossing rate, the symbol error rate, and the ergodic capacity in GTR fading channels. We also show that the effect of considering a more general distribution for the phase difference between the LOS components has an impact on the average SNR.
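The model's structure, two LOS phasors with random phases plus a diffuse Gaussian component (Rician once the phase difference is conditioned on), is easy to sample, so the MGF of the SNR can be checked empirically. A minimal sketch with uniform independent phases; amplitudes and diffuse power are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500_000
    V1, V2, sigma2 = 1.0, 0.8, 0.2        # LOS amplitudes, diffuse power (assumed)
    phi1 = 2*np.pi*rng.random(N)
    phi2 = 2*np.pi*rng.random(N)
    diffuse = np.sqrt(sigma2/2) * (rng.normal(size=N) + 1j*rng.normal(size=N))
    r = V1*np.exp(1j*phi1) + V2*np.exp(1j*phi2) + diffuse   # GTR fading samples
    gamma = np.abs(r)**2                  # instantaneous SNR (unit-power noise)
    s = 1.0
    print("empirical MGF E[exp(-s*gamma)] at s=1:", np.mean(np.exp(-s*gamma)))
    print("amount of fading var/mean^2:", gamma.var() / gamma.mean()**2)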
Causality Networks ; While correlation measures are used to discern statistical relationships between observed variables in almost all branches of data-driven scientific inquiry, what we are really interested in is the existence of causal dependence. Designing an efficient causality test that may be carried out in the absence of restrictive presuppositions on the underlying dynamical structure of the data at hand is non-trivial. Nevertheless, the ability to computationally infer statistical prima facie evidence of causal dependence may yield a far more discriminative tool for data analysis compared to the calculation of simple correlations. In the present work, we present a new non-parametric test of Granger causality for quantized or symbolic data streams generated by ergodic stationary sources. In contrast to state-of-the-art binary tests, our approach makes precise and computes the degree of causal dependence between data streams, without making any restrictive assumptions, linearity or otherwise. Additionally, without any a priori imposition of specific dynamical structure, we infer explicit generative models of causal cross-dependence, which may then be used for prediction. These explicit models are represented as generalized probabilistic automata, referred to as crossed automata, and are shown to be sufficient to capture a fairly general class of causal dependence. The proposed algorithms are computationally efficient in the PAC sense; i.e., we find good models of cross-dependence with high probability, with polynomial run-times and sample complexities. The theoretical results are applied to weekly search-frequency data from the Google Trends API for a chosen set of socially charged keywords. The causality network inferred from this dataset reveals, quite expectedly, the causal importance of certain keywords. It is also illustrated that correlation analysis fails to gather such insight.
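A minimal sketch of a non-parametric Granger-style test for symbolic streams: compare the conditional entropy of X's next symbol given its own past, with and without Y's past; a strictly positive reduction is prima facie evidence of causal dependence. (This is the information-theoretic flavor of the idea, not the authors' crossed-automata construction; order-1 histories are an assumption.)

    import numpy as np
    from collections import Counter

    def cond_entropy(targets, contexts):
        """H(target | context) from empirical counts over paired symbols."""
        joint = Counter(zip(contexts, targets))
        ctx = Counter(contexts)
        n = len(targets)
        return -sum(c/n * np.log2(c / ctx[cx]) for (cx, _), c in joint.items())

    def granger_score(x, y):
        """Entropy reduction in predicting x[t] when y[t-1] is added to x[t-1]."""
        tgt = x[1:]
        h_self = cond_entropy(tgt, list(x[:-1]))
        h_both = cond_entropy(tgt, list(zip(x[:-1], y[:-1])))
        return h_self - h_both   # >= 0; larger means y helps predict x

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 10_000)
    flip = (rng.random(10_000) < 0.1).astype(y.dtype)
    x = np.roll(y, 1) ^ flip            # x copies y with a one-step lag, 10% noise
    print("score y->x:", granger_score(x, y))   # clearly positive
    print("score x->y:", granger_score(y, x))   # near zero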
Conic Multi-Task Classification ; Traditionally, multi-task learning (MTL) models optimize the average of task-related objective functions, which is an intuitive approach and which we will refer to as Average MTL. However, a more general framework, referred to as Conic MTL, can be formulated by considering conic combinations of the objective functions instead; in this framework, Average MTL arises as a special case when all combination coefficients equal 1. Although the advantage of Conic MTL over Average MTL has been shown experimentally in previous works, no theoretical justification has been provided to date. In this paper, we derive a generalization bound for the Conic MTL method and demonstrate that the tightest bound is not necessarily achieved when all combination coefficients equal 1; hence, Average MTL may not always be the optimal choice, and it is important to consider Conic MTL. As a byproduct of the generalization bound, it also theoretically explains the good experimental results of previous relevant works. Finally, we propose a new Conic MTL model whose conic combination coefficients minimize the generalization bound, instead of choosing them heuristically as has been done in previous methods. The rationale and advantage of our model are demonstrated and verified via a series of experiments comparing with several other methods.
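A toy sketch of the distinction: task losses enter with conic (non-negative) coefficients, and Average MTL is the special case of all coefficients equal to 1. (In the paper the coefficients multiply kernel-based task objectives and are chosen to minimize the bound; here they are hand-set, and the shared-weight linear model is an illustrative assumption.)

    import numpy as np

    def conic_mtl_shared(Xs, ys, c, lam=1e-2):
        """Minimize sum_t c_t ||X_t w - y_t||^2 + lam ||w||^2 for a shared w."""
        d = Xs[0].shape[1]
        A = lam * np.eye(d)
        b = np.zeros(d)
        for X, y, ct in zip(Xs, ys, c):
            A += ct * X.T @ X
            b += ct * X.T @ y
        return np.linalg.solve(A, b)

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    Xs = [rng.normal(size=(40, 5)) for _ in range(3)]
    noise = [0.1, 0.1, 2.0]                       # task 3 is much noisier
    ys = [X @ w_true + s * rng.normal(size=40) for X, s in zip(Xs, noise)]

    w_avg   = conic_mtl_shared(Xs, ys, c=[1, 1, 1])       # Average MTL
    w_conic = conic_mtl_shared(Xs, ys, c=[1, 1, 0.05])    # down-weight noisy task
    print("error (average):", np.linalg.norm(w_avg - w_true))
    print("error (conic):  ", np.linalg.norm(w_conic - w_true))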
Aspects of Nonlocality in Quantum Field Theory, Quantum Gravity and Cosmology ; This paper contains a collection of essays on nonlocal phenomena in quantum field theory, gravity and cosmology. Mechanisms of nonlocal contributions to the quantum effective action are discussed within the covariant perturbation expansion in field strengths and spacetime curvatures, and the nonperturbative method based on the late-time asymptotics of the heat kernel. The Euclidean version of the Schwinger-Keldysh technique for quantum expectation values is presented as a special rule for obtaining the nonlocal effective equations of motion for the mean quantum field from the Euclidean effective action. This rule is applied to a new model of ghost-free nonlocal cosmology which can generate the de Sitter stage of cosmological evolution at an arbitrary value of $\Lambda$: a model of dark energy whose scale is played by a dynamical variable that can be fixed by a kind of scaling symmetry breaking mechanism. This model is shown to interpolate between the superhorizon phase of gravity theory mediated by a scalar mode and the short-distance general relativistic limit, in a special frame which is related by a nonlocal conformal transformation to the original metric. The role of compactness and regularity of spacetime in the Euclidean version of the Schwinger-Keldysh technique is discussed.
Cosmological Perturbations: Vorticity, Isocurvature and Magnetic Fields ; In this paper I review some recent, interlinked work undertaken using cosmological perturbation theory, a powerful technique for modelling inhomogeneities in the Universe. The common theme which underpins these pieces of work is the presence of non-adiabatic pressure, or entropy, perturbations. After a brief introduction covering the standard techniques of describing inhomogeneities in both Newtonian and relativistic cosmology, I discuss the generation of vorticity. As in classical fluid mechanics, vorticity is not present in linearized perturbation theory unless included as an initial condition. Allowing for entropy perturbations, and working to second order in perturbation theory, I show that vorticity is generated, even in the absence of vector perturbations, by purely scalar perturbations, the source term being quadratic in the gradients of first-order energy density and isocurvature (i.e., non-adiabatic pressure) perturbations. This generalizes Crocco's theorem to a cosmological setting. I then introduce isocurvature perturbations in different models, focusing on the entropy perturbation in standard concordance cosmology and in inflationary models involving two scalar fields. As the final topic, I investigate magnetic fields, which are a potential observational consequence of vorticity in the early universe. I briefly review some recent work on including magnetic fields in perturbation theory in a consistent way. I show, using solely analytical techniques, that magnetic fields can be generated by higher-order perturbations, albeit too small to provide the entire primordial seed field, in agreement with some numerical studies. I close with a summary and some potential extensions of this work.
On the Magnetic Field of Pulsars with Realistic Neutron Star Configurations ; We have recently developed a neutron star model fulfilling global rather than local charge neutrality, in both the static and the uniformly rotating cases. The model is described by the coupled Einstein-Maxwell-Thomas-Fermi (EMTF) equations, in which all fundamental interactions are accounted for in the framework of general relativity and relativistic mean field theory. Uniform rotation is introduced following Hartle's formalism. We show that the use of realistic parameters of rotating neutron stars, obtained from numerical integration of the self-consistent axisymmetric general relativistic equations of equilibrium, leads to values of the magnetic field and radiation efficiency of pulsars very different from estimates based on fiducial parameters that assume a neutron star mass $M = 1.4\,M_\odot$, radius $R = 10$ km, and moment of inertia $I = 10^{45}\,{\rm g\,cm^2}$. In addition, we compare and contrast the magnetic field inferred from the traditional Newtonian rotating magnetic dipole model with the one obtained from its general relativistic analog, which takes into due account the effect of the finite size of the source. We apply these considerations to the specific class of high-magnetic-field pulsars and show that, indeed, all these sources can be described as canonical pulsars driven by the rotational energy of the neutron star, with magnetic fields lower than the quantum critical field for any value of the neutron star mass.
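For reference, the fiducial estimate being contrasted with is the Newtonian rotating-magnetic-dipole formula: equating the dipole radiation loss to the observed spin-down, for period $P$ (in seconds) and period derivative $\dot P$, gives

    B \simeq \left(\frac{3 c^3 I}{8\pi^2 R^6}\, P \dot{P}\right)^{1/2}
      \approx 3.2 \times 10^{19} \left(P \dot{P}\right)^{1/2}\ {\rm G},

where the numerical prefactor assumes exactly the fiducial values quoted above ($I = 10^{45}\,{\rm g\,cm^2}$, $R = 10$ km); replacing these with the mass-dependent structure parameters from the EMTF equilibrium configurations is what shifts the inferred fields.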
Belief Revision, Minimal Change and Relaxation: A General Framework based on Satisfaction Systems, and Applications to Description Logics ; Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied, but mainly for specific logics: propositional, and recently Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in an abstract model theory known under the name of satisfaction systems. In this framework, we generalize to any satisfaction system the characterization of the well-known AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. Moreover, we study how to define revision, satisfying the AGM postulates, from relaxation notions that were first introduced in description logics to define dissimilarity measures between concepts, and whose effect is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. In particular, for description logics we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELext, discuss their properties and provide some illustrative examples.
A novel scheme for rapid parallel parameter estimation of gravitational waves from compact binary coalescences ; We introduce a highly parallelizable architecture for estimating parameters of compact binary coalescence using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses), with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data is then pre-filtered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic-space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and pre-filtering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of less than about 5%, and even smaller in many cases. With a bounded runtime independent of the waveform model's starting frequency, a nearly unchanged strategy could estimate NS-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and any existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
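The key computational trick, precomputing data-waveform inner products once and then integrating the likelihood over extrinsic parameters cheaply, can be sketched for the simplest extrinsic parameter, luminosity distance. With Gaussian noise, $\ln L(D) = (D_0/D)\langle d|h_0\rangle - \frac{1}{2}(D_0/D)^2 \langle h_0|h_0\rangle$ for a template $h_0$ at reference distance $D_0$, so a Monte Carlo over the distance prior needs no new waveform generations. (The inner-product values and prior range below are assumptions.)

    import numpy as np

    rng = np.random.default_rng(0)
    dh0, hh0, D0 = 30.0, 36.0, 100.0     # <d|h0>, <h0|h0> at reference D0 (Mpc)
    Dmax, n_samp = 1000.0, 200_000

    # uniform-in-volume prior p(D) dD ~ D^2 dD on (0, Dmax]
    D = Dmax * rng.random(n_samp) ** (1.0 / 3.0)
    lnL = (D0 / D) * dh0 - 0.5 * (D0 / D) ** 2 * hh0
    # shift by the max for numerical stability before exponentiating
    w = np.exp(lnL - lnL.max())
    evidence = np.mean(w) * np.exp(lnL.max())
    print("distance-marginalized likelihood:", evidence)
    print("MC relative error ~", np.std(w) / np.sqrt(n_samp) / np.mean(w))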
Fractality a la carte: a general particle-cluster aggregation model ; Aggregation phenomena are ubiquitous in nature, encompassing out-of-equilibrium processes of fractal pattern formation, important in many areas of science and technology. Despite their simplicity, foundational models such as diffusion-limited aggregation (DLA) or ballistic aggregation (BA) have contributed to reveal the most basic mechanisms that give origin to fractal structures. Hitherto, it has been commonly accepted that, in the absence of long-range particle-cluster interactions, the trajectories of aggregating particles, carrying the entropic information of the growing medium, are the main elements of the aggregation dynamics that determine the fractality and morphology of the aggregates. However, when interactions are not negligible, fractality is enhanced or emerges from the screening effects generated by the aggregated particles, a fact that has led to the belief that the main contribution to fractality and morphology is of an energetic character only, the entropic one being of no special significance, considered just as an intrinsic stochastic element. Here we show that, even when long-range attractive interactions are considered, not only screening effects but also, in a very significant manner, particle trajectories themselves are the two fundamental ingredients that give rise to fractality in aggregates. We find that, while the local morphology of the aggregates is determined by the interactions, their global aspect depends exclusively on the particle trajectories. Thus, by considering an effective aggregation range, we obtain a wide and versatile generalization of the DLA and BA models. Furthermore, for the first time, we show how to generate a vast richness of natural-looking branching clusters with any prescribed fractal dimension, very precisely controlled.
Constraining f(R) gravity by the Large Scale Structure ; Over the past decades, General Relativity and the concordance LambdaCDM model have been successfully tested using several different astrophysical and cosmological probes based on large datasets (precision cosmology). Despite their successes, some shortcomings emerge, owing to the fact that General Relativity should be revised at infrared and ultraviolet limits, and to the fact that the fundamental nature of Dark Matter and Dark Energy is still a puzzle to be solved. In this perspective, f(R) gravity has been extensively investigated, being the most straightforward way to modify General Relativity and to overcome some of the above shortcomings. In this paper, we review various aspects of f(R) gravity at extragalactic and cosmological levels. In particular, we consider clusters of galaxies, cosmological perturbations, and N-body simulations, focusing on those models that satisfy both cosmological and local gravity constraints. The perspective is that some classes of f(R) models can be consistently constrained by the Large Scale Structure.
Consistency relations for sharp features in the primordial spectra ; We study the generation of sharp features in the primordial spectra within the framework of the effective field theory of inflation, wherein curvature perturbations are the consequence of the dynamics of a single scalar degree of freedom. We identify two sources in the generation of features: rapid variations of the sound speed cs at which curvature fluctuations propagate, and rapid variations of the expansion rate H during inflation. With this in mind, we propose a nontrivial relation linking these two quantities that allows us to study the generation of sharp features in realistic scenarios where features are the result of the simultaneous occurrence of these two sources. This relation depends on a single parameter whose value is determined by the particular model and its numerical input responsible for the rapidly varying background. As a consequence, we find a one-parameter consistency relation between the shape and size of features in the bispectrum and features in the power spectrum. To substantiate this result, we discuss several examples of models for which this one-parameter relation between cs and H holds, including models in which features in the spectra are both sudden and resonant.
Neuronal coupling by endogenous electric fields: Cable theory and applications to coincidence detector neurons in the auditory brainstem ; The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these ephaptic interactions are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of nearby cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brainstem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt-scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection) to encode sound source location. Using a biophysically-based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function.
General and exact approach to percolation on random graphs ; We present a comprehensive and versatile theoretical framework to study site and bond percolation on clustered and correlated random graphs. Our contribution can be summarized in three main points. (i) We introduce a set of iterative equations that solve for the exact distribution of the size and composition of components in finite-size quenched or random multitype graphs. (ii) We define a very general random graph ensemble that encompasses most of the models published to this day, and that also permits modeling of structural properties not yet included in a theoretical framework. Site and bond percolation on this ensemble is solved exactly in the infinite-size limit using probability generating functions (i.e., the percolation threshold, the size and the composition of the giant (extensive) and small components). Several examples and applications are also provided. (iii) Our approach can be adapted to model interdependent graphs (whose most striking feature is the emergence of an extensive component via a discontinuous phase transition) in an equally general fashion. We show how a graph can successively undergo a continuous then a discontinuous phase transition, and preliminary results suggest that clustering increases the amplitude of the discontinuity at the transition.
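As a concrete illustration of the generating-function machinery this framework builds on, the following minimal sketch solves bond percolation for the simplest special case of such an ensemble: a single-type configuration-model graph with Poisson degrees. It is not an implementation of the paper's multitype or clustered equations; it only shows the fixed-point structure they generalize.

```python
import numpy as np

def giant_component_size(c, phi, tol=1e-12, max_iter=10_000):
    """Bond percolation on an Erdos-Renyi graph with mean degree c.

    For a Poisson degree distribution, G0(x) = G1(x) = exp(c*(x-1)).
    u = P(an edge does not lead to the giant component) solves the
    fixed-point equation u = 1 - phi + phi*G1(u); the giant component
    then occupies a fraction S = 1 - G0(u) of the nodes.
    """
    G = lambda x: np.exp(c * (x - 1.0))
    u = 0.5
    for _ in range(max_iter):
        u_new = 1.0 - phi + phi * G(u)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return 1.0 - G(u)

# The percolation threshold for this ensemble is phi_c = 1/c.
for phi in (0.2, 0.25, 0.3, 0.5):
    print(f"phi = {phi:.2f}: S = {giant_component_size(4.0, phi):.4f}")
```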
Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics ; The threshold voltage for action potential generation is a key regulator of neuronal signal transduction, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified into parameter thresholds and state thresholds. Voltage thresholds, which belong to the state thresholds, are determined by a general separatrix in state space. We demonstrate that such a separatrix generally exists in the state space of neuron models. The general form of the separatrix is taken as a function of both states and stimuli, and the previously assumed equation for the evolution of the threshold in time is naturally deduced from the separatrix. In terms of neuron dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism for threshold voltages and post-stimulus threshold phenomena. These proposals are systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability.
Metric-Independent Spacetime Volume-Forms and Dark Energy/Dark Matter Unification ; The method of non-Riemannian, metric-independent spacetime volume-forms (alternative generally-covariant integration measure densities) is applied to construct a modified model of gravity coupled to a single scalar field providing an explicit unification of dark energy (as a dynamically generated cosmological constant) and dust-fluid dark matter (flowing along geodesics) as an exact sum of two separate terms in the scalar field energy-momentum tensor. The fundamental reason for the dark species unification is the presence of a non-Riemannian volume-form in the scalar field action, which both triggers the dynamical generation of the cosmological constant and gives rise to a hidden nonlinear Noether symmetry underlying the dust dark matter fluid nature. Upon adding an appropriate perturbation breaking the hidden dust Noether symmetry, we preserve the geodesic flow property of the dark matter while suggesting a way to get growing dark energy in the present universe's epoch free of evolution pathologies. Also, an intrinsic relation is established between the above modified gravity plus single scalar field model and a special quadratic purely kinetic k-essence model, in the form of a weak-versus-strong-coupling duality.
Sharp Computational-Statistical Phase Transitions via Oracle Computational Model ; We study the fundamental tradeoffs between computational tractability and statistical accuracy for a general family of hypothesis testing problems with combinatorial structures. Based upon an oracle model of computation, which captures the interactions between algorithms and data, we establish a general lower bound that explicitly connects the minimum testing risk under computational budget constraints with the intrinsic probabilistic and combinatorial structures of statistical problems. This lower bound mirrors the classical statistical lower bound by Le Cam (1986) and allows us to quantify the optimal statistical performance achievable given limited computational budgets in a systematic fashion. Under this unified framework, we sharply characterize the statistical-computational phase transition for two testing problems, namely, normal mean detection and sparse principal component detection. For normal mean detection, we consider two combinatorial structures, namely, sparse set and perfect matching. For these problems we identify significant gaps between the optimal statistical accuracy that is achievable under computational tractability constraints and the classical statistical lower bounds. Compared with existing works on computational lower bounds for statistical problems, which consider general polynomial-time algorithms on Turing machines and rely on computational hardness hypotheses for problems like planted clique detection, we focus on the oracle computational model, which covers a broad range of popular algorithms and does not rely on unproven hypotheses. Moreover, our result provides an intuitive and concrete interpretation of the intrinsic computational intractability of high-dimensional statistical problems. One byproduct of our result is a lower bound for a strict generalization of the matrix permanent problem, which is of independent interest.
Outlier Edge Detection Using Random Graph Generation Models and Applications ; Outliers are samples that are generated by mechanisms different from those of normal data samples. Graphs, in particular social network graphs, may contain nodes and edges that are made by scammers, malicious programs or mistakenly by normal users. Detecting outlier nodes and edges is important for data mining and graph analytics. However, previous research in the field has mainly focused on detecting outlier nodes. In this article, we study the properties of edges and propose outlier edge detection algorithms using two random graph generation models. We found that the edge-ego-network, which can be defined as the induced graph that contains the two end nodes of an edge, their neighboring nodes and the edges that link these nodes, contains critical information for detecting outlier edges. We evaluated the proposed algorithms by injecting outlier edges into real-world graph data. Experimental results show that the proposed algorithms can effectively detect outlier edges. In particular, the algorithm based on the Preferential Attachment random graph generation model consistently gives good performance regardless of the test graph data. Furthermore, the proposed algorithms are not limited to the area of outlier edge detection. We demonstrate three different applications that benefit from the proposed algorithms: 1) a preprocessing tool that improves the performance of graph clustering algorithms; 2) an outlier node detection algorithm; and 3) a novel noisy data clustering algorithm. These applications show the great potential of the proposed outlier edge detection techniques.
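The edge-ego-network defined in the abstract is straightforward to extract with networkx. The outlier score below (ego-network density) is a deliberately simple stand-in for the paper's statistics derived from random graph generation models; it only illustrates the pipeline of scoring each edge by a property of its edge-ego-network.

```python
import networkx as nx

def edge_ego_network(G, u, v):
    """Induced subgraph on the two endpoints of edge (u, v) and all of
    their neighbors, following the definition quoted in the abstract."""
    nodes = {u, v} | set(G.neighbors(u)) | set(G.neighbors(v))
    return G.subgraph(nodes)

def edge_score(G, u, v):
    """Toy outlier score (illustrative, not the paper's statistic):
    density of the edge-ego-network; unusually sparse ego-networks
    flag candidate outlier edges."""
    ego = edge_ego_network(G, u, v)
    n, m = ego.number_of_nodes(), ego.number_of_edges()
    return m / (n * (n - 1) / 2)

G = nx.karate_club_graph()
scores = {(u, v): edge_score(G, u, v) for u, v in G.edges()}
# edges with the sparsest ego-networks come first
print(sorted(scores.items(), key=lambda kv: kv[1])[:5])
```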
Universal Darwinism as a process of Bayesian inference ; Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus natural selection serves as a counterexample to a widely-held interpretation that restricts Bayesian inference to human mental processes, including the endeavors of statisticians. As Bayesian inference can always be cast in terms of variational free energy minimization, natural selection can be viewed as comprising two components: a generative model of an experiment in the external world (environment), and the results of that experiment, or the surprise entailed by the mismatch between predicted and actual outcomes of the experiment. Minimization of free energy implies that the implicit measure of surprise experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian processes proposed both by Dawkins, in terms of replicators and vehicles, and by Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models, and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
Open Innovation and Triple Helix Models of Innovation: Can Synergy in Innovation Systems Be Measured? ; The model of Open Innovation (OI) can be compared with the Triple Helix of University-Industry-Government Relations (TH) as attempts to find surplus value in bringing industrial innovation closer to public R&D. Whereas the firm is central in the model of OI, the TH adds multi-centeredness: in addition to firms, universities and (e.g., regional) governments can take leading roles in innovation ecosystems. In addition to the transversal technology transfer at each moment of time, one can focus on the dynamics in the feedback loops. Under specifiable conditions, feedback loops can be turned into feedforward ones that drive innovation ecosystems towards self-organization and the autocatalytic generation of new options. The generation of options can be more important than historical realizations (best practices) for the longer-term viability of knowledge-based innovation systems. A system without sufficient options, for example, is locked-in. The generation of redundancy, the Triple Helix indicator, can be used as a measure of unrealized but technologically feasible options given a historical configuration. Different coordination mechanisms (markets, policies, knowledge) provide different perspectives on the same information and thus generate redundancy. Increased redundancy not only stimulates innovation in an ecosystem by reducing the prevailing uncertainty; it also enhances the synergy in and innovativeness of an innovation system.
On Performance Modeling for MANETs under General Limited Buffer Constraint ; Understanding the real achievable performance of mobile ad hoc networks (MANETs) under practical network constraints is of great importance for their applications in future highly heterogeneous wireless network environments. This paper explores, for the first time, performance modeling for MANETs under a general limited-buffer constraint, where each network node maintains a limited source buffer of size Bs to store its locally generated packets and also a limited shared relay buffer of size Br to store relay packets for other nodes. Based on queuing theory and birth-death chain theory, we first develop a general theoretical framework to fully depict the source/relay buffer occupancy process in such a MANET, which applies to any distributed MAC protocol and any mobility model that leads to a uniform distribution of nodes' locations in steady state. With the help of this framework, we then derive exact expressions for several key network performance metrics, including achievable throughput, throughput capacity, and expected end-to-end delay. We further conduct case studies under two network scenarios and provide the corresponding theoretical/simulation results to demonstrate the application as well as the efficiency of our theoretical framework. Finally, we present extensive numerical results to illustrate the impacts of the buffer constraint on the performance of a buffer-limited MANET.
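The buffer-occupancy analysis rests on standard birth-death chain machinery. A minimal sketch of the stationary distribution of a finite birth-death chain is given below; the arrival/service rates are illustrative placeholders, not the rates the paper derives from the MAC protocol and mobility model.

```python
import numpy as np

def stationary_occupancy(B, lam, mu):
    """Stationary distribution of a finite birth-death chain on states
    0..B, with birth (packet arrival) rate lam[i] in state i and death
    (packet departure) rate mu[i] in state i. Uses the classical product
    form pi_k proportional to prod_{i<k} lam[i]/mu[i+1]."""
    w = np.ones(B + 1)
    for k in range(1, B + 1):
        w[k] = w[k - 1] * lam[k - 1] / mu[k]
    return w / w.sum()

B = 10
lam = [0.3] * B + [0.0]   # arrivals are blocked when the buffer is full
mu = [0.0] + [0.4] * B    # no departures from an empty buffer
pi = stationary_occupancy(B, lam, mu)
print("P(buffer full) =", pi[-1])                  # packet-loss proxy
print("mean occupancy =", (np.arange(B + 1) * pi).sum())
```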
Robust nonparametric nearest neighbor random process clustering ; We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their generative models, without prior knowledge of the model statistics and of the number of generative models. Two algorithms, both using the L1 distance between estimated power spectral densities (PSDs) as a measure of dissimilarity, are analyzed. The first one, termed nearest neighbor process clustering (NNPC), relies on partitioning the nearest neighbor graph of the observations via spectral clustering. The second algorithm, simply referred to as k-means (KM), consists of a single k-means iteration with farthest point initialization and was considered before in the literature, albeit with a different dissimilarity measure. We prove that both algorithms succeed with high probability in the presence of noise and missing entries, and even when the generative process PSDs overlap significantly, all provided that the observation length is sufficiently large. Our results quantify the tradeoff between the overlap of the generative process PSDs, the observation length, the fraction of missing entries, and the noise variance. Finally, we provide extensive numerical results for synthetic and real data and find that NNPC outperforms state-of-the-art algorithms in human motion sequence clustering.
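A compact sketch of the NNPC pipeline described above (periodogram PSD estimates, pairwise L1 dissimilarities, spectral clustering of the k-nearest-neighbor graph) might look as follows. The PSD estimator and neighborhood size are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from scipy.signal import periodogram
from sklearn.cluster import SpectralClustering

def nnpc(observations, n_clusters, n_neighbors=5, fs=1.0):
    """Nearest neighbor process clustering sketch: estimate each
    observation's PSD, use pairwise L1 distances between PSD estimates
    as dissimilarities, then spectrally cluster the k-NN graph."""
    psds = np.array([periodogram(x, fs=fs)[1] for x in observations])
    d = np.abs(psds[:, None, :] - psds[None, :, :]).sum(-1)   # L1 distances
    nn = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]          # k nearest
    A = np.zeros_like(d)
    for i, js in enumerate(nn):
        A[i, js] = 1.0
    A = np.maximum(A, A.T)                                    # symmetrize
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

# Two generative models: white noise vs. a low-pass filtered process.
rng = np.random.default_rng(0)
obs = [rng.standard_normal(512) for _ in range(10)] + \
      [np.convolve(rng.standard_normal(512), np.ones(8) / 8, "same")
       for _ in range(10)]
print(nnpc(obs, n_clusters=2))
```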
Learning to Generate Posters of Scientific Papers by Probabilistic Graphical Models ; Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time-consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework that utilizes graphical models is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements, are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called the NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
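The recursive page-splitting idea can be illustrated with a deliberately simplified sketch: panels carry relative size weights (in the paper these would come from inferred panel attributes), and the page is bisected recursively so that sub-areas match the weights. The splitting rule below (alternate cut direction via aspect ratio) is an assumption for illustration, not the paper's algorithm.

```python
def split_page(x, y, w, h, panels):
    """Recursively place panels, given as (name, weight) pairs, inside
    the rectangle (x, y, w, h) by bisecting it so that each side's area
    is proportional to its total weight. Wide regions are cut
    vertically, tall regions horizontally."""
    if len(panels) == 1:
        return [(panels[0][0], (x, y, w, h))]
    k = len(panels) // 2
    left, right = panels[:k], panels[k:]
    frac = sum(p[1] for p in left) / sum(p[1] for p in panels)
    if w >= h:   # vertical cut
        return (split_page(x, y, w * frac, h, left) +
                split_page(x + w * frac, y, w * (1 - frac), h, right))
    return (split_page(x, y, w, h * frac, left) +
            split_page(x, y + h * frac, w, h * (1 - frac), right))

layout = split_page(0, 0, 1.0, 1.4,
                    [("intro", 1), ("method", 3), ("results", 2), ("refs", 1)])
for name, box in layout:
    print(name, tuple(round(v, 3) for v in box))
```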
Stopping GAN Violence: Generative Unadversarial Networks ; While the costs of human violence have attracted a great deal of attention from the research community, the effects of the network-on-network (NoN) violence popularised by Generative Adversarial Networks have yet to be addressed. In this work, we quantify the financial, social, spiritual, cultural, grammatical and dermatological impact of this aggression and address the issue by proposing a more peaceful approach which we term Generative Unadversarial Networks (GUNs). Under this framework, we simultaneously train two models: a generator G that does its best to capture whichever data distribution it feels it can manage, and a motivator M that helps G to achieve its dream. Fighting is strictly verboten and both models evolve by learning to respect their differences. The framework is both theoretically and electrically grounded in game theory, and can be viewed as a winner-shares-all two-player game in which both players work as a team to achieve the best score. Experiments show that by working in harmony, the proposed model is able to claim both the moral and log-likelihood high ground. Our work builds on a rich history of carefully argued position papers, published as anonymous YouTube comments, which prove that the optimal solution to NoN violence is more GUNs.
Deep generative-contrastive networks for facial expression recognition ; As the expressive depth of an emotional face differs with individuals and expressions, recognizing an expression using a single facial image at a moment in time is difficult. A relative expression of a query face compared to a reference face might alleviate this difficulty. In this paper, we propose to utilize a contrastive representation that embeds a distinctive expressive factor for a discriminative purpose. The contrastive representation is calculated at the embedding layer of deep networks by comparing a given (query) image with a reference image. We attempt to utilize a generative reference image that is estimated based on the given image. Consequently, we deploy deep neural networks that embed a combination of a generative model, a contrastive model, and a discriminative model, trained in an end-to-end manner. In our proposed networks, we attempt to disentangle a facial expressive factor in two steps, including learning of a generator network and a contrastive encoder network. We conducted extensive experiments on publicly available face expression databases (CK+, MMI, Oulu-CASIA) and in-the-wild databases that have been widely adopted in the recent literature. The proposed method outperforms the known state-of-the-art methods in terms of recognition accuracy.
Interferometric Constraints on Quantum Geometrical Shear Noise Correlations ; Final measurements and analysis are reported from the first-generation Holometer, the first instrument capable of measuring correlated variations in spacetime position at strain noise power spectral densities smaller than a Planck time. The apparatus consists of two co-located, but independent and isolated, 40 m power-recycled Michelson interferometers, whose outputs are cross-correlated to 25 MHz. The data are sensitive to correlations of differential position across the apparatus over a broad band of frequencies up to and exceeding the inverse light crossing time, 7.6 MHz. By measuring with Planck precision the correlation of position variations at spacelike separations, the Holometer searches for faint, irreducible correlated position noise backgrounds predicted by some models of quantum spacetime geometry. The first-generation optical layout is sensitive to quantum geometrical noise correlations with shear symmetry: those that can be interpreted as a fundamental non-commutativity of spacetime position in orthogonal directions. General experimental constraints are placed on parameters of a set of models of spatial shear noise correlations, with a sensitivity that exceeds the Planck-scale holographic information bound on position states by a large factor. This result significantly extends the upper limits placed on models of directional non-commutativity by currently operating gravitational wave observatories.
The Robot Routing Problem for Collecting Aggregate Stochastic Rewards ; We propose a new model for formalizing reward collection problems on graphs with dynamically generated rewards, which may appear and disappear based on a stochastic model. The robot routing problem is modeled as a graph whose nodes are stochastic processes generating potential rewards over discrete time. The rewards are generated according to the stochastic process, but at each step an existing reward disappears with a given probability. The edges in the graph encode the unit-distance paths between the rewards' locations. On visiting a node, the robot collects the accumulated reward at the node at that time, but traveling between the nodes takes time. The optimization question asks to compute an optimal (or epsilon-optimal) path that maximizes the expected collected rewards. We consider the finite- and infinite-horizon robot routing problems. For finite horizon, the goal is to maximize the total expected reward, while for infinite horizon we consider limit-average objectives. We study the computational and strategy complexity of these problems, establish NP lower bounds and show that optimal strategies require memory in general. We also provide an algorithm for computing epsilon-optimal infinite paths for arbitrary epsilon > 0.
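A quick simulation clarifies the reward model at a single node: with one unit reward appearing per step with probability p and each existing reward independently disappearing with probability q, the expected accumulated reward converges to p/q. The parameters below are illustrative, and this toy ignores the routing aspect entirely.

```python
import numpy as np

def expected_waiting_reward(p, q, T=200_000, seed=0):
    """Monte Carlo estimate of the steady-state expected reward sitting
    at one node of the abstract's model. Each step: every existing
    reward vanishes with probability q, then a new unit reward appears
    with probability p. Closed form for this special case: p / q."""
    rng = np.random.default_rng(seed)
    rewards, total, count = 0, 0, 0
    for t in range(T):
        rewards -= rng.binomial(rewards, q)   # independent disappearance
        rewards += rng.random() < p           # new reward arrives w.p. p
        if t > T // 10:                       # discard burn-in
            total += rewards
            count += 1
    return total / count

print(expected_waiting_reward(0.3, 0.1), "vs closed form", 0.3 / 0.1)
```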
Unified Treatment of Spin Torques using a Coupled Magnetisation Dynamics and Three-Dimensional Spin Current Solver ; A three-dimensional spin current solver based on a generalised spin drift-diffusion description, including the spin Hall effect, is integrated with a magnetisation dynamics solver. The resulting model is shown to simultaneously reproduce the spin-orbit torques generated using the spin Hall effect, the spin pumping torques generated by magnetisation dynamics in multilayers, and the spin transfer torques acting on magnetisation regions with spatial gradients, whilst field-like and spin-like torques are reproduced in a spin valve geometry. Two approaches to modelling interfaces are analysed, one based on the spin mixing conductance and the other based on continuity of spin currents, where the spin dephasing length governs the absorption of transverse spin components. In both cases analytical formulas are derived for the spin-orbit torques in a heavy metal/ferromagnet bilayer geometry, showing that in general both field-like and damping-like torques are generated. The limitations of the analytical approach are discussed, showing that even in a simple bilayer geometry, due to the non-uniformity of the spin currents, a full three-dimensional treatment is required. Finally the model is applied to the quantitative analysis of the spin Hall angle in Pt by reproducing published experimental data on the ferromagnetic resonance linewidth in the bilayer geometry.
Cosmological Asymptotics in Higher-Order Gravity Theories ; We study the early-time behavior of isotropic and homogeneous solutions in vacuum as well as radiation-filled cosmological models in the full, effective, four-dimensional gravity theory with higher derivatives. We use asymptotic methods to analyze all possible ways of approach to the initial singularity of such universes. In order to do so, we construct autonomous dynamical systems that describe the evolution of these models, and decompose the associated vector fields. We prove that, at early times, all flat vacua as well as general curved ones are globally attracted by the universal square root scaling solution. Open vacua, on the other hand, show, in both the future and past directions, a dominant asymptotic approach to horizon-free, Milne states that emerge from initial data sets of smaller dimension. Closed universes exhibit more complex logarithmic singularities. Our results on asymptotic stability show a possible relation to cyclic and ekpyrotic cosmologies at the passage through the singularity. In the case of radiation-filled universes of the same class, we show the essential uniqueness and stability of the resulting asymptotic scheme, once more dominated by t^{1/2}, in all cases except perhaps that of the conformally invariant Bach-Weyl gravity. In all cases, we construct a formal series representation, valid near the initial singularity, of the general solution of these models and prove that curvature as well as radiation play a subdominant role in the dominating form. A discussion is also made of the implications of these results for the generic initial state of the theory.
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations ; The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.
Meta Learning Framework for Automated Driving ; The success of automated driving deployment depends highly on the ability to develop an efficient and safe driving policy. The problem is well formulated under the framework of optimal control as a cost optimization problem. Model-based solutions using traditional planning are efficient, but require knowledge of the environment model. On the other hand, model-free solutions suffer from sample inefficiency and require too many interactions with the environment, which is infeasible in practice. Methods under the reinforcement learning framework usually require the notion of a reward function, which is not available in the real world. Imitation learning helps in improving sample efficiency by introducing prior knowledge obtained from the demonstrated behavior, at the risk of exact behavior cloning without generalizing to unseen environments. In this paper we propose a meta-learning framework, based on dataset aggregation, to improve the generalization of imitation learning algorithms. Under the proposed framework, we propose MetaDAgger, a novel algorithm which tackles the generalization issues in traditional imitation learning. We use The Open Racing Car Simulator (TORCS) to test our algorithm. Results on unseen test tracks show significant improvement over traditional imitation learning algorithms, improving learning time and sample efficiency at the same time. The results are also supported by visualization of the learnt features to demonstrate the generalization of the captured details.
Transmission line parameter identification using PMU measurements ; Accurate knowledge of transmission line (TL) impedance parameters helps to improve accuracy in relay settings and power flow modeling. To improve TL parameter estimates, various algorithms have been proposed in the past to identify TL parameters based on measurements from Phasor Measurement Units (PMUs). These methods are based on positive sequence TL models and can generate accurate positive sequence impedance parameters for a fully-transposed TL when measurement noise is absent; however, these methods may generate erroneous parameters when the TLs are not fully transposed or when measurement noise is present. PMU field-measured data are often corrupted with noise, and this noise is problematic for all parameter identification algorithms, particularly so when applied to short transmission lines. This paper analyzes the limitations of the positive sequence TL model when used for parameter estimation of TLs that are untransposed, and proposes a novel method using linear estimation theory to identify TL parameters more reliably. This method can be used for the most general case: short or long lines that are fully transposed or untransposed and have balanced or unbalanced loads. Besides the positive or negative sequence impedance parameters, the proposed method can also be used to estimate the zero sequence parameters and the mutual impedances between different sequences. This paper also examines the influence of noise in the PMU data on the calculation of TL parameters. Several case studies are conducted based on simulated data from ATP to validate the effectiveness of the new method. Through comparison of the results generated by this method and several others, the effectiveness of the proposed approach is demonstrated.
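For intuition about the estimation-theoretic core of such methods, the toy sketch below recovers a short line's series impedance from noisy synchronized phasors by complex least squares. Unlike the paper's method, it ignores transposition, sequence coupling and shunt elements; all numbers are illustrative.

```python
import numpy as np

# Short-line model: Vs - Vr = Z * I, with one complex unknown Z,
# estimated by least squares over many synchronized PMU snapshots.
rng = np.random.default_rng(1)
n = 200
Z_true = 0.05 + 0.5j                                   # per-unit impedance
I = rng.normal(1, 0.2, n) * np.exp(1j * rng.uniform(-0.3, 0.3, n))
Vr = 1.0 * np.exp(1j * rng.uniform(-0.1, 0.1, n))
Vs = Vr + Z_true * I

# additive complex measurement noise on all phasors
noise = lambda s: rng.normal(0, s, n) + 1j * rng.normal(0, s, n)
Vs_m, Vr_m, I_m = Vs + noise(5e-3), Vr + noise(5e-3), I + noise(5e-3)

A = I_m.reshape(-1, 1)          # regressor: measured current phasors
b = Vs_m - Vr_m                 # response: voltage drop along the line
Z_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true Z:", Z_true, " estimated Z:", Z_hat[0])
```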
A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality ; No generic function for the minicolumn, i.e., one that would apply equally well to all cortical areas and species, has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that (a) a macrocolumn's function is to store sparse distributed representations of its inputs and to be a recognizer of those inputs; and (b) the generic function of the minicolumn is to enforce macrocolumnar code sparseness. The minicolumn, defined here as a physically localized pool of about 20 L2/3 pyramidals, does this by acting as a winner-take-all (WTA) competitive module, implying that macrocolumnar codes consist of about 70 active L2/3 cells, assuming 70 minicolumns per macrocolumn. I describe an algorithm for activating these codes during both learning and retrieval, which causes more similar inputs to map to more highly intersecting codes, a property which yields ultrafast (immediate, first-shot) storage and retrieval. The algorithm achieves this by adding an amount of randomness (noise) into the code selection process that is inversely proportional to an input's familiarity. I propose a possible mapping of the algorithm onto cortical circuitry, and adduce evidence for a neuromodulatory implementation of this familiarity-contingent noise mechanism. The model is distinguished from other recent columnar cortical circuit models in proposing a generic minicolumnar function in which a group of cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of the sparse distributed macrocolumnar code.
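A minimal sketch of the familiarity-contingent noise mechanism: softmax WTA within each minicolumn, with the temperature (noise) inversely tied to input familiarity. The bottom-up drives here are random stand-ins for what, in the model, would come from learned synaptic weights, and the specific temperature schedule is an illustrative assumption.

```python
import numpy as np

def select_macro_code(familiarity, n_mini=70, n_cells=20, seed=0):
    """Pick one winner per minicolumn (WTA over its L2/3 cells).
    Noise (softmax temperature) is high for novel inputs, so they draw
    nearly random, well-separated codes; familiar inputs select winners
    near-deterministically, reusing their stored code.
    `familiarity` in [0, 1]: max match of the input to stored inputs."""
    rng = np.random.default_rng(seed)
    temperature = 1e-3 + 5.0 * (1.0 - familiarity)
    code = []
    for _ in range(n_mini):
        u = rng.standard_normal(n_cells)          # stand-in bottom-up drive
        p = np.exp((u - u.max()) / temperature)   # stable softmax
        code.append(rng.choice(n_cells, p=p / p.sum()))
    return np.array(code)                         # one winner per minicolumn

print(select_macro_code(0.95)[:10])  # familiar: near-argmax winners
print(select_macro_code(0.10)[:10])  # novel: near-uniform random winners
```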
Resilient Energy Allocation Model for Supply Shortage Outages ; Supply shortage outages are a major concern during peak demand in developing countries. In the Philippines, commercial loads have unused backup generation of up to 3000 MW, while at the same time there are shortages of as much as 700 MW during peak demand. This gives utilities the incentive to implement Demand Response programs to minimize this shortage. But when considering Demand Response from a modeling perspective, social welfare through profit is always the major objective for program implementation. That isn't always the case during an emergency situation, as there can be a tradeoff between grid resilience and the cost of electricity. The question is how the Distribution Utility (DU) shall optimally allocate the unused generation to meet the shortage when this tradeoff exists. We formulate a combined multi-objective optimal dispatch model in which we can make a direct comparison between the least-cost and resilience objectives. We find that this tradeoff is due to the monotonically increasing nature of energy cost functions. If the supply is larger than the demand, the DU can perform a least-cost approach in the optimal dispatch, since maximizing the energy generated in this case can lead to multiple solutions. We also find in our simulations that in cases where the supply of energy from the customers is less than the shortage quantity, the DU must prioritize maximizing the generated energy rather than minimizing cost.
An integrated quasi-Monte Carlo method for handling high-dimensional problems with discontinuities in financial engineering ; The quasi-Monte Carlo (QMC) method is a useful numerical tool for pricing and hedging of complex financial derivatives. These problems are usually of high dimensionality and involve discontinuities, two factors that may significantly deteriorate the performance of the QMC method. This paper develops an integrated method that overcomes the challenges of high dimensionality and discontinuities concurrently. For this purpose, a smoothing method is proposed to remove the discontinuities for some typical functions arising from financial engineering. To make the smoothing method applicable to more general functions, a new path generation method is designed for simulating the paths of the underlying assets such that the resulting function has the required form. The new path generation method has the additional power of reducing the effective dimension of the target function. Our proposed method caters for a large variety of model specifications, including the Black-Scholes, exponential normal inverse Gaussian Lévy, and Heston models. Numerical experiments dealing with these models show that in the QMC setting the proposed smoothing method, in combination with the new path generation method, can lead to a dramatic variance reduction for pricing exotic options with discontinuous payoffs and for calculating options' Greeks. An investigation of the effective dimension and related characteristics explains the significant enhancement of the combined procedure.
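For reference, a baseline QMC pricing run (Sobol points with the standard incremental path construction) for an arithmetic Asian call, whose payoff has a kink rather than a jump, looks as follows. The paper's smoothing and tailored path generation, which are its actual contributions, are not reproduced here; parameters are illustrative.

```python
import numpy as np
from scipy.stats import qmc, norm

# Arithmetic-average Asian call under Black-Scholes, priced with
# scrambled Sobol points mapped to Gaussian increments.
S0, K, r, sigma, T, d = 100.0, 100.0, 0.05, 0.2, 1.0, 32
dt = T / d
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
u = sampler.random(2**14)                  # 16384 points in (0,1)^d
z = norm.ppf(u)                            # standard normal increments
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1)
avg = np.exp(logS).mean(axis=1)            # arithmetic average price
payoff = np.exp(-r * T) * np.maximum(avg - K, 0.0)
print("QMC price estimate:", payoff.mean())
```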
Intrinsic timing jitter and latency in superconducting nanowire single photon detectors ; We analyze the origin of the intrinsic timing jitter in superconducting nanowire single photon detectors (SNSPDs) in terms of fluctuations in the latency of the detector response, which is determined by the microscopic physics of the photon detection process. We demonstrate that fluctuations in the physical parameters which determine the latency give rise to the intrinsic timing jitter. We develop a general description of latency by introducing the explicit time dependence of the internal detection efficiency. By considering the dynamic Fano fluctuations together with static spatial inhomogeneities, we study the details of the connection between latency and timing jitter. We develop both a simple phenomenological model and a more general microscopic model of detector latency and timing jitter based on the solution of the generalized time-dependent Ginzburg-Landau equations for the 1D hot-belt geometry. While the analytical model is sufficient for qualitative interpretation of recent data, the general approach establishes the framework for a quantitative analysis of detector latency and the fundamental limits of intrinsic timing jitter. These theoretical advances can be used to interpret the results of recent experiments measuring the dependence of detection latency and timing jitter on photon energy to the few-picosecond level.
Boosting Noise Robustness of Acoustic Model via Deep Adversarial Training ; In realistic environments, speech is usually interfered with by various noise and reverberation, which dramatically degrades the performance of automatic speech recognition (ASR) systems. To alleviate this issue, the most common way is to use a well-designed speech enhancement approach as the front-end of ASR. However, this kind of method additionally incurs more complex pipelines, more computation and even higher hardware costs (e.g., a microphone array). In addition, speech enhancement can result in speech distortions and mismatches to training. In this paper, we propose an adversarial training method to directly boost the noise robustness of the acoustic model. Specifically, a jointly compositional scheme of a generative adversarial net (GAN) and a neural network-based acoustic model (AM) is used in the training phase. The GAN is used to generate clean feature representations from noisy features under the guidance of a discriminator that tries to distinguish between the true clean signals and the generated signals. The joint optimization of the generator, discriminator and AM concentrates the strengths of both GAN and AM for speech recognition. Systematic experiments on CHiME-4 show that the proposed method significantly improves the noise robustness of the AM and achieves average relative error rate reductions of 23.38% and 11.54% on the development and test sets, respectively.
Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis ; Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen the proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.
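The first idea, rewarding semantic correctness rather than the likelihood of one reference program, can be illustrated with a tiny REINFORCE loop over a toy straight-line DSL. The DSL, program length and I/O examples below are all illustrative and far simpler than the paper's setting; they only show the shape of the objective.

```python
import numpy as np

# Toy DSL: programs are fixed-length sequences of unary ops on an int.
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: 2 * x}
NAMES = list(OPS)
io_examples = [(1, 4), (2, 6), (5, 12)]        # target semantics: x -> 2*(x+1)

def run(prog, x):
    for op in prog:
        x = OPS[op](x)
    return x

L = 2                                          # program length
theta = np.zeros((L, len(NAMES)))              # per-position token logits
rng = np.random.default_rng(0)
for step in range(3000):
    probs = np.exp(theta) / np.exp(theta).sum(1, keepdims=True)
    toks = [rng.choice(len(NAMES), p=probs[t]) for t in range(L)]
    prog = [NAMES[i] for i in toks]
    # reward = 1 iff the sampled program is semantically correct,
    # i.e., it satisfies ALL I/O examples (any aliased solution counts)
    reward = float(all(run(prog, x) == y for x, y in io_examples))
    for t, i in enumerate(toks):               # REINFORCE: R * grad log pi
        grad = -probs[t]
        grad[i] += 1.0
        theta[t] += 0.1 * reward * grad
print("most likely program:", [NAMES[i] for i in theta.argmax(1)])
```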
Image-derived generative modeling of pseudo-macromolecular structures: towards the statistical assessment of Electron CryoTomography template matching ; Cellular Electron CryoTomography (CECT) is a 3D imaging technique that captures information about the structure and spatial organization of macromolecular complexes within single cells, in near-native state and at sub-molecular resolution. Although template matching is often used to locate macromolecules in a CECT image, it is insufficient as it only measures relative structural similarity. Therefore, it is preferable to assess the statistical credibility of the decision through hypothesis testing, which requires many templates derived from a diverse population of macromolecular structures. Due to the very limited number of known structures, we need a generative model to efficiently and reliably sample pseudo-structures from the complex distribution of macromolecular structures. To address this challenge, we propose a novel image-derived approach for performing hypothesis testing for template matching by constructing generative models using generative adversarial networks. Finally, we conducted hypothesis testing experiments for template matching on both simulated and experimental subtomograms, allowing us to conclude the identity of subtomograms with high statistical credibility and significantly reducing false positives.
Structure in Multimode Squeezing: A Generalised Bloch-Messiah Reduction ; Methods to decompose nonlinear optical transformations vary from setting to setting, leading to apparent differences in the treatments used to model photon pair sources compared to those used to model degenerate downconversion processes. The Bloch-Messiah reduction of Gaussian processes to single-mode squeezers and passive linear unitaries appears juxtaposed against the practicalities of the Schmidt decomposition of photon pair sources into two-mode squeezers and passive unitaries. Here, we present a general framework which unifies these forms as well as elucidating more general structure in multimode Gaussian transformations. The decomposition is achieved by introducing additional constraints into the Bloch-Messiah reduction used to diagonalise Gaussian processes, with these constraints motivated by physical considerations following from the inequivalence of different physical degrees of freedom in a system, i.e., the temporal-spectral degrees of freedom vs. different spatial modes in a transformation. The result is the emergence of the two-mode squeezing picture from the reduction, as well as the potential to generalise these constraints to accommodate spectral imperfections in a source generating 3-mode continuous variable GHZ-like states. Furthermore, we consider the practical scenario in which a transformation aims to generate a multiphoton entangled state, whereby spatial modes provide desirable degrees of freedom whilst undesired spectral mode structure contributes noise, and show that this spectral impurity can be efficiently modeled by finding an optimal low-dimensional basis for its simulation.
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples ; Adversarial examples are perturbed inputs designed to fool machine learning models. Most recent works on adversarial examples for image classification focus on directly modifying pixels with minor perturbations. A common requirement in all these works is that the malicious perturbations should be small enough (measured by an Lp norm for some p) so that they are imperceptible to humans. However, small perturbations can be unnecessarily restrictive and limit the diversity of adversarial examples generated. Further, an Lp-norm-based distance metric ignores important structure patterns hidden in images that are important to human perception. Consequently, even the minor perturbation introduced in recent works often makes the adversarial examples less natural to humans. More importantly, they often do not transfer well and are therefore less effective when attacking black-box models, especially those protected by a defense mechanism. In this paper, we propose a structure-preserving transformation (SPT) for generating natural and diverse adversarial examples with extremely high transferability. The key idea of our approach is to allow perceptible deviation in adversarial examples while keeping the structure patterns that are central to a human classifier. Empirical results on the MNIST and fashion-MNIST datasets show that adversarial examples generated by our approach can easily bypass strong adversarial training. Further, they transfer well to other target models with little or no loss in attack success rate.
General form of the renormalized, perturbed energy density via interacting quantum fields in cosmological spacetimes ; A covariant description of quantum matter fields in the early universe underpins models for the origin of species, e.g. baryogenesis and dark matter production. In nearly all cases the relevant cosmological observables are computed in a general approximation, via the standard irreducible representations found in the operator formalism of particle physics, where intricacies related to a renormalized stress-energy tensor in a non-stationary spacetime are ignored. Models of the early universe also include a dense environment of quantum fields, where far-from-equilibrium interactions manifest expressions for observables with substantive corrections to the leading terms. An alternative treatment of these cosmological observables may be carried out within the framework of algebraic quantum field theory in curved spacetime, where the field-theoretic model of quantum matter is compatible with the classical effects of general relativity. Here, we take the first step towards computing such an observable. We employ the algebraic formalism while considering far-from-equilibrium interactions in a dense environment under the influence of a classical, yet non-stationary, spacetime to derive an expression for the perturbed energy density as a component of the renormalized stress-energy tensor associated with common proposals for quantum matter production in the early universe.
Subadditivity Beyond Trees and the Chi-Squared Mutual Information ; In 2000, Evans et al. [Eva00] proved the subadditivity of the mutual information in the broadcasting on tree model with binary vertex labels and symmetric channels. They raised the question of whether such subadditivity extends to loopy graphs in some appropriate way. We recently proposed such an extension that applies to general graphs and binary vertex labels [AB18], using synchronization models and relying on percolation bounds. This extension requires, however, the edge channels to be symmetric on the product of the adjacent spins. A more general version of such a percolation bound that applies to asymmetric channels is also obtained in [PW18], relying on the SDPI, but the subadditivity property does not follow from such generalizations. In this note, we provide a new result showing that the subadditivity property still holds for arbitrary asymmetric channels acting on the product of spins, when the graphs are restricted to be series-parallel. The proof relies on the use of the chi-squared mutual information rather than the classical mutual information, and various properties of the former are discussed. We also present a generalization of the broadcasting on tree model (the synchronization on tree model) where the bound from [PW18] relying on the SDPI can be significantly looser than the bound resulting from the chi-squared subadditivity property presented here.
Simple Approximations of the SIR Meta Distribution in General Cellular Networks ; Compared to the standard success (coverage) probability, the meta distribution of the signal-to-interference ratio (SIR) provides much more fine-grained information about the network performance. We consider general heterogeneous cellular networks (HCNs) with base station tiers modeled by arbitrary stationary and ergodic non-Poisson point processes. The exact analysis of non-Poisson network models is notoriously difficult, even in terms of the standard success probability, let alone the meta distribution. Hence we propose a simple approach to approximate the SIR meta distribution for non-Poisson networks based on the ASAPPP (approximate SIR analysis based on the Poisson point process) method. We prove that the asymptotic horizontal gap G0 between the standard success probability of a given point process and that of the Poisson point process exactly characterizes the gap between the b-th moments of the conditional success probability, as the SIR threshold goes to 0. The gap G0 allows two simple approximations of the meta distribution for general HCNs: 1) the per-tier approximation, obtained by applying the shift G0 to each tier, and 2) the effective gain approximation, obtained by directly shifting the meta distribution for the homogeneous independent Poisson network. Given the generality of the model considered and the fine-grained nature of the meta distribution, these approximations work surprisingly well.
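A sketch of the effective gain approximation, assuming as a baseline the known moments of the conditional success probability for the Poisson cellular network with Rayleigh fading, M_b(theta) = 1 / 2F1(b, -delta; 1-delta; -theta) with delta = 2/alpha, and then shifting the SIR threshold by a gain G0. The value of G0 used below is purely illustrative; in practice it depends on the actual non-Poisson deployment.

```python
from scipy.special import hyp2f1

def moment_ppp(b, theta, alpha=4.0):
    """b-th moment of the conditional success probability for the
    Poisson cellular network (Rayleigh fading, nearest-BS association);
    for b = 1 this reduces to the classical coverage probability."""
    d = 2.0 / alpha
    return 1.0 / hyp2f1(b, -d, 1.0 - d, -theta)

def moment_shifted(b, theta, G0, alpha=4.0):
    """Effective gain approximation: reuse the Poisson expression with
    the threshold shifted by the asymptotic horizontal gap G0 (e.g., a
    deployment gain > 1 for more regular, lattice-like BS placements)."""
    return moment_ppp(b, theta / G0, alpha)

theta = 1.0   # 0 dB SIR threshold
for b in (1, 2):
    print(f"M_{b}: PPP = {moment_ppp(b, theta):.4f}, "
          f"shifted (G0 = 1.6) = {moment_shifted(b, theta, 1.6):.4f}")
```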
Tropical and Extratropical General Circulation with a Meridional Reversed Temperature Gradient as Expected in a High Obliquity Planet ; Planets with high obliquity receive more radiation in the polar regions than at low latitudes, and thus, assuming an ocean-covered surface with sufficiently high heat capacity, their meridional temperature gradient was shown to be reversed for the entire year. The objective of this work is to investigate the drastically different general circulation of such planets, with an emphasis on the tropical Hadley circulation and the midlatitude baroclinic eddy structure. We use a 3D dry dynamical core model, accompanied by an eddy-free configuration and a generalized 2D Eady model. When the meridional temperature gradient Ty is reversed, the Hadley cell remains in the same direction, because the surface wind pattern, and hence the associated meridional Ekman transport, is not changed, as required by the baroclinic eddy momentum transport. The Hadley cell under reversed Ty also becomes much shallower and weaker, even when the magnitude of the gradient is the same as in the normal case. The shallowness is due to the bottom-heavy structure of the baroclinic eddies in the reversed case, and the weakness is due to the weak wave activity. We propose a new mechanism to explain the midlatitude eddy structure in both cases, and verify it using the generalized Eady model. With seasonal variations included, the annual mean circulation resembles that under a perpetual annual mean setup. Approaching the solstices, a strong cross-equator Hadley cell forms in both cases, and about 2/3 of the Hadley circulation is driven by eddies, as shown by eddy-free simulations and by a decomposition of the Hadley cell.
Multimodal 3D Object Detection from Simulated Pretraining ; The need for simulated data in autonomous driving applications has become increasingly important, both for validation of pretrained models and for training new models. In order for these models to generalize to real-world applications, it is critical that the underlying dataset contains a variety of driving scenarios and that simulated sensor readings closely mimic real-world sensors. We present the Carla Automated Dataset Extraction Tool (CADET), a novel tool for generating training data from the CARLA simulator to be used in autonomous driving research. The tool is able to export high-quality, synchronized LIDAR and camera data with object annotations, and offers configuration to accurately reflect a real-life sensor array. Furthermore, we use this tool to generate a dataset consisting of 10,000 samples, and use this dataset to train the 3D object detection network AVOD-FPN, with fine-tuning on the KITTI dataset, in order to evaluate the potential for effective pretraining. We also present two novel LIDAR feature map configurations in Bird's Eye View for use with AVOD-FPN that can be easily modified. These configurations are tested on the KITTI and CADET datasets in order to evaluate their performance, as well as the usability of the simulated dataset for pretraining. Although insufficient to fully replace the use of real-world data, and generally not able to exceed the performance of systems fully trained on real data, our results indicate that simulated data can considerably reduce the amount of training on real data required to achieve satisfactory levels of accuracy.
Standard model Higgs field and hidden sector cosmology ; We consider scenarios where the inflaton field decays dominantly to a hidden dark matter (DM) sector. By studying the typical behavior of the Standard Model (SM) Higgs field during inflation, we derive a relation between the primordial tensor-to-scalar ratio r and the amplitude of the residual DM isocurvature perturbations beta, which is typically generated if the DM is thermally decoupled from the SM sector. We consider different expansion histories and find that if the Universe was radiation- or matter-dominated after inflation, a future discovery of primordial DM isocurvature will rule out all simple scenarios of this type, because generating observable beta from the Higgs is not possible without violating the bounds on r. Seen another way, the Higgs field is generically not a threat to models where both the inflaton and DM reside in a decoupled sector. However, this is not necessarily the case for an early kination-dominated epoch, as then the Higgs can source sizeable beta. We also discuss why the Higgs cannot source the observed curvature perturbation at large scales in any of the above cases, but how the field can still be the dominant source of curvature perturbations at small scales.
Attend to the beginning: A study on using bidirectional attention for extractive summarization ; Forum discussion data differ in both structure and properties from generic forms of textual data such as news. Hence, summarization techniques should, in turn, make use of such differences and craft models that can benefit from the structural nature of discussion data. In this work, we propose attending to the beginning of a document to improve the performance of extractive summarization models when applied to forum discussion data. Evaluations demonstrated that, with the help of a bidirectional attention mechanism, attending to the beginning of a document (the initial comment/post in a discussion thread) can introduce a consistent boost in ROUGE scores, as well as establish new state-of-the-art (SOTA) ROUGE scores on the forum discussions dataset. Additionally, we explored whether this hypothesis is extendable to other generic forms of textual data. We make use of the tendency to introduce important information early in the text by attending to the first few sentences in generic textual data. Evaluations demonstrated that attending to introductory sentences using bidirectional attention improves the performance of extractive summarization models even when applied to more generic forms of textual data.
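One way to realize this idea is to let every sentence attend to the opening post and vice versa, then score sentences for extraction from both views. The PyTorch sketch below is a minimal illustration under assumed dimensions and layer choices; it is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LeadBiAttention(nn.Module):
    """Toy sketch: score each sentence of a thread by attending to the first
    k_lead sentences (the initial post) and letting the lead attend back.
    d_model, k_lead, and the scoring head are illustrative assumptions."""
    def __init__(self, d_model=256, k_lead=3):
        super().__init__()
        self.k_lead = k_lead
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.score = nn.Linear(2 * d_model, 1)

    def forward(self, sents):              # sents: (batch, n_sents, d_model)
        lead = sents[:, :self.k_lead]      # embeddings of the initial post
        # sentence -> lead: each sentence in relation to the opening post
        s2l, _ = self.attn(sents, lead, lead)
        # lead -> sentences: a thread summary as seen from the opening post
        l2s, _ = self.attn(lead, sents, sents)
        lead_ctx = l2s.mean(dim=1, keepdim=True).expand_as(sents)
        feats = torch.cat([s2l, lead_ctx], dim=-1)
        return self.score(feats).squeeze(-1)  # per-sentence extraction logits

# Example: 2 threads of 30 sentence embeddings each
logits = LeadBiAttention()(torch.randn(2, 30, 256))
```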
Intra-Horizon Expected Shortfall and Risk Structure in Models with Jumps ; The present article deals with intra-horizon risk in models with jumps. Our general understanding of intra-horizon risk is along the lines of the approach taken in Boudoukh, Richardson, Stanton and Whitelaw (2004), Rossello (2008), Bhattacharyya, Misra and Kodase (2009), Bakshi and Panayotov (2010), and Leippold and Vasiljević (2019). In particular, we believe that quantifying market risk by strictly relying on point-in-time measures cannot be deemed a satisfactory approach in general. Instead, we argue that complementing this approach by studying measures of risk that capture the magnitude of losses potentially incurred at any time of a trading horizon is necessary when dealing with many financial positions. To address this issue, we propose an intra-horizon analogue of the expected shortfall for general profit-and-loss processes and discuss its key properties. Our intra-horizon expected shortfall is well-defined for many popular classes of Lévy processes encountered when modeling market dynamics and constitutes a coherent measure of risk, as introduced in Cheridito, Delbaen and Kupper (2004). On the computational side, we provide a simple method to derive the intra-horizon risk inherent to popular Lévy dynamics. Our general technique relies on results for maturity-randomized first-passage probabilities and allows for a derivation of diffusion and single-jump risk contributions. These theoretical results are complemented with an empirical analysis, where popular Lévy dynamics are calibrated to S&P 500 index data and an analysis of the resulting intra-horizon risk is presented.
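As a rough numerical illustration of the gap between point-in-time and intra-horizon risk, the sketch below estimates both by plain Monte Carlo under an assumed Merton-style jump-diffusion. All parameters are made up for illustration, and this brute-force approach stands in for, but is not, the authors' maturity-randomized first-passage technique.

```python
import numpy as np

def intra_horizon_es(mu=0.05, sigma=0.2, lam=1.0, jump_mu=-0.1, jump_sig=0.05,
                     T=10/252, n_steps=50, n_paths=100_000, alpha=0.99, seed=0):
    """Monte Carlo expected shortfall at level alpha for a jump-diffusion
    log-P&L. Returns (point-in-time ES of the terminal loss, intra-horizon ES
    of the worst running loss at the discrete monitoring dates)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Diffusive log-return increments (drift plus Brownian part)
    dW = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt),
                    (n_paths, n_steps))
    # Compound Poisson jumps: nj normal jumps per step, aggregated
    nj = rng.poisson(lam * dt, (n_paths, n_steps))
    jumps = rng.normal(jump_mu * nj, jump_sig * np.sqrt(nj))
    X = np.cumsum(dW + jumps, axis=1)           # log-P&L paths
    loss_T = -X[:, -1]                          # terminal loss only
    loss_IH = -X.min(axis=1)                    # worst loss during the horizon
    def es(losses):
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()
    return es(loss_T), es(loss_IH)

print(intra_horizon_es())  # the intra-horizon ES exceeds the point-in-time ES
```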
Generating random bigraphs with preferential attachment ; The bigraph theory is a relatively young, yet formally rigorous, mathematical framework that encompasses Robin Milner's previous work on process calculi, on the one hand, and provides a generic meta-model for complex systems such as multi-agent systems, on the other. A bigraph $F = \langle F^P, F^L \rangle$ is a superposition of two independent graph structures comprising a place graph $F^P$ (i.e., a forest) and a link graph $F^L$ (i.e., a hypergraph), sharing the same node set, to express locality and communication of processes independently from each other. In this paper, we take some preparatory steps towards an algorithm for generating random bigraphs with a preferential attachment feature w.r.t. $F^P$ and an assortative (disassortative) linkage pattern w.r.t. $F^L$. We employ parameters allowing one to fine-tune the characteristics of the generated bigraph structures. To study the pattern formation properties of our algorithmic model, we analyze several metrics from graph theory based on artificially created bigraphs under different configurations. Bigraphs provide a quite useful and expressive semantics for process calculi for mobile and global ubiquitous computing. So far, this subject has not received attention in the bigraph-related scientific literature. However, artificial models may be particularly useful for the simulation and evaluation of real-world applications in ubiquitous systems necessitating random structures.
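To make the place-graph side of this concrete, the following toy sketch grows a forest by preferential attachment. It covers only the $F^P$ component (the link graph and its assortativity parameters are omitted), and the parameters p_root and alpha are illustrative tuning knobs, not the paper's algorithm.

```python
import random

def random_place_graph(n, alpha=1.0, p_root=0.1, seed=0):
    """Grow a forest (a toy place graph F^P) of n nodes by preferential
    attachment: each new node becomes a fresh root with probability p_root,
    otherwise it attaches to an existing node chosen with probability
    proportional to (degree + alpha)."""
    rng = random.Random(seed)
    parent = {0: None}
    degree = {0: 0}
    for v in range(1, n):
        if rng.random() < p_root:
            parent[v] = None                  # start a new tree
            degree[v] = 0
        else:
            nodes = list(degree)
            u = rng.choices(nodes, weights=[degree[w] + alpha for w in nodes])[0]
            parent[v] = u                     # high-degree nodes attract more
            degree[u] += 1
            degree[v] = 1
    return parent                             # child -> parent map (None = root)
```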
Violation of generalized fluctuation theorems in adaptively driven steady states: Applications to hair cell oscillations ; The spontaneously oscillating hair bundle of sensory cells in the inner ear is an example of a stochastic, nonlinear oscillator driven by internal active processes. Moreover, this internal activity is adaptive: its power input depends on the current state of the system. We study fluctuation-dissipation relations in such adaptively driven, nonequilibrium limit-cycle oscillators. We observe the expected violation of the well-known, equilibrium fluctuation-dissipation theorem (FDT), and verify the existence of a generalized fluctuation-dissipation theorem (GFDT) in the non-adaptively driven model of the hair cell oscillator. This generalized fluctuation theorem requires the system to be analyzed in the comoving frame associated with the mean limit cycle of the stochastic oscillator. We then demonstrate, via numerical simulations and analytic calculations, that the adaptively driven dynamical hair cell model violates both the FDT and the GFDT. We go on to show, using stochastic, finite-state, dynamical models, that such a feedback-controlled drive in stochastic limit-cycle oscillators generically violates both the FDT and the GFDT. We propose that one may in fact use the breakdown of the GFDT as a tool to more broadly look for and quantify the effect of adaptive feedback mechanisms associated with driven nonequilibrium biological dynamics.
Data Smashing ; Investigation of the underlying physics or biology from empirical data requires a quantifiable notion of similarity: when do two observed data sets indicate nearly identical generating processes, and when do they not? The discriminating characteristics to look for in data are often determined by heuristics designed by experts; e.g., distinct shapes of folded light curves may be used as features to classify variable stars, while determination of pathological brain states might require a Fourier analysis of brainwave activity. Finding good features is nontrivial. Here, we propose a universal solution to this problem: we delineate a principle for quantifying similarity between sources of arbitrary data streams, without a priori knowledge, features, or training. We uncover an algebraic structure on a space of symbolic models for quantized data, and show that such stochastic generators may be added and uniquely inverted, and that a model and its inverse always sum to the generator of flat white noise. Therefore, every data stream has an anti-stream: data generated by the inverse model. Similarity between two streams, then, is the degree to which one, when summed to the other's anti-stream, mutually annihilates all statistical structure to noise. We call this data smashing. We present diverse applications, including disambiguation of brainwaves pertaining to epileptic seizures, detection of anomalous cardiac rhythms, and classification of astronomical objects from raw photometry. In our examples, the data smashing principle, without access to any domain knowledge, meets or exceeds the performance of specialized algorithms tuned by domain experts.
Dynamics of stellar wind in a Roche potential: implications for (i) outflow periodicities relevant to astronomical masers, and (ii) generation of baroclinicity ; We study the dynamics of the stellar wind from one of the bodies in a binary system, where the other body interacts only gravitationally. We focus on the following three issues: (i) we explore the origin of observed periodic variations in maser intensity; (ii) we address the nature of bipolar molecular outflows; and (iii) we show the generation of baroclinicity in the same model setup. From direct numerical simulations and further numerical modelling, we find that the maser intensity along a given line of sight varies periodically due to periodic modulation of material density. This modulation period is of the order of the binary period. Another feature of this model is that the velocity structure of the flow remains unchanged with time in late stages of wind evolution. Therefore the location of the masing spot along the chosen sightline stays at the same spatial location, thus naturally explaining the observational fact. This also gives an appearance of bipolar nature in the standard position-velocity diagram, as has been observed in a number of molecular outflows. Remarkably, we also find the generation of baroclinicity in the flow around the binary system, offering another site where seed magnetic fields could possibly be generated due to the Biermann battery mechanism, within galaxies.
Twisted spectral geometry for the standard model ; The Higgs field is a connection one-form, like the other bosonic fields, provided one describes space no longer as a manifold M but as a slightly noncommutative generalization of it. This is well encoded within the theory of spectral triples: all the bosonic fields of the standard model, including the Higgs, are obtained on the same footing, as fluctuations of a generalized Dirac operator by a matrix-valued algebra of functions on M. In the commutative case, fluctuations of the usual free Dirac operator by the complex-valued algebra A of smooth functions on M vanish, and so do not generate any bosonic field. We show that imposing a twist in the sense of Connes-Moscovici forces one to double the algebra A, but does not require modifying the space of spinors on which it acts. This opens the way to twisted fluctuations of the free Dirac operator, which yield a perturbation of the spin connection. Applied to the standard model, a similar twist yields in addition the extra scalar field needed to stabilize the electroweak vacuum, and to make the computation of the Higgs mass in noncommutative geometry compatible with its experimental value.
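For orientation, in the untwisted setting the bosonic fields arise as inner fluctuations of the Dirac operator. A standard form of this fluctuation, quoted here from the general spectral-triple literature as background (not taken verbatim from this paper's twisted construction), is

```latex
D \;\longrightarrow\; D_A \;=\; D + A + JAJ^{-1},
\qquad A \;=\; \sum_i a_i\,[D, b_i], \quad a_i, b_i \in \mathcal{A},
```

where $J$ is the real structure. In the twisted case, the commutator is replaced by a twisted commutator of the form $[D, b]_\rho = D\,b - \rho(b)\,D$ for an automorphism $\rho$ of the algebra.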
Magnetically-induced outflows from binary neutron star merger remnants ; Recent observations by the Swift satellite have revealed long-lasting ($\sim 10^2$-$10^5\,\mathrm{s}$), plateau-like X-ray afterglows in the vast majority of short gamma-ray burst events. This has put forward the idea of a long-lived millisecond magnetar central engine being generated in a binary neutron star (BNS) merger and being responsible for the sustained energy injection over these timescales (the magnetar model). We elaborate here on recent simulations that investigate the early evolution of such a merger remnant in general-relativistic magnetohydrodynamics. These simulations reveal very different conditions from those usually assumed for dipole spin-down emission in the magnetar model. In particular, the surroundings of the newly formed NS are polluted by baryons due to a dense, highly magnetized and isotropic wind from the stellar surface that is induced by magnetic field amplification in the interior of the star. The timescales and luminosities of this wind are compatible with early X-ray afterglows, such as the extended emission. These isotropic winds are a generic feature of BNS merger remnants and thus represent an attractive alternative to current models of early X-ray afterglows. Further implications for BNS mergers and short gamma-ray bursts are discussed.
Towards the Natural Gauge Mediation ; Sweet spot supersymmetry (SUSY) solves the $\mu$ problem in the Minimal Supersymmetric Standard Model (MSSM) with gauge-mediated SUSY breaking (GMSB) via the generalized Giudice-Masiero (GM) mechanism, where only the $\mu$-term and soft Higgs masses are generated at the unification scale of the Grand Unified Theory (GUT), due to the approximate PQ symmetry. Because all the other SUSY-breaking soft terms are generated via the GMSB below the GUT scale, there exists a SUSY electroweak (EW) fine-tuning problem in explaining the 125 GeV Higgs boson mass, due to the small trilinear soft term. Thus, to explain the Higgs boson mass, we propose GMSB with both the generalized GM mechanism and Higgs-messenger interactions. The renormalization group equations are run from the GUT scale down to the EW scale, so EW symmetry breaking can be realized more easily. We keep the gauge coupling unification and the solution to the flavor problem in the GMSB, and also solve the $\mu$/$B_\mu$ problem. Moreover, there are only five free parameters in our model, so we can determine the characteristic low-energy spectra and explore their distinct phenomenology. The low-scale fine-tuning measure can be as low as 20, with the light stop mass below 1 TeV and the gluino mass below 2 TeV. The gravitino dark matter can come from thermal production with the correct relic density and be consistent with thermal leptogenesis. Because the gluino and stop can be relatively light in our model, searching for such GMSB at the upcoming Run II of the LHC experiment could be very interesting.
User Preferences Modeling and Learning for Pleasing Photo Collage Generation ; In this paper we consider how to automatically create pleasing photo collages by placing a set of images on a limited canvas area. The task is formulated as an optimization problem. Differently from existing state-of-the-art approaches, we here exploit subjective experiments to model and learn pleasantness from user preferences. To this end, we design an experimental framework for the identification of the criteria that need to be taken into account to generate a pleasing photo collage. Five different thematic photo datasets are used to create collages using state-of-the-art criteria. A first subjective experiment, where several subjects evaluated the collages, emphasizes that different criteria are involved in the subjective definition of pleasantness. We then identify new global and local criteria and design algorithms to quantify them. The relative importance of these criteria is automatically learned by exploiting the user preferences, and new collages are generated. To validate our framework, we performed several psycho-visual experiments involving different users. The results show that the proposed framework allows us to learn a novel computational model which effectively encodes an inter-user definition of pleasantness. The learned definition of pleasantness generalizes well to new photo datasets of different themes and sizes not used in the learning. Moreover, compared with two state-of-the-art approaches, the collages created using our framework are preferred by the majority of the users.
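A simple way to learn the relative importance of collage criteria from pairwise user preferences is a Bradley-Terry-style logistic model over criterion-score differences. The sketch below is a hypothetical stand-in for the paper's learning procedure; the function and variable names are assumptions.

```python
import numpy as np

def learn_criteria_weights(scores_a, scores_b, prefers_a, lr=0.1, n_iter=2000):
    """Fit weights w so that sigmoid(w . (f(A) - f(B))) predicts the probability
    that a user prefers collage A over B. scores_a, scores_b: (n_pairs,
    n_criteria) arrays of per-criterion scores; prefers_a: 0/1 labels."""
    X = scores_a - scores_b
    y = np.asarray(prefers_a, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):                     # plain gradient ascent on the
        p = 1.0 / (1.0 + np.exp(-X @ w))        # logistic log-likelihood
        w += lr * X.T @ (y - p) / len(y)
    return w                                    # importance of each criterion
```

New collages can then be generated by maximizing the weighted sum of criterion scores with the learned w.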
Large Noise in Variational Regularization ; In this paper we consider variational regularization methods for inverse problems with large noise, which is in general unbounded in the image space of the forward operator. We introduce a Banach space setting that allows us to define a reasonable notion of solutions for more general noise in a larger space, provided one has sufficient mapping properties of the forward operators. A key observation, which guides us through the subsequent analysis, is that such a general noise model can be understood within the same setting as approximate source conditions, while a standard model of bounded noise is related directly to classical source conditions. Based on this insight we obtain a quite general existence result for regularized variational problems and derive error estimates in terms of Bregman distances. The latter are specialized for the particularly important cases of one- and p-homogeneous regularization functionals. As a natural further step we study stochastic noise models and in particular white noise, for which we derive error estimates in terms of the expectation of the Bregman distance. The finiteness of certain expectations leads to a novel class of abstract smoothness conditions on the forward operator, which can be easily interpreted in the Hilbert space case. We finally exemplify the approach, and in particular the conditions, for popular examples of regularization functionals given by the squared norm, the Besov norm and the total variation, respectively.
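For reference, the Bregman distance in which such error estimates are stated is the standard one for a convex regularization functional $R$ (quoted here in generic form; the paper's precise variants may differ):

```latex
D_R^{\,p}(u, v) \;=\; R(u) - R(v) - \langle p,\, u - v \rangle,
\qquad p \in \partial R(v),
```

where $\partial R(v)$ denotes the subdifferential of $R$ at $v$. For $R(u) = \tfrac12\|u\|^2$ in a Hilbert space, this reduces to $\tfrac12\|u - v\|^2$, recovering the familiar norm-squared error.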
Randomness Extraction in AC0 and with Small Locality ; Randomness extractors, which extract high-quality, almost-uniform random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P. In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators, under reasonable computational assumptions, and unconditional pseudorandom generators for space-bounded computation with very good parameters. Our constructions combine several previous techniques in randomness extractors, as well as introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality.
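To make "locality" concrete, the classical von Neumann extractor below is a textbook warm-up, far weaker than the paper's constructions: it handles only i.i.d. biased bits, but each output bit depends on just two input bits.

```python
def von_neumann_extract(bits):
    """Extract unbiased bits from an i.i.d. biased bit stream: read disjoint
    pairs; (0,1) -> 0, (1,0) -> 1, and equal pairs are discarded. Each output
    bit depends on only 2 input bits, i.e., the extractor has locality 2."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Example: a coin with bias 0.8 still yields (fewer) unbiased output bits
import random
biased = [1 if random.random() < 0.8 else 0 for _ in range(10_000)]
uniform = von_neumann_extract(biased)
```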
Cyber-Physical Systems Security: A Survey ; With the exponential growth of cyber-physical systems (CPS), new security challenges have emerged. Various vulnerabilities, threats, attacks, and controls have been introduced for the new generation of CPS. However, a systematic study of CPS security issues is still lacking. In particular, the heterogeneity of CPS components and the diversity of CPS systems have made it very difficult to study the problem with one generalized model. In this paper, we capture and systematize existing research on CPS security under a unified framework. The framework consists of three orthogonal coordinates: (1) from the security perspective, we follow the well-known taxonomy of threats, vulnerabilities, attacks and controls; (2) from the CPS components perspective, we focus on cyber, physical, and cyber-physical components; and (3) from the CPS systems perspective, we explore general CPS features as well as representative systems (e.g., smart grids, medical CPS and smart cars). The model can be both abstract, to show general interactions of a CPS application, and specific, to capture any details when needed. By doing so, we aim to build a model that is abstract enough to be applicable to various heterogeneous CPS applications, and to gain a modular view of the tightly coupled CPS components. Such abstract decoupling makes it possible to gain a systematic understanding of CPS security, and to highlight the potential sources of attacks and ways of protection.
Developments in Topological Gravity ; This note aims to provide an entrée to two developments in two-dimensional topological gravity, that is, intersection theory on the moduli space of Riemann surfaces, that have not yet become well-known among physicists. A little over a decade ago, Mirzakhani discovered [M1, M2] an elegant new proof of the formulas that result from the relationship between topological gravity and matrix models of two-dimensional gravity. Here we will give a very partial introduction to that work, which hopefully will also serve as a modest tribute to the memory of a brilliant mathematical pioneer. More recently, Pandharipande, Solomon, and Tessler [PST], with further developments in [Tes, BT, STa], generalized intersection theory on moduli space to the case of Riemann surfaces with boundary, leading to generalizations of the familiar KdV and Virasoro formulas. Though the existence of such a generalization appears natural from the matrix model viewpoint (it corresponds to adding vector degrees of freedom to the matrix model), constructing this generalization is not straightforward. We will give some idea of the unexpected way that the difficulties were resolved.
Generalized Conformal Transformation and Inflationary Attractors ; We investigate the inflationary attractors in models of inflation inspired by a general conformal transformation of general scalar-tensor theories to the Einstein frame. The coefficient of the conformal transformation in our study depends on both the scalar field and its kinetic term. Therefore the relevant scalar-tensor theories constitute a subset of class I of the degenerate higher-order scalar-tensor theories, in which both the scalar field and its kinetic term can couple nonminimally to gravity. We find that if the conformal coefficient $\Omega$ takes a multiplicative form, $\Omega \equiv w(\phi)\,W(X)$, where $X$ is the kinetic term of the field $\phi$, the theoretical predictions of the proposed model have the usual universal attractor, independent of the function $W(X)$. For the case where $\Omega$ takes an additive form, $\Omega \equiv w(\phi) + k(\phi)\,\Xi(X)$, we find that there are new $\xi$-attractors in addition to the universal ones. We analyze the inflationary observables of these models and compare them to the latest constraints from the Planck collaboration. We find that the observable quantities associated with these new $\xi$-attractors do not satisfy the constraints from the Planck data in the strong-coupling limit.
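The starting point is a conformal transformation whose coefficient depends on the field and its kinetic term. In generic notation (assumed here for illustration, up to metric-signature conventions, since the abstract does not display the full expression):

```latex
\tilde{g}_{\mu\nu} \;=\; \Omega(\phi, X)\, g_{\mu\nu},
\qquad X \;\equiv\; -\tfrac{1}{2}\, g^{\mu\nu}\, \partial_\mu \phi\, \partial_\nu \phi ,
```

with the ordinary scalar-field conformal transformation recovered when $\Omega$ is independent of $X$.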
Machine Learning with Abstention for Automated Liver Disease Diagnosis ; This paper presents a novel approach for detecting liver abnormalities in an automated manner using ultrasound images. For this purpose, we have implemented a machine learning model that can not only generate labels (normal and abnormal) for a given ultrasound image but can also detect when its prediction is likely to be incorrect. The proposed model abstains from generating the label of a test example if it is not confident about its prediction. Such behavior is commonly practiced by medical doctors who, when given insufficient information or a difficult case, can choose to carry out further clinical or diagnostic tests before generating a diagnosis. However, existing machine learning models are designed in a way to always generate a label for a given example, even when the confidence of their prediction is low. We have proposed a novel stochastic-gradient-based solver for the learning-with-abstention paradigm and use it to build a practical, state-of-the-art method for liver disease classification. The proposed method has been benchmarked on a data set of approximately 100 patients from MINAR, Multan, Pakistan, and our results show that the proposed scheme offers state-of-the-art classification performance.
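At inference time, abstention reduces to rejecting low-confidence predictions. The sketch below shows a simple confidence-threshold rule, which illustrates the behavior but is not the paper's joint abstention objective or its stochastic-gradient solver.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.85):
    """probs: (n, 2) class probabilities for normal/abnormal.
    Returns 0/1 labels, or -1 (abstain: 'refer for further tests') whenever
    the model's confidence falls below the threshold."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return np.where(conf >= threshold, labels, -1)

# Example: the middle case is ambiguous, so the model abstains on it
p = np.array([[0.97, 0.03], [0.55, 0.45], [0.10, 0.90]])
print(predict_with_abstention(p))   # [ 0 -1  1]
```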
The binary black hole explorer: on-the-fly visualizations of precessing binary black holes ; Binary black hole mergers are of great interest to the astrophysics community, not least because of their promise to test general relativity in the highly dynamic, strong-field regime. Detections of gravitational waves from these sources by LIGO and Virgo have garnered widespread media and public attention. Among these sources, precessing systems, with misaligned black-hole spin and orbital angular momentum, are of particular interest because of the rich dynamics they offer. However, these systems are, in turn, more complex compared to nonprecessing systems, making them harder to model or develop intuition about. Visualizations of numerical simulations of precessing systems provide a means to understand and gain insights about these systems. However, since these simulations are very expensive, they can only be performed at a small number of points in parameter space. We present binaryBHexp, a tool that makes use of surrogate models of numerical simulations to generate on-the-fly interactive visualizations of precessing binary black holes. These visualizations can be generated in a few seconds, and at any point in the 7-dimensional parameter space of the underlying surrogate models. With illustrative examples, we demonstrate how this tool can be used to learn about precessing binary black hole systems.
Constraints on massive vector dark energy models from integrated Sachs-Wolfe-galaxy cross-correlations ; The gravitational-wave event GW170817, together with its electromagnetic counterpart, shows that the speed of tensor perturbations $c_T$ on the cosmological background is very close to that of light $c$ at redshift $z = 0.009$. In generalized Proca theories, the Lagrangians compatible with the condition $c_T = c$ are constrained to derivative interactions up to cubic order, besides those corresponding to intrinsic vector modes. We place observational constraints on a dark energy model in cubic-order generalized Proca theories with intrinsic vector modes by running a Markov chain Monte Carlo (MCMC) code. We use the cross-correlation data of the integrated Sachs-Wolfe (ISW) signal and galaxy distributions in addition to the data sets of cosmic microwave background, baryon acoustic oscillations, type Ia supernovae, local measurements of the Hubble expansion rate, and redshift-space distortions. We show that, unlike cubic-order scalar-tensor theories, the existence of intrinsic vector modes allows the possibility of evading the ISW-galaxy anti-correlation incompatible with the current observational data. As a result, we find that the dark energy model in cubic-order generalized Proca theories exhibits a better fit to the data than the cosmological constant, even when the ISW-galaxy correlation data are included in the MCMC analysis.
An improved implicit sampling for Bayesian inverse problems of multi-term time fractional multiscale diffusion models ; This paper presents an improved implicit sampling method for hierarchical Bayesian inverse problems. A widely used approach for sampling the posterior distribution is based on Markov chain Monte Carlo (MCMC). However, the samples generated by MCMC are usually strongly correlated. This may lead to a small number of effective samples from a long Markov chain, and the resultant posterior estimate may be inaccurate. The implicit sampling method proposed in [11] can generate independent samples and capture some inherent non-Gaussian features of the posterior based on the weights of the samples. In the implicit sampling method, the posterior samples are generated by constructing a map and are distributed around the MAP point. However, the weights of implicit sampling used in previous works may cause excessive concentration of samples and lead to ensemble collapse. To overcome this issue, we propose a new weight formulation and resample based on the new weights. In practice, some parameters in the prior density are often unknown, and a hierarchical Bayesian inference is necessary for posterior exploration. To this end, the hierarchical Bayesian formulation is used to estimate the MAP point and is integrated into the implicit sampling framework. Compared to conventional implicit sampling, the proposed implicit sampling method can significantly improve the posterior estimator and the applicability to high-dimensional inverse problems. The improved implicit sampling method is applied to Bayesian inverse problems of multi-term time fractional diffusion models in heterogeneous media. To effectively capture the heterogeneity effect, we present a mixed generalized multiscale finite element method (mixed GMsFEM) to solve the time fractional diffusion models on a coarse grid, which can substantially speed up the Bayesian inversion.
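The reweighting-and-resampling step that guards against ensemble collapse can be illustrated generically. The sketch below shows stable weight normalization, the effective-sample-size diagnostic, and systematic resampling; it is a standard importance-sampling recipe, not the paper's specific weight formulation.

```python
import numpy as np

def resample_by_weights(samples, log_w, rng=None):
    """Normalize importance weights in a numerically stable way and perform
    systematic resampling, a common remedy when weights concentrate on a few
    samples (ensemble collapse). samples: (n, dim); log_w: (n,) log-weights."""
    rng = rng or np.random.default_rng()
    w = np.exp(log_w - log_w.max())          # stabilize before normalizing
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)                 # effective sample size diagnostic
    n = len(w)
    u = (rng.random() + np.arange(n)) / n    # evenly spaced resampling points
    idx = np.searchsorted(np.cumsum(w), u)
    idx = np.minimum(idx, n - 1)             # guard against float round-off
    return samples[idx], ess
```

A small effective sample size relative to n signals that the weights have degenerated and resampling (or a better weight formulation) is needed.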
Low Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning ; Designing effective low-level robot controllers often entails platform-specific implementations that require manual heuristic parameter tuning, significant system knowledge, or long design times. With the rising number of robotic and mechatronic systems deployed across areas ranging from industrial automation to intelligent toys, the need for a general approach to generating low-level controllers is increasing. To address the challenge of rapidly generating low-level controllers, we argue for using model-based reinforcement learning (MBRL) trained on relatively small amounts of automatically generated (i.e., without system simulation) data. In this paper, we explore the capabilities of MBRL on a Crazyflie centimeter-scale quadrotor with rapid dynamics to predict and control at 50 Hz. To our knowledge, this is the first use of MBRL for controlled hover of a quadrotor using only onboard sensors, direct motor input signals, and no initial dynamics knowledge. Our controller leverages rapid simulation of a neural network forward dynamics model on a GPU-enabled base station, which then transmits the best current action to the quadrotor firmware via radio. In our experiments, the quadrotor achieved hovering capability of up to 6 seconds with 3 minutes of experimental training data.
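A common way such MBRL controllers pick actions is random-shooting model-predictive control over the learned dynamics model. The sketch below illustrates that loop generically; the dynamics and cost callables, horizon, and action bounds are hypothetical, not the paper's exact setup.

```python
import numpy as np

def random_shooting_action(state, dynamics, cost, horizon=10, n_cand=500,
                           act_dim=4, rng=None):
    """Sample candidate action sequences, roll each through the learned
    dynamics model, and return the first action of the lowest-cost sequence.
    dynamics(states, actions) predicts next states in a batch; cost(states)
    penalizes deviation from hover (both assumed to be provided)."""
    rng = rng or np.random.default_rng()
    actions = rng.uniform(-1, 1, (n_cand, horizon, act_dim))  # motor commands
    total = np.zeros(n_cand)
    states = np.repeat(state[None], n_cand, axis=0)
    for t in range(horizon):
        states = dynamics(states, actions[:, t])   # batched one-step prediction
        total += cost(states)
    return actions[np.argmin(total), 0]            # execute, then re-plan
```

Only the first action is executed each control step; the optimization is repeated at the control rate with the newly observed state.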
Build-A-FLAIR: Synthetic T2-FLAIR Contrast Generation through Physics-Informed Deep Learning ; Purpose: Magnetic resonance imaging (MRI) exams include multiple series with varying contrast and redundant information. For instance, T2-FLAIR contrast is based upon tissue T2 decay and the presence of water, also present in T2- and diffusion-weighted contrasts. T2-FLAIR contrast can hypothetically be modeled through deep learning models trained with diffusion- and T2-weighted acquisitions. Methods: Diffusion-, T2-, T2-FLAIR-, and T1-weighted brain images were acquired in 15 individuals. A convolutional neural network was developed to generate a T2-FLAIR image from the other contrasts. Two datasets were withheld from training for validation. Results: Inputs with physical relationships to T2-FLAIR contrast impacted performance most significantly. The best model yielded results similar to acquired T2-FLAIR images, with a structural similarity index of 0.909, and reproduced pathology excluded from training. Synthetic images qualitatively exhibited lower noise and increased smoothness compared to acquired images. Conclusion: This suggests that, with optimal inputs, deep-learning-based contrast generation performs well in creating synthetic T2-FLAIR images. Feature engineering on neural network inputs, based upon the physical basis of contrast, impacts the generation of synthetic contrast images. A larger, prospective clinical study is needed.
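A minimal convolutional mapping from multi-contrast inputs to a synthetic FLAIR image might look like the sketch below. The layer sizes and the three-channel input are assumptions for illustration; the paper's actual network is not specified in the abstract.

```python
import torch
import torch.nn as nn

class ContrastSynthesisNet(nn.Module):
    """Toy image-to-image CNN: stacks diffusion-, T2-, and T1-weighted slices
    as input channels and regresses a synthetic T2-FLAIR slice."""
    def __init__(self, in_contrasts=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_contrasts, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):            # x: (batch, in_contrasts, H, W)
        return self.net(x)           # synthetic FLAIR: (batch, 1, H, W)

# Training would minimize, e.g., an L1 loss against acquired T2-FLAIR images
```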
ThermalFIST: A package for heavy-ion collisions and hadronic equation of state ; ThermalFIST (Thermal, Fast and Interactive Statistical Toolkit) is a C++ package designed for convenient general-purpose physics analysis within the family of hadron resonance gas (HRG) models. This mainly includes the statistical analysis of particle production in heavy-ion collisions and the phenomenology of the hadronic equation of state. Notable features include fluctuations and correlations of conserved charges, effects of probabilistic decay, chemical non-equilibrium, and the inclusion of van der Waals hadronic interactions. Calculations are possible within the grand canonical ensemble, the canonical ensemble, as well as in mixed-canonical ensembles combining the canonical treatment of certain conserved charges with the grand-canonical treatment of other conserved charges. The package contains a fast thermal event generator, which generates particle yields in accordance with the HRG chemistry, and particle momenta based on the Blast-Wave model. A distinct feature of this package is the presence of the graphical user interface frontend, QtThermalFIST, which is designed for fast and convenient general-purpose HRG model applications.
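At the core of any ideal HRG calculation is the thermal density of each hadron species. In the Boltzmann approximation this is the standard textbook formula, sketched below in Python for illustration; this is not ThermalFIST's C++ API.

```python
import numpy as np
from scipy.special import kn

HBARC = 0.19733  # GeV*fm, converts natural units (GeV^3) to fm^-3

def hrg_density(m, g, T, mu=0.0):
    """Ideal HRG number density of one hadron species in the Boltzmann
    approximation: n = g/(2 pi^2) * m^2 * T * K2(m/T) * exp(mu/T),
    with mass m, temperature T, and chemical potential mu in GeV."""
    n_gev3 = g / (2 * np.pi**2) * m**2 * T * kn(2, m / T) * np.exp(mu / T)
    return n_gev3 / HBARC**3  # density in fm^-3

# Example: a pi+ (m = 0.138 GeV, g = 1) at T = 0.155 GeV, mu = 0
print(hrg_density(0.138, 1, 0.155))   # roughly 0.04 fm^-3
```

A full HRG calculation sums such contributions over the entire hadron list and feeds resonance decays back into the final yields, which is what the package automates.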
Learning single-image 3D reconstruction by generative modelling of shape, pose and shading ; We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of the voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.