Prediction of noise from serrated trailing edges ; A new analytical model is developed for the prediction of noise from serrated trailing edges. The model generalizes Amiet's trailing-edge noise theory to sawtooth trailing edges, resulting in an inhomogeneous partial differential equation. The equation is then solved by means of a Fourier expansion technique combined with an iterative procedure. The solution is validated through comparison with the finite element method for a variety of serrations at different Mach numbers. Results obtained using the new model predict noise reductions of up to 10 dB at 90 degrees above the trailing edge, which is more realistic than predictions based on Howe's model and also more consistent with experimental observations. A thorough analytical and numerical analysis of the physical mechanism is carried out and suggests that the noise reduction due to serration originates primarily from interference effects near the trailing edge. A closer inspection of the proposed mathematical model has led to the development of two criteria for the effectiveness of trailing-edge serrations, consistent with, but more general than, those proposed by Howe. While experimental investigations often focus on noise reduction at 90 degrees above the trailing edge, the new analytical model shows that the destructive interference scattering effects due to the serrations cause significant noise reduction at large polar angles, near the leading edge. It has also been observed that serrations can significantly change the directivity characteristics of the aerofoil at high frequencies and can even lead to a noise increase at high Mach numbers.
Beyond-Quantum Modeling of Question Order Effects and Response Replicability in Psychological Measurements ; A general tension-reduction (GTR) model was recently considered to derive quantum probabilities as universal averages over all possible forms of non-uniform fluctuations, and to explain their considerable success in describing experimental situations also outside of the domain of physics, for instance in the ambit of quantum models of cognition and decision. Yet, this result also highlighted the possibility of observing violations of the predictions of the Born rule in those situations where the averaging would not be large enough, or would be altered because of the combination of multiple measurements. In this article we show that this is indeed the case in typical psychological measurements exhibiting question order effects, by showing that their statistics of outcomes are inherently non-Hilbertian and require the larger framework of the GTR-model to receive an exact mathematical description. We also consider another unsolved problem of quantum cognition: response replicability. It has been observed that when question order effects and response replicability occur together, the situation can no longer be handled by quantum theory. However, we show that it can be easily and naturally described in the GTR-model. Based on these findings, we motivate the adoption in cognitive science of a hidden-measurements interpretation of the quantum formalism, and of its GTR-model generalization, as the natural interpretational framework explaining the data of psychological measurements on conceptual entities.
Distributed Compressive Sensing: A Deep Learning Approach ; Various studies that address the compressed sensing problem with Multiple Measurement Vectors (MMVs) have been carried out recently. These studies assume the vectors of the different channels to be jointly sparse. In this paper, we relax this condition. Instead, we assume that these sparse vectors depend on each other but that this dependency is unknown. We capture this dependency by computing the conditional probability of each entry in each vector being nonzero, given the residuals of all previous vectors. To estimate these probabilities, we propose the use of the Long Short-Term Memory (LSTM) [1], a data-driven model for sequence modelling that is deep in time. To calculate the model parameters, we minimize a cross-entropy cost function. To reconstruct the sparse vectors at the decoder, we propose a greedy solver that uses the above model to estimate the conditional probabilities. By performing extensive experiments on two real-world datasets, we show that the proposed method significantly outperforms the general MMV solver, Simultaneous Orthogonal Matching Pursuit (SOMP), and a number of model-based Bayesian methods. The proposed method does not add any complexity to the general compressive sensing encoder; the trained model is used only at the decoder. As the proposed method is data driven, it is only applicable when training data are available. In many applications, however, training data are indeed available, e.g., in recorded images and videos.
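As an illustration of the decoding strategy sketched in the abstract above, the following is a minimal, hypothetical sketch of a probability-weighted greedy solver: an OMP-style loop in which the atom selected at each step is weighted by an externally supplied probability of that entry being nonzero. In the paper these probabilities come from a trained LSTM fed with the residuals of previously decoded channels; here `prob_nonzero` is just a placeholder array, and the function name and structure are assumptions, not the authors' implementation.

```python
# Hedged sketch: probability-weighted greedy recovery for one channel.
# `prob_nonzero` stands in for the LSTM-estimated conditional probabilities.
import numpy as np

def weighted_omp(A, y, prob_nonzero, sparsity):
    """Recover a sparse x from y = A x, biasing atom selection by prob_nonzero."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)      # standard OMP matching score
        scores = correlations * prob_nonzero       # probability-weighted score
        scores[support] = -np.inf                  # never reselect an atom
        support.append(int(np.argmax(scores)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

# Toy usage with uniform probabilities, in which case this reduces to plain OMP.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100))
x_true = np.zeros(100)
x_true[[3, 50, 80]] = 1.0
print(np.nonzero(weighted_omp(A, A @ x_true, np.ones(100), 3))[0])
```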
Estimation of hysteretic losses for MgB2 tapes under the operating conditions of a generator ; Hysteretic losses in MgB2 wound superconducting coils of a 500 kW synchronous hybrid generator were estimated as part of the European project SUPRAPOWER, led by the Spanish company Tecnalia Research and Innovation. Particular interest was given to the losses found in tapes in the superconducting rotor caused by the magnetic flux ripples originating from the conventional stator during nominal operation. To compute the losses, a 2D Finite Element Method was applied to solve the H-formulation of Maxwell's equations, considering the nonlinear properties of both the superconducting material and its surrounding Ni matrix. To be able to model all the different turns composing the winding of the superconducting rotor coils, three geometrical models of the single-tape cross section, of decreasing complexity, were studied: (1) the first model closely reproduced the actual cross section obtained from micrographs; (2) the second model was obtained from the computed elastoplastic deformation of a round Ni wire; (3) the last model was based on a simplified elliptic cross section. The last geometry allowed validating the modeling technique by comparing numerical losses with results from well-established analytical expressions. Additionally, the following cases of filament transposition were studied: no, partial, and full transposition. Finally, choosing the right level of geometrical detail to predict the expected behavior of individual superconducting tapes in the rotor, the following operational regimes were studied: bias DC current, ramping current under ramping background field, and magnetic flux ripples under DC background current and field.
Open Source Codes for Computing the Critical Current of Superconducting Devices ; In order to transport sufficiently high currents, high-temperature superconductor (HTS) tapes are assembled in cable structures of different forms. In such cables, the tapes are tightly packed and have a strong electromagnetic interaction. In particular, the generated self-field is quite substantial and can give an important contribution to reducing the maximum current the cable can effectively carry. In order to be able to predict the critical current of such cable structures, a static numerical model has recently been proposed. In this contribution, we present in detail the implementation of such models in different programming environments, including finite-element-based and general numerical analysis programs, both commercial and open-source. A comparison of the accuracy and calculation speed of the different implementations of the model is carried out for the case of a Roebel cable. The model is also used to evaluate the importance of choosing a very accurate description of the angular Jc(B) dependence of the superconductor as input for the material's properties. The numerical codes, which are open-source, are made freely available to interested users.
A Method for Modeling Growth of Organs and Transplants Based on the General Growth Law: Application to the Liver in Dogs and Humans ; Understanding biological phenomena requires a systemic approach that incorporates different mechanisms acting on different spatial and temporal scales, since in organisms the workings of all components, such as organelles, cells, and organs, interrelate. This inherent interdependency between diverse biological mechanisms, both on the same and on different scales, provides the functioning of an organism capable of maintaining homeostasis and physiological stability through numerous feedback loops. Thus, developing models of organisms and their constituents should be done within the overall systemic context of the studied phenomena. We introduce such a method for modeling growth and regeneration of livers at the organ scale, considering it a part of the overall multiscale biochemical and biophysical processes of an organism. Our method is based on the earlier discovered general growth law, postulating that any biological growth process comprises a uniquely defined distribution of nutritional resources between maintenance needs and biomass production. Based on this law, we introduce a liver growth model that allows accurate prediction of the growth of liver transplants in dogs and liver grafts in humans. Using this model, we find quantitative growth characteristics, such as the time point when the transition period after surgery is over and the liver resumes normal growth, the rates at which hepatocytes are involved in proliferation, etc. We then use the model to determine and quantify otherwise unobservable metabolic properties of livers.
Tilting Saturn without tilting Jupiter: Constraints on giant planet migration ; The migration and encounter histories of the giant planets in our Solar System can be constrained by the obliquities of Jupiter and Saturn. We have performed secular simulations with imposed migration and N-body simulations with planetesimals to study the expected obliquity distribution of migrating planets with initial conditions resembling those of the smooth migration model, the resonant Nice model, and two models with five giant planets initially in resonance (one compact and one loose configuration). For smooth migration, the secular spin-orbit resonance mechanism can tilt Saturn's spin axis to the current obliquity if the product of the migration time scale and the orbital inclinations is sufficiently large (exceeding 30 Myr deg). For the resonant Nice model with imposed migration, it is difficult to reproduce today's obliquity values, because the compactness of the initial system raises the frequency that tilts Saturn above the spin precession frequency of Jupiter, causing a Jupiter spin-orbit resonance crossing. Migration time scales sufficiently long to tilt Saturn generally suffice to tilt Jupiter more than is observed. The full N-body simulations tell a somewhat different story, with Jupiter generally being tilted as often as Saturn, but on average having a higher obliquity. The main obstacle is the final orbital spacing of the giant planets, coupled with the tail of Neptune's migration. The resonant Nice case is barely able to simultaneously reproduce the orbital and spin properties of the giant planets, with a probability of 0.15. The loose five-planet model is unable to match all our constraints (probability 0.08). The compact five-planet model has the highest chance of matching the orbital and obliquity constraints simultaneously (probability 0.3).
Thresholded Power Law Size Distributions of Instabilities in Astrophysics ; Power-law-like size distributions are ubiquitous in astrophysical instabilities. There are at least four natural effects that cause deviations from ideal power law size distributions, which we model here in a generalized way: (1) a physical threshold of an instability; (2) incomplete sampling of the smallest events below a threshold x0; (3) contamination by an event-unrelated background xb; and (4) truncation effects at the largest events due to a finite system size. These effects can be modeled in simplest terms with a thresholded power law distribution function (also called generalized Pareto type II or Lomax distribution), N(x) dx ∝ (x + x0)^(-a) dx, where x0 > 0 for a threshold effect, while x0 < 0 for background contamination. We analytically derive the functional shape of this thresholded power law distribution function from an exponential-growth evolution model, which produces avalanches only when a disturbance exceeds a critical threshold x0. We apply the thresholded power law distribution function to terrestrial, solar (HXRBS, BATSE, RHESSI), and stellar flare (Kepler) data sets. We find that the thresholded power law model provides an adequate fit to most of the observed data. Major advantages of this model are the automated choice of the power law fitting range, diagnostics of background contamination, physical instability thresholds, instrumental detection thresholds, and finite system size limits. When testing self-organized criticality models, which predict ideal power laws, we suggest including these natural truncation effects.
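For reference, the thresholded power law quoted in the abstract can be written in a normalized form; the normalization over [x_min, ∞) below is a standard calculus step and is not taken from the paper:

\[
N(x)\,dx \;\propto\; (x + x_0)^{-a}\,dx,
\qquad
N(x) \;=\; (a-1)\,(x_{\min} + x_0)^{\,a-1}\,(x + x_0)^{-a}
\quad (a > 1,\ x \ge x_{\min}).
\]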
Late cosmic acceleration in a vector-Gauss-Bonnet gravity model ; In this work we study a general vector-tensor model of dark energy with a Gauss-Bonnet term coupled to a vector field and without explicit potential terms. Considering a spatially flat FRW-type universe and a vector field without spatial components, the cosmological evolution is analysed from the field equations of this model, considering two sets of parameters. In this context, we have shown that it is possible to obtain an accelerated expansion phase of the universe, since the equation of state parameter w satisfies the restriction -1 < w < -1/3 for suitable values of the model parameters. Further, analytical expressions for the Hubble parameter H, the equation of state parameter w, and the invariant scalar φ are obtained. We also find that the square of the speed of sound is negative for all values of redshift; therefore, the model presented here shows a sign of instability under small perturbations. We finally perform an analysis using H(z) observational data and find that, for the free parameter ξ in the interval [23.9, 3.46 × 10^5] at 99.73% C.L., and fixing η = 1 and ω = 1/4, the model gives a good fit to the data.
Structured populations with distributed recruitment: from PDE to delay formulation ; In this work we first consider a physiologically structured population model with a distributed recruitment process. That is, our model allows newly recruited individuals to enter the population at all possible individual states, in principle. The model can be naturally formulated as a first order partial integro-differential equation, and it has been studied extensively. In particular, it is well-posed on the biologically relevant state space of Lebesgue integrable functions. We also formulate a delayed integral equation (renewal equation) for the distributed birth rate of the population. We aim to illustrate the connection between the partial integro-differential and the delayed integral equation formulations of the model utilising a recent spectral theoretic result. In particular, we consider the equivalence of the steady state problems in the two different formulations, which then leads us to characterise irreducibility of the semigroup governing the linear partial integro-differential equation. Furthermore, using the method of characteristics, we investigate the connection between the time dependent problems. In particular, we prove that any non-negative solution of the delayed integral equation determines a non-negative solution of the partial differential equation, and vice versa. The results obtained for the particular distributed states-at-birth model then lead us to present some very general results, which establish the equivalence between a general class of partial differential and delay equations modelling physiologically structured populations.
An introduction to the NMPC-Graph as a general schema for causal modelling of nonlinear, multivariate, dynamic, and recursive systems with focus on time-series prediction ; While the disciplines of physics and the engineering sciences have in many cases benefited from accurate time-series prediction of system behaviour by applying ordinary differential equation systems built upon precise basic physical laws, such an approach can hardly be adopted by other scientific disciplines where precise mathematical basic laws are unknown. A new modelling schema, the NMPC-graph, opens the possibility of interdisciplinary and generic nonlinear, multivariate, dynamic, and recursive causal modelling in domains where basic laws are only known as qualitative relationships among parameters, while their precise mathematical nature remains undisclosed at modelling time. The symbolism of the NMPC-graph is kept simple and suited for analysts without advanced mathematical skills. This article presents the definition of the NMPC-graph modelling method and its six component types. Further, it shows how to solve the inverse problem of deriving a nonlinear ordinary differential equation system from any NMPC-graph in conjunction with historic calibration data by means of machine learning. This article further discusses how such a derived NMPC-model can be used for hypothesis testing and time-series prediction, with the expectation of gaining prediction accuracy in comparison to conventional prediction methods.
Sampling Geometric Inhomogeneous Random Graphs in Linear Time ; Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks shifted from classic models without geometry, such as Chung-Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of studying hyperbolic random graphs directly, we use a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines), while preserving the qualitative behaviour of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs by this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) As our main contribution we provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor O(n^0.5). (2) We establish that GIRGs have clustering coefficients in Omega(1). (3) We prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces. (4) We show how to compress GIRGs using an expected linear number of bits.
Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae ; Traditional cosmological inference using Type Ia supernovae (SNe Ia) has used stretch- and color-corrected fits of SN Ia light curves and assumed a resulting fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. As systematics become the main contributors to the error budget, it has become imperative to expand supernova cosmology analyses to include a more general likelihood to model systematics, removing biases at the cost of some loss in precision. To illustrate an example likelihood analysis, we use a simple model of two populations with a relative luminosity shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. Treating observationally viable two-population mock data using a one-population model results in an inferred dark energy equation of state parameter w that is biased by roughly 2 times its statistical error for a sample of N ≳ 2500 SNe Ia. Modeling the two-population data with a two-population model removes this bias at a cost of an approximately 20% increase in the statistical constraint on w. These significant biases can be realized even if the support for two underlying SNe Ia populations, in the form of model selection criteria, is inconclusive. With the current observationally estimated difference in the two proposed populations, a sample of N ≳ 10,000 SNe Ia is necessary to yield conclusive evidence of two populations.
A Deep Structured Model with Radius-Margin Bound for 3D Human Activity Recognition ; Understanding human activity is very challenging even with the recently developed 3D depth sensors. To solve this problem, this work investigates a novel deep structured model, which adaptively decomposes an activity instance into temporal parts using convolutional neural networks (CNNs). Our model advances the traditional deep learning approaches in two aspects. First, we incorporate latent temporal structure into the deep model, accounting for large temporal variations of diverse human activities. In particular, we utilize the latent variables to decompose the input activity into a number of temporally segmented sub-activities, and accordingly feed them into the parts (i.e., sub-networks) of the deep architecture. Second, we incorporate a radius-margin bound as a regularization term into our deep model, which effectively improves the generalization performance for classification. For model training, we propose a principled learning algorithm that iteratively (i) discovers the optimal latent variables (i.e., the ways of activity decomposition) for all training instances, (ii) updates the classifiers based on the generated features, and (iii) updates the parameters of the multilayer neural networks. In the experiments, our approach is validated on several complex scenarios for human activity recognition and demonstrates superior performance over other state-of-the-art approaches.
Study of Majorana Fermionic Dark Matter ; We construct a generic model of Majorana fermionic dark matter (DM). Starting with two Weyl spinor multiplets η_{1,2} ~ (I, ∓Y) coupled to the Standard Model (SM) Higgs, six additional Weyl spinor multiplets with (I ± 1/2, ±Y ± 1/2) are needed in general. The model has 13 parameters in total: five mass parameters and eight Yukawa couplings. The DM sector of the minimal supersymmetric standard model (MSSM) is a special case of the model with (I, Y) = (1/2, 1/2). Therefore, this model can be viewed as an extension of the neutralino DM sector. We consider three typical cases: the neutralino-like, the reduced, and the extended cases. For each case, we survey the DM mass m_χ in the range of [1, 2500] GeV by random sampling from the model parameter space and study the constraints from the observed DM relic density, the direct searches of the LUX, XENON100, and PICO experiments, and the indirect search with Fermi-LAT data. We investigate the interplay of these constraints and the differences among these cases. It is found that the direct detection of spin-independent DM scattering off nuclei and the indirect detection of DM annihilation to the WW channel are the more sensitive probes for DM searches in the near future. The allowed masses for finding tilde-H-, tilde-B-, tilde-W-, and non-neutralino-like DM particles and the predictions for <σ_{χχ → ZZ, ZH, t tbar} v> in the indirect search are given.
Laboratory light scattering from regolith surface and simulation of data by Hapke model ; The small atmosphereless objects of our solar system, such as asteroids and the Moon, are covered by a layer of dust particles known as regolith, formed by meteoritic impact. Light scattering studies of such dust layers by laboratory experiment and numerical simulation are two important tools to investigate their physical properties. In the present work, the light scattered from a layer of dust particles containing 0.3 μm Al2O3 at a wavelength of 632.8 nm is analysed. This work has been performed using a light scattering instrument, an ellipsometer, at the Department of Physics, Assam University, Silchar, India. Through this experiment, we generated in the laboratory the photometric and polarimetric phase curves of light scattered from such a layer. In order to numerically simulate these data, we used Hapke's model combined with Mie single-particle scattering properties. The perpendicular and parallel components of the single-particle albedo and the phase function were derived from Mie theory. By using Hapke's model combined with Mie theory, the physical properties of the dust grains, such as grain size, optical constants (n, k), and wavelength, can be studied through this scheme. To date, no theoretical model representing the polarisation caused by scattering from a rough surface is available in the literature that can successfully explain the scattering process. So the main objective of this work is to develop a model which can theoretically estimate the polarisation caused by scattering from a rough surface and also to validate our model with the laboratory data generated in the present work.
Study of parametrized dark energy models with a general noncanonical scalar field ; In this paper, we have considered various dark energy models in the framework of a noncanonical scalar field with a Lagrangian density of the form L(φ, X) = f(φ) X (X/M_Pl^4)^(α-1) - V(φ), which reduces to the standard canonical scalar field model for α = 1 and f(φ) = 1. In this particular noncanonical scalar field model, we have carried out the analysis for α = 2. We have then obtained cosmological solutions for constant as well as variable equation of state parameter ω_φ(z) for dark energy. We have also performed a data analysis for three different functional forms of ω_φ(z) by using the combination of SN Ia, BAO, and CMB datasets. We have found that for all the choices of ω_φ(z), the SN Ia + CMB + BAO dataset favors a past decelerated and recently accelerated expansion phase of the universe. Furthermore, using the combined dataset, we have observed that the reconstructed results of ω_φ(z) and q(z) are almost choice independent and the resulting cosmological scenarios are in good agreement with the ΛCDM model within the 1σ confidence contour. We have also derived the form of the potential for each model, and the resulting potentials are found to be a quartic potential for constant ω_φ and a polynomial in φ for variable ω_φ.
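Written out explicitly, the Lagrangian density reconstructed above and its α = 2 special case analysed in the paper read as follows (this is purely a rendering of the expressions already quoted, with no new assumptions):

\[
\mathcal{L}(\phi, X) \;=\; f(\phi)\, X \left(\frac{X}{M_{\rm Pl}^{4}}\right)^{\alpha-1} - V(\phi),
\qquad
\mathcal{L}\big|_{\alpha=2} \;=\; f(\phi)\,\frac{X^{2}}{M_{\rm Pl}^{4}} - V(\phi).
\]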
Probing the magnetic field structure in Sgr A* on Black Hole Horizon Scales with Polarized Radiative Transfer Simulations ; Magnetic fields are believed to drive accretion and relativistic jets in black hole accretion systems, but the magnetic-field structure that controls these phenomena remains uncertain. We perform general relativistic (GR) polarized radiative transfer of time-dependent three-dimensional GR magnetohydrodynamical (GRMHD) simulations to model thermal synchrotron emission from the Galactic Center source Sagittarius A* (Sgr A*). We compare our results to new polarimetry measurements by the Event Horizon Telescope (EHT) and show how polarization in the visibility (Fourier) domain distinguishes and constrains accretion flow models with different magnetic field structures. These include models with small-scale fields in disks driven by the magnetorotational instability (MRI) as well as models with large-scale ordered fields in magnetically arrested disks (MAD). We also consider different electron temperature and jet mass-loading prescriptions that control the brightness of the disk, funnel-wall jet, and Blandford-Znajek-driven funnel jet. Our comparisons between the simulations and observations favor models with ordered magnetic fields near the black hole event horizon in Sgr A*, although both disk- and jet-dominated emission can satisfactorily explain most of the current EHT data. We show that stronger model constraints should be possible with upcoming circular polarization and higher frequency (349 GHz) measurements.
An Axiomatic and an Average-Case Analysis of Algorithms and Heuristics for Metric Properties of Graphs ; In recent years, researchers have proposed several algorithms that compute metric quantities of real-world complex networks, and that are very efficient in practice, although there is no worst-case guarantee. In this work, we propose an axiomatic framework to analyze the performance of these algorithms, by proving that they are efficient on the class of graphs satisfying certain axioms. Furthermore, we prove that the axioms are verified asymptotically almost surely by several probabilistic models that generate power-law random graphs, such as the Configuration Model, the Chung-Lu model, and the Norros-Reittu model. Thus, our results imply average-case analyses in these models. For example, in our framework, existing algorithms can compute the diameter and the radius of a graph in subquadratic time, and sometimes even in time n^(1+o(1)). Moreover, in some regimes, it is possible to compute the k most central vertices according to closeness centrality in subquadratic time, and to design a distance oracle with sublinear query time and subquadratic space occupancy. In the worst case, it is impossible to obtain comparable results for any of these problems, unless widely believed conjectures are false.
A coupled 2×2D Babcock-Leighton solar dynamo model. II. Reference dynamo solutions ; In this paper we complete the presentation of a new hybrid 2×2D flux transport dynamo (FTD) model of the solar cycle based on the Babcock-Leighton mechanism of poloidal magnetic field regeneration via the surface decay of bipolar magnetic regions (BMRs). This hybrid model is constructed by allowing the surface flux transport (SFT) simulation described in Lemerle et al. (2015) to provide the poloidal source term to an axisymmetric FTD simulation defined in a meridional plane, which in turn generates the BMRs required by the SFT. A key aspect of this coupling is the definition of an emergence function describing the probability of BMR emergence as a function of the spatial distribution of the internal axisymmetric magnetic field. We use a genetic algorithm to calibrate this function, together with other model parameters, against observed cycle 21 emergence data. We present a reference dynamo solution reproducing many solar cycle characteristics, including good hemispheric coupling, the phase relationship between the surface dipole and the BMR-generating internal field, and the correlation between dipole strength at cycle maximum and peak amplitude of the next cycle. The saturation of the cycle amplitude takes place through the quenching of the BMR tilt as a function of the internal field. The observed statistical scatter about the mean BMR tilt, built into the model, acts as a source of stochasticity which dominates amplitude fluctuations. The model can thus produce Dalton-like epochs of strongly suppressed cycle amplitude lasting a few cycles, and can even shut off entirely following an unfavorable sequence of emergence events.
Captioning Images with Diverse Objects ; Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources: labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in the MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.
Puzzling initial conditions in the R_h = ct model ; In recent years, some studies have drawn attention to the lack of large-angle correlations in the observed cosmic microwave background (CMB) temperature anisotropies with respect to that predicted within the standard ΛCDM model. Lately, it has been argued that such a lack of correlations could be explained in the framework of the so-called R_h = ct model without inflation. The aim of this work is to study whether there is a mechanism to generate, through a quantum field theory, the primordial power spectrum presented by these authors. Specifically, we consider two different scenarios: first, we assume a scalar field dominating the early Universe in the R_h = ct cosmological model, and second, we deal with the possibility of adding an early inflationary phase to the mentioned model. During the analysis of the consistency between the predicted and observed amplitudes of the CMB temperature anisotropies in both scenarios, we run into deep issues which indicate that it is not clear how to characterize the primordial quantum perturbations within the R_h = ct model.
Frobenius-Chern-Simons gauge theory ; Given a set of differential forms on an odd-dimensional noncommutative manifold valued in an internal associative algebra H, we show that the most general cubic covariant Hamiltonian action, without mass terms, is controlled by a Z2-graded associative algebra F with a graded symmetric nondegenerate bilinear form. The resulting class of models provides a natural generalization of the Frobenius-Chern-Simons model (FCS) that was proposed in arXiv:1505.04957 as an off-shell formulation of the minimal bosonic four-dimensional higher spin gravity theory. If F is unital and the Z2-grading is induced from a Klein operator that is outer to a proper Frobenius subalgebra, then the action can be written in a form akin to topological open string field theory in terms of a superconnection valued in the direct product of H and F. We give a new model of this type based on a twisting of C[Z2 x Z4], which leads to self-dual complexified gauge fields on AdS4. If F is 3-graded, the FCS model can be truncated consistently to zero-form constraints on-shell. Two examples thereof are a twisting of C[(Z2)^3], which yields the original model, and the Clifford algebra Cl_2n, which provides an FCS formulation of the bosonic Konstein-Vasiliev model with gauge algebra hu(4^(n-1), 0).
General Automatic Human Shape and Motion Capture Using Volumetric Contour Cues ; Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimensions and/or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation (skeleton, volumetric shape, appearance, and optionally a body surface) and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and an analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as a Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume ray casting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatio-temporal way.
Measuring and Modeling Bipartite Graphs with Community Structure ; Network science is a powerful tool for analyzing complex systems in fields ranging from sociology to engineering to biology. This paper is focused on generative models of large-scale bipartite graphs, also known as two-way graphs or two-mode networks. We propose two generative models that can be easily tuned to reproduce the characteristics of real-world networks, not just qualitatively, but quantitatively. The characteristics we consider are the degree distributions and the metamorphosis coefficient. The metamorphosis coefficient, a bipartite analogue of the clustering coefficient, is the proportion of length-three paths that participate in length-four cycles. Having a high metamorphosis coefficient is a necessary condition for close-knit community structure. We define edge, node, and degree-wise metamorphosis coefficients, enabling a more detailed understanding of the bipartite connectivity that is not explained by the degree distribution alone. Our first model, bipartite Chung-Lu (CL), is able to reproduce real-world degree distributions, and our second model, bipartite block two-level Erdős-Rényi (BTER), reproduces both the degree distributions as well as the degree-wise metamorphosis coefficients. We demonstrate the effectiveness of these models on several real-world data sets.
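The global metamorphosis coefficient defined above admits a direct brute-force computation; the sketch below is an illustration of that definition (the fraction of three-edge paths closed into four-cycles), not the authors' code, and the exact counting conventions for the edge-, node-, and degree-wise variants in the paper may differ.

```python
def metamorphosis_coefficient(adj):
    """Fraction of length-three paths of a bipartite graph that sit in a length-four cycle.

    `adj` maps each 'left' node to the set of 'right' nodes it touches.
    A path u1 - v1 - u2 - v2 (three edges, alternating sides) participates in a
    four-cycle when the closing edge u1 - v2 is also present.
    Brute force; intended only for small graphs.
    """
    # reverse adjacency: right node -> set of left nodes
    radj = {}
    for u, vs in adj.items():
        for v in vs:
            radj.setdefault(v, set()).add(u)

    paths = closed = 0
    for u1, vs1 in adj.items():          # enumerate each path once, from its left endpoint
        for v1 in vs1:
            for u2 in radj[v1]:
                if u2 == u1:
                    continue
                for v2 in adj[u2]:
                    if v2 == v1:
                        continue
                    paths += 1
                    if v2 in adj[u1]:    # closing edge exists -> four-cycle
                        closed += 1
    return closed / paths if paths else 0.0

# toy example: a 2x2 biclique plus one pendant edge gives 4 closed paths out of 6
example = {"a": {"x", "y"}, "b": {"x", "y"}, "c": {"x"}}
print(metamorphosis_coefficient(example))  # 0.666...
```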
Reconstruction of Static Black Hole Images Using Simple Geometric Forms ; General Relativity predicts that the emission close to a black hole must be lensed by its strong gravitational field, illuminating the last photon orbit. This results in a dark circular area known as the black hole 'shadow'. The Event Horizon Telescope (EHT) is a sub-mm VLBI network capable of Schwarzschild-radius resolution on Sagittarius A* (Sgr A*), the 4 million solar mass black hole at the Galactic Center. The goals of the Sgr A* observations include resolving and measuring the details of its morphology. However, EHT data are sparse in the visibility domain, complicating reliable detailed image reconstruction. Therefore, direct pixel imaging should be complemented by other approaches. Using simulated EHT data from a black hole emission model, we consider an approach to Sgr A* image reconstruction based on a simple and computationally efficient analytical model that produces images similar to the synthetic ones. The model consists of an eccentric ring with a brightness gradient and a two-dimensional Gaussian. These elemental forms have closed functional representations in the visibility domain, which lowers the computational overhead of fitting the model to the EHT observations. For model fitting we use a version of the Markov chain Monte Carlo (MCMC) algorithm based on the Metropolis-Hastings sampler with replica exchange. Over a series of simulations we demonstrate that our model can be used for determining geometric measures of a black hole, thus providing information on the shadow size and linking General Relativity with accretion theory.
D-class of dark energy against ΛCDM in Brans-Dicke cosmology ; Three general models of dynamical interacting dark energy (the D-class) are investigated in the context of Brans-Dicke cosmology. All cosmological quantities, such as the equation of state parameters, deceleration parameters, Hubble function, and the density ratio, are calculated as functions of the redshift parameter. The most important part of this paper is the fitting of the models to the observational data (SN Ia + BAO + A + Omh^2). We obtain a table of best-fit values of the parameters and report χ²_tot/dof and the Akaike Information Criterion (AIC) for each model. By these diagnostic tools, we find that some models have no chance against ΛCDM and some (e.g., BDDC2 and BDDA) render the best fit quality. In particular, the AIC analysis and the figures show that the interacting BDDC2 model fits the overall data very well and reveals strong evidence in favor of this model against ΛCDM.
Probabilistic Population Projections for Countries with Generalized HIV/AIDS Epidemics ; The United Nations (UN) issued official probabilistic population projections for all countries to 2100 in July 2015. This was done by simulating future levels of total fertility and life expectancy from Bayesian hierarchical models, and combining the results using a standard cohort-component projection method. The 40 countries with generalized HIV/AIDS epidemics were treated differently from the others, in that the projections used the multistate Spectrum/EPP model, a complex 15-compartment model that was designed for short-term projections of quantities relevant to policy for the epidemic. Here we propose a simpler approach that is more compatible with the existing UN probabilistic projection methodology for other countries. Changes in life expectancy are projected probabilistically using a simple time series regression model on current life expectancy, HIV prevalence, and ART coverage. These are then converted to age- and sex-specific mortality rates using a new family of model life tables designed for countries with HIV/AIDS epidemics that reproduces the characteristic hump in middle adult mortality. These are then input to the standard cohort-component method, as for other countries. The method performed well in an out-of-sample cross-validation experiment. It gives similar population projections to Spectrum/EPP in the short run, while being simpler and avoiding multistate modeling.
Behavior of the maximum likelihood in quantum state tomography ; Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the log-likelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality, meaning the classical null theory for the log-likelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of local asymptotic normality, metric-projected local asymptotic normality, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
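For context, the classical statement of the Wilks theorem that the abstract argues cannot be applied under the positivity constraint is the following standard result (quoted here for reference; it is not a result of the paper):

\[
\lambda \;=\; -2 \log \frac{\sup_{\rho \in \mathcal{M}_0} L(\rho)}{\sup_{\rho \in \mathcal{M}} L(\rho)}
\;\xrightarrow{\ d\ }\; \chi^{2}_{k},
\]

where k is the difference in the number of free parameters of the nested models \(\mathcal{M}_0 \subset \mathcal{M}\); its validity rests on local asymptotic normality, which the boundary imposed by ρ ≥ 0 can violate.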
Real- and redshift-space halo clustering in f(R) cosmologies ; We present two-point correlation function statistics of the mass and the halos in the chameleon f(R) modified gravity scenario using a series of large-volume N-body simulations. Three distinct variations of f(R) are considered (F4, F5, and F6) and compared to a fiducial ΛCDM model in the redshift range z ∈ [0, 1]. We find that the matter clustering is indistinguishable for all models except for F4, which shows a significantly steeper slope. The ratio of the redshift- to real-space correlation function at scales above 20 h^-1 Mpc agrees with the linear General Relativity (GR) Kaiser formula for the viable f(R) models considered. We consider three halo populations characterized by spatial abundances comparable to those of luminous red galaxies (LRGs) and galaxy clusters. The redshift-space halo correlation functions of F4 and F5 deviate significantly from ΛCDM at intermediate and high redshift, as the f(R) halo bias is smaller than or equal to that of the ΛCDM case. Finally, we introduce a new model-independent clustering statistic to distinguish f(R) from GR: the relative halo clustering ratio R. The sampling required to adequately reduce the scatter in R will be available with the advent of the next generation of galaxy redshift surveys. This will foster a prospective avenue to obtain largely model-independent cosmological constraints on this class of modified gravity models.
A generalized priority-based model for smartphone screen touches ; The distribution of intervals between human actions such as email posts or keyboard strokes demonstrates distinct properties at short versus long time scales. For instance, at long time scales, which are presumably controlled by complex processes such as planning and decision making, it has been shown that those inter-event intervals follow a scale-invariant (power-law) distribution. In contrast, at shorter time scales, which are governed by different processes such as sensorimotor skill, they do not follow the same distribution, and little is known about how they relate to the scale-invariant pattern. Here, we analyzed 9 million intervals between smartphone screen touches of 84 individuals, which span several orders of magnitude, from milliseconds to hours. To capture these intervals, we extend a priority-based generative model to smartphone touching events. At short time scales, the model is governed by refractory effects, while at longer time scales, the inter-touch intervals are governed by the priority difference between smartphone tasks and other tasks. The flexibility of the model allows it to capture inter-individual variations at short and long time scales, while its tractability enables efficient model fitting. According to our model, each individual has a specific power-law exponent which is tightly related to the effective refractory time constant, suggesting that the motor processes which influence fast actions are related to the higher cognitive processes governing the longer inter-event intervals.
Multiscale modeling of the elastic behavior of architectured and nanostructured Cu-Nb composite wires ; Nanostructured and architectured copper-niobium composite wires are excellent candidates for the generation of intense pulsed magnetic fields (90 T) as they combine both high strength and high electrical conductivity. Multiscale Cu-Nb wires are fabricated by accumulative drawing and bundling (a severe plastic deformation technique), leading to a multiscale, architectured, and nanostructured microstructure exhibiting a strong fiber crystallographic texture and elongated grain shapes along the wire axis. This paper presents a comprehensive study of the effective elastic behavior of this composite material by three multiscale models accounting for different microstructural content: two mean-field models and a full-field finite element model. As the specimens exhibit many characteristic scales, several scale transition steps are carried out iteratively from the grain scale to the macroscale. The general agreement among the model responses allows suggesting the best strategy to estimate the effective behavior of Cu-Nb wires and save computational time. The importance of crystallographic and morphological textures in various cases is discussed. Finally, the models are validated by available experimental data with good agreement.
Aspects of fermion dynamics from Lorentz symmetry violation ; In this thesis we are interested in understanding how Lorentz symmetry violation can affect some features of fermion dynamics and, perhaps, help to solve some well-known problems in particle physics, such as the origin of neutrino masses and oscillations. Firstly, we consider two Lorentz-Invariance-Violating (LIV) models and investigate the possibility of generating masses and oscillations dynamically for both Dirac and Majorana neutrinos, using nonperturbative methods such as the Schwinger-Dyson and the effective potential approaches. In our studies, Lorentz symmetric models are extended by the inclusion of higher-order LIV operators, which improve the convergence of loop integrals and introduce a natural mass scale to the theories. We then present how Lorentz invariance can be recovered, for both models, after quantisation, in such a way that the dynamical masses and mixing are the only quantum effects that remain finite. Additionally, we study how matter fields, especially fermions, behave when coupled to two modified gravity models. Such modified gravity models break the 4-dimensional diffeomorphism invariance and, consequently, induce local Lorentz violation. In particular, we consider Horava-Lifshitz gravity, which presents an improved ultraviolet behaviour when compared to General Relativity (GR), and thus addresses a fundamental problem in physics: the perturbative non-renormalisability of GR. We calculate the LIV one-loop corrections to the matter sector dispersion relations, after integration over graviton components, and show that, by imposing reasonable constraints on the energy scales of our gravity models, our results are consistent with the current bounds on Lorentz symmetry violation.
What hadron collider is required to discover or falsify natural supersymmetry? ; Weak scale supersymmetry (SUSY) remains a compelling extension of the Standard Model because it stabilizes the quantum corrections to the Higgs and W, Z boson masses. In natural SUSY models these corrections are, by definition, never much larger than the corresponding masses. Natural SUSY models all have an upper limit on the gluino mass, too high to lead to observable signals even at the high luminosity LHC. However, in models with gaugino mass unification, the wino is sufficiently light that supersymmetry discovery is possible in other channels over the entire natural SUSY parameter space with no worse than 3% fine-tuning. Here, we examine the SUSY reach in more general models with and without gaugino mass unification (specifically, natural generalized mirage mediation) and show that the high energy LHC (HE-LHC), a pp collider with sqrt(s) = 33 TeV, will be able to detect the SUSY signal over the entire allowed mass range. Thus, the HE-LHC would either discover or conclusively falsify natural SUSY with better than 3% fine-tuning, using a conservative measure that allows for correlations among the model parameters.
Photon-Axion Conversion, Magnetic Field Configuration, and Polarization of Photons ; We study the evolution of photon polarization during the photon-axion conversion process, focusing on the dependence on the magnetic field configuration. Most previous studies have been carried out in a conventional model where a network of magnetic domains is considered and each domain has a constant magnetic field. We investigate a more general model where a network of domains is still assumed, but each domain has a helical magnetic field. We find that the asymptotic behavior does not depend on the configuration of the magnetic fields. Remarkably, we analytically obtain the asymptotic values of the variance of polarization in the conventional model. When the helicity is small, we show that a damped oscillating behavior appears in the early stage of the evolution. Moreover, we see that the constraints on the axion coupling and the cosmological magnetic fields obtained using polarization observations are affected by the magnetic field configuration. This is because different transient behavior of the polarization dynamics is caused by different magnetic field configurations. Recently, C. Wang and D. Lai [J. Cosmol. Astropart. Phys. 06 (2016) 006] claimed that photon-axion conversion in the helical model behaves peculiarly. However, our helical model gives predictions much closer to those of the conventional discontinuous magnetic field configuration model.
Propensity score prediction for electronic healthcare databases using Super Learner and High-dimensional Propensity Score Methods ; The optimal learner for prediction modeling varies depending on the underlying data-generating distribution. Super Learner (SL) is a generic ensemble learning algorithm that uses cross-validation to select among a library of candidate prediction models. The SL is not restricted to a single prediction model, but uses the strengths of a variety of learning algorithms to adapt to different databases. While the SL has been shown to perform well in a number of settings, it has not been thoroughly evaluated in large electronic healthcare databases that are common in pharmacoepidemiology and comparative effectiveness research. In this study, we applied and evaluated the performance of the SL in its ability to predict treatment assignment using three electronic healthcare databases. We considered a library of algorithms that consisted of both nonparametric and parametric models. We also considered a novel strategy for prediction modeling that combines the SL with the high-dimensional propensity score (hdPS) variable selection algorithm. Predictive performance was assessed using three metrics: the negative log-likelihood, area under the curve (AUC), and time complexity. Results showed that the best individual algorithm, in terms of predictive performance, varied across datasets. The SL was able to adapt to the given dataset and optimize predictive performance relative to any individual learner. Combining the SL with the hdPS was the most consistent prediction method and may be promising for PS estimation and prediction modeling in electronic healthcare databases.
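To make the cross-validation step concrete, the following is a minimal sketch of a discrete Super Learner: it selects, by cross-validated negative log-likelihood, the best learner from a small candidate library. The full Super Learner instead fits an optimal weighted combination of the candidates, and the library members and data below are illustrative placeholders, not those used in the study.

```python
# Minimal discrete-Super-Learner sketch: pick the library member with the best
# cross-validated log-likelihood. Library and data are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

library = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

scores = {}
for name, learner in library.items():
    # out-of-fold predicted probabilities, as in the Super Learner CV step
    p = cross_val_predict(learner, X, y, cv=5, method="predict_proba")[:, 1]
    scores[name] = log_loss(y, p)

best = min(scores, key=scores.get)
print(scores, "->", best)
```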
The jet-disk symbiosis without maximal jets: 1D hydrodynamical jets revisited ; In this work we discuss the recent criticism by Zdziarski of the maximal jet model derived in Falcke & Biermann (1995). We agree with Zdziarski that in general a jet's internal energy is not bounded by its rest-mass energy density. We describe the effects of the mistake on conclusions that have been made using the maximal jet model and show when a maximal jet is an appropriate assumption. The maximal jet model was used to derive a 1D hydrodynamical model of jets in agnjet, a model that performs multiwavelength fitting of quiescent/hard state X-ray binaries and low-luminosity active galactic nuclei. We correct algebraic mistakes made in the derivation of the 1D Euler equation and relax the maximal jet assumption. We show that the corrections cause minor differences as long as the jet has a small opening angle and a small terminal Lorentz factor. We find that the major conclusion from the maximal jet model, the jet-disk symbiosis, can be generally applied to astrophysical jets. We also show that isothermal jets are required to match the flat radio spectra seen in low-luminosity X-ray binaries and active galactic nuclei, in agreement with other works.
Majoron Dark Matter From a Spontaneous Inverse Seesaw Model ; The generation of neutrino masses by inverse seesaw mechanisms has advantages over other seesaw models, since the potential new physics can be produced at the TeV scale. We propose a model that realises the inverse seesaw mechanism via spontaneous breaking of lepton number, by extending the Standard Model with two scalar singlets and two fermion singlets, both charged under lepton number. The model gives rise to a massless Majoron and a massive pseudoscalar, which we dub the massive Majoron, corresponding to the Nambu-Goldstone boson of the breaking of lepton number. If the massive Majoron is stable on cosmological time scales, it might play the role of a suitable Dark Matter candidate. In this scenario, we examine the model with a massive Majoron in the keV range. In this regime, its decay mode to neutrinos is sensitive to the ratio between the vevs of the new scalars, ω, and it vanishes when ω ≃ sqrt(2/3), which holds within a large region of the parameter space. On the other hand, the cosmological lifetime of the Dark Matter candidate places constraints on its mass via scalar decays. In addition, simple mechanisms that explain the Dark Matter relic abundance within this context and plausible modifications to the proposed setup are briefly discussed.
An algorithm for removing sensitive information: application to race-independent recidivism prediction ; Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings (such as sentencing, hiring, policing, college admissions, and parole decisions) is the perceived neutrality of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained. Unlike previous work in this area, our framework is general enough to accommodate data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pretrial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce race-neutral predictions of re-arrest. In the process, we demonstrate that a common approach to creating race-neutral models, omitting race as a covariate, still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.
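The abstract does not spell out the algorithm itself, so the sketch below shows only one common, simpler way to strip protected-variable information: replace each covariate by its residual after regressing it on the protected attribute, which removes any linear association with that attribute. This is a generic illustration, not the authors' method (which is probabilistic and handles arbitrary measurement scales); the column names and data are made up.

```python
# Generic residualization illustration (not the paper's algorithm): transform
# each covariate so it carries no linear association with the protected variable.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "protected": rng.integers(0, 2, 500),     # hypothetical binary group label
    "prior_arrests": rng.poisson(2, 500),     # hypothetical covariates
    "age": rng.normal(35, 10, 500),
})

Z = pd.get_dummies(df["protected"], drop_first=True).to_numpy(dtype=float)
adjusted = {}
for col in ["prior_arrests", "age"]:
    fit = LinearRegression().fit(Z, df[col])
    adjusted[col] = df[col] - fit.predict(Z)  # residual: linearly uncorrelated with Z

X_adjusted = pd.DataFrame(adjusted)
print(X_adjusted.corrwith(pd.Series(Z.ravel())))  # approximately zero by construction
```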
A circuitpreserving mapping from multilevel to Boolean dynamics ; Many discrete models of biological networks rely exclusively on Boolean variables and many tools and theorems are available for analysis of strictly Boolean models. However, multilevel variables are often required to account for threshold effects, in which knowledge of the Boolean case does not generalise straightforwardly. This motivated the development of conversion methods for multilevel to Boolean models. In particular, Van Ham's method has been shown to yield a onetoone, neighbour and regulation preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks; most notably, it introduces vast regions of nonadmissible states that have no counterpart in the multilevel, original model. This raises special difficulties for the analysis of interaction between variables and circuit functionality, which is believed to be central to the understanding of dynamic properties of logical models. Here, we propose a new multilevel to Boolean conversion method, with software implementation. Contrary to Van Ham's, our method does not yield a onetoone transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus getting rid of the nonadmissible regions at the expense of apparently more complicated, parallel trajectories. One of the prominent features of our method is that it preserves dynamics and interaction of variables in a certain manner. As a demonstration of the usability of our method, we apply it to construct a new Boolean counterexample to the wellknown conjecture that a local negative circuit is necessary to generate sustained oscillations. This result illustrates the general relevance of our method for the study of multilevel logical models.
SpatiallyDependent Multiple Testing Under Model Misspecification, with Application to Detection of Anthropogenic Influence on Extreme Climate Events ; The Weather Risk Attribution Forecast WRAF is a forecasting tool that uses output from global climate models to make simultaneous attribution statements about whether and how greenhouse gas emissions have contributed to extreme weather across the globe. However, in conducting a large number of simultaneous hypothesis tests, the WRAF is prone to identifying false discoveries. A common technique for addressing this multiple testing problem is to adjust the procedure in a way that controls the proportion of true null hypotheses that are incorrectly rejected, or the false discovery rate FDR. Unfortunately, generic FDR procedures suffer from low power when the hypotheses are dependent, and techniques designed to account for dependence are sensitive to misspecification of the underlying statistical model. In this paper, we develop a Bayesian decision theoretic approach for dependent multiple testing and a nonparametric hierarchical statistical model that flexibly controls false discovery and is robust to model misspecification. We illustrate the robustness of our procedure to model error with a simulation study, using a framework that accounts for generic spatial dependence and allows the practitioner to flexibly specify the decision criteria. Finally, we apply our procedure to several seasonal forecasts and discuss implementation for the WRAF workflow.
NLO electroweak corrections in general scalar singlet models ; If no new physics signals are found, in the coming years, at the Large Hadron Collider Run2, an increase in precision of the Higgs couplings measurements will shift the discussion to the effects of higher order corrections. In Beyond the Standard Model BSM theories this may become the only tool to probe new physics. Extensions of the Standard Model SM with several scalar singlets may address several of its problems, namely to explain dark matter, the matterantimatter asymmetry, or to improve the stability of the SM up to the Planck scale. In this work we propose a general framework to calculate oneloop corrections in BSM models with an arbitrary number of scalar singlets. We then apply our method to a real and to a complex scalar singlet model. We assess the importance of the oneloop radiative corrections first by computing them for a tree level mixing sum constraint, and then for the main Higgs production process gg → H. We conclude that, for the currently allowed parameter space of these models, the corrections can be at most a few percent. Notably, a nonzero correction can survive when dark matter is present, in the SMlike limit of the Higgs couplings to other SM particles.
Generalized model for the diffusion of solvents in glassy polymers From Fickian to super Case II ; The diffusion of small solvent molecules in glassy polymers may take on a variety of different forms. Fickian, anomalous, Case II, and super Case II diffusion have all been observed, and theoretical models exist that describe each specific type of behavior. Here we present a single generalized kinetic model capable of yielding all these different types of diffusion on the basis of just two parameters. The principal determinant of the type of diffusion is observed to be a dimensionless parameter, gamma, that describes the influence of solventinduced swelling in lowering the potential barriers separating available solvent sites. A second parameter, eta, which characterizes the effect of the solvent in reducing the potential energy of a solvent molecule when at rest at an available site, only influences the type of diffusion to a lesser extent. The theoretical analysis does not include any effects that are explicitly nonlocal in time, an example of which is the inclusion of polymer viscosity in the ThomasWindle model; it thus represents a variant of Fick's second law utilizing a concentrationdependent diffusivity when eta is small. To check the significance of timedelayed swelling, a simulation was performed of a modified model that contained a historydependent term. The results were found to be very similar to those in the timelocal model.
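A minimal sketch, in the spirit of the kinetic picture described above, of a 1D lattice hopping Monte Carlo in which swelling at the destination site lowers the hop barrier through a parameter gamma. The rates, barrier rule and boundary condition are illustrative assumptions and do not reproduce the paper's formulation of gamma and eta.

```python
# Toy 1D hopping Monte Carlo: solvent particles hop right over barriers that are
# lowered when the destination site is already swollen (illustrative rule only).
import numpy as np

rng = np.random.default_rng(0)
L, steps, gamma = 200, 200_000, 2.0
occ = np.zeros(L, dtype=int)
occ[0] = 50                                   # fixed-concentration reservoir at the left face

for _ in range(steps):
    i = rng.integers(0, L - 1)                # pick a site with a right-hand neighbour
    if occ[i] == 0:
        continue
    barrier = max(1.0 - gamma * occ[i + 1] / (1.0 + occ[i + 1]), 0.0)
    if rng.random() < np.exp(-barrier):       # Arrhenius-like hop probability
        occ[i] -= 1
        occ[i + 1] += 1
    occ[0] = 50                               # keep the boundary concentration constant

print("penetration depth:", int(np.max(np.nonzero(occ))))
```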
Disordered statistical physics in low dimensions extremes, glass transition, and localization ; This thesis presents original results in two domains of disordered statistical physics logarithmic correlated Random Energy Models logREMs, and localization transitions in longrange random matrices. In the first part devoted to logREMs, we show how to characterise their common properties and modelspecific data. Then we develop their replica symmetry breaking treatment, which leads to the freezing scenario of their free energy distribution and the general description of their minima process, in terms of decorated Poisson point process. We also report a series of new applications of the Jack polynomials in the exact predictions of some observables in the circular model and its variants. Finally, we present the recent progress on the exact connection between logREMs and the Liouville conformal field theory. The goal of the second part is to introduce and study a new class of banded random matrices, the broadly distributed class, which is characterised by an effective sparseness. We will first study a specific model of the class, the Beta Banded random matrices, inspired by an exact mapping to a recently studied statistical model of longrange firstpassage percolationepidemics dynamics. Using analytical arguments based on the mapping and numerics, we show the existence of localization transitions with mobility edges in the stretchexponential parameter regime of the statistical models. Then, using a blockdiagonalization renormalization approach, we argue that such localization transitions occur generically in the broadly distributed class.
A multilevel block building algorithm for fast modeling generalized separable systems ; Datadriven modeling plays an increasingly important role in different areas of engineering. For most of existing methods, such as genetic programming GP, the convergence speed might be too slow for large scale problems with a large number of variables. It has become the bottleneck of GP for practical applications. Fortunately, in many applications, the target models are separable in some sense. In this paper, we analyze different types of separability of some realworld engineering equations and establish a mathematical model of generalized separable system GS system. In order to get the structure of the GS system, a multilevel block building MBB algorithm is proposed, in which the target model is decomposed into a number of blocks, further into minimal blocks and factors. Compared to the conventional GP, MBB can make large reductions to the search space. This makes MBB capable of modeling a complex system. The minimal blocks and factors are optimized and assembled with a global optimization search engine, low dimensional simplex evolution LDSE. An extensive study between the proposed MBB and a stateoftheart datadriven fitting tool, Eureqa, has been presented with several manmade problems, as well as some realworld problems. Test results indicate that the proposed method is more effective and efficient under all the investigated cases.
Counting Markov Equivalence Classes for DAG models on Trees ; DAG models are statistical models satisfying a collection of conditional independence relations encoded by the nonedges of a directed acyclic graph DAG mathcalG. Such models are used to model complex causeeffect systems across a variety of research fields. From observational data alone, a DAG model mathcalG is only recoverable up to Markov equivalence. Combinatorially, two DAGs are Markov equivalent if and only if they have the same underlying undirected graph i.e. skeleton and the same set of induced subDAGs i → j ← k, known as immoralities. Hence it is of interest to study the number and size of Markov equivalence classes MECs. In a recent paper, the authors introduced a pair of generating functions that enumerate the number of MECs on a fixed skeleton by number of immoralities and by class size, and they studied the complexity of computing these functions. In this paper, we lay the foundation for studying these generating functions by analyzing their structure for trees and other closely related graphs. We describe these polynomials for some important families of graphs including paths, stars, cycles, spider graphs, caterpillars, and complete binary trees. In doing so, we recover important connections to independence polynomials, and extend some classical identities that hold for Fibonacci numbers. We also provide tight lower and upper bounds for the number and size of MECs on any tree. Finally, we use computational methods to show that the number and distribution of high degree nodes in a trianglefree graph dictates the number and size of MECs.
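The combinatorial characterisation quoted above (same skeleton and same immoralities) is easy to check directly. The sketch below, using networkx, is only an illustrative implementation of that check, not the paper's enumeration machinery.

```python
# Check Markov equivalence of two DAGs: same skeleton and same immoralities.
import itertools
import networkx as nx

def skeleton(g):
    return {frozenset(e) for e in g.to_undirected().edges()}

def immoralities(g):
    ims = set()
    for j in g.nodes:
        for i, k in itertools.combinations(sorted(g.predecessors(j)), 2):
            if not (g.has_edge(i, k) or g.has_edge(k, i)):   # parents must be non-adjacent
                ims.add((i, j, k))
    return ims

def markov_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and immoralities(g1) == immoralities(g2)

chain = nx.DiGraph([(1, 2), (2, 3)])      # 1 -> 2 -> 3
fork = nx.DiGraph([(2, 1), (2, 3)])       # 1 <- 2 -> 3: same MEC as the chain
collider = nx.DiGraph([(1, 2), (3, 2)])   # 1 -> 2 <- 3: contains an immorality
print(markov_equivalent(chain, fork), markov_equivalent(chain, collider))   # True False
```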
Anomaly Detection and Modeling in 802.11 Wireless Networks ; IEEE 802.11 Wireless Networks are getting more and more popular at university campuses, enterprises, shopping centers, airports and in so many other public places, providing Internet access to a large crowd openly and quickly. The wireless users are also getting more dependent on WiFi technology and therefore demanding more reliability and higher performance for this vital technology. However, due to unstable radio conditions, faulty equipment, and dynamic user behavior among other reasons, there are always unpredictable performance problems in a wireless covered area. Detection and prediction of such problems is of great significance to network managers if they are to alleviate the connectivity issues of the mobile users and provide a higher quality wireless service. This paper aims to improve the management of the 802.11 wireless networks by characterizing and modeling wireless usage patterns in a set of anomalous scenarios that can occur in such networks. We apply timeinvariant Gaussian Mixture Models and timevariant Hidden Markov Models modeling approaches to a dataset generated from a large production network and describe how we use these models for anomaly detection. We then generate several common anomalies on a Testbed network and evaluate the proposed anomaly detection methodologies in a controlled environment. The experimental results of the Testbed show that HMM outperforms GMM and yields a higher anomaly detection ratio and a lower false alarm rate.
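A hedged sketch of the GMM side of the approach described above: fit a mixture model on features of normal traffic windows and flag new windows with unusually low likelihood. The features (active users, throughput) and threshold are invented for illustration; the HMM variant is analogous but models the temporal dependence between windows.

```python
# Fit a GMM on features of "normal" traffic windows and flag windows whose
# log-likelihood falls below a low quantile of the training scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal_windows = rng.normal(loc=[50.0, 5.0], scale=[10.0, 1.0], size=(500, 2))
gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_windows)

threshold = np.quantile(gmm.score_samples(normal_windows), 0.01)   # 1st-percentile log-likelihood
new_window = np.array([[200.0, 0.5]])                              # load spike with poor throughput
label = "anomaly" if gmm.score_samples(new_window)[0] < threshold else "normal"
print(label)
```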
Planck 2015 constraints on the nonflat LambdaCDM inflation model ; We study Planck 2015 cosmic microwave background CMB anisotropy data using the energy density inhomogeneity power spectrum generated by quantum fluctuations during an early epoch of inflation in the nonflat LambdaCDM model. Unlike earlier analyses of nonflat models, which assumed an inconsistent powerlaw power spectrum of energy density inhomogeneities, we find that the Planck 2015 data alone, and also in conjunction with baryon acoustic oscillation measurements, are reasonably well fit by a closed LambdaCDM model in which spatial curvature contributes a few percent of the current cosmological energy density budget. In this model, the measured Hubble constant and nonrelativistic matter density parameter are in good agreement with values determined using most other data. Depending on parameter values, the closed LambdaCDM model has reduced power, relative to the tilted, spatiallyflat LambdaCDM case, and can partially alleviate the low multipole CMB temperature anisotropy deficit and can help partially reconcile the CMB anisotropy and weak lensing sigma8 constraints, at the expense of somewhat worsening the fit to higher multipole CMB temperature anisotropy data. Our results are interesting but tentative; a more thorough analysis is needed to properly gauge their significance.
Beyond delta Tailoring marked statistics to reveal modified gravity ; Models that seek to explain cosmic acceleration through modifications to General Relativity GR evade stringent Solar System constraints through a restoring, screening mechanism. Downweighting the high density, screened regions in favor of the low density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new marked transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of LambdaCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signaltonoise ratio SNR for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformation that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier and real space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the LambdaCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the predicted information extracted on deviations from GR, from largescale surveys, and give the prospect for a potential detection much more feasible.
Bayesian approach to Spatiotemporally Consistent Simulation of Daily Monsoon Rainfall over India ; Simulation of rainfall over a region for long timesequences can be very useful for planning and policymaking, especially in India where the economy is heavily reliant on monsoon rainfall. However, such simulations should be able to preserve the known spatial and temporal characteristics of rainfall over India. General Circulation Models GCMs are unable to do so, and various rainfall generators designed by hydrologists using stochastic processes like Gaussian Processes are also difficult to apply over the vast and highly diverse landscape of India. In this paper, we explore a series of Bayesian models based on conditional distributions of latent variables that describe weather conditions at specific locations and over the whole country. During parameter estimation from observed data, we use spatiotemporal smoothing using Markov Random Field so that the parameters learnt are spatially and temporally coherent. Also, we use a nonparametric spatial clustering based on Chinese Restaurant Process to identify homogeneous regions, which are utilized by some of the proposed models to improve spatial correlations of the simulated rainfall. The models are able to simulate daily rainfall across India for years, and can also utilize contextual information for conditional simulation. We use two datasets of different spatial resolutions over India, and focus on the period 20002015. We propose a large number of metrics to study the spatiotemporal properties of the simulations by the models, and compare them with the observed data to evaluate the strengths and weaknesses of the models.
Selfsupervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation ; Enabling robots to autonomously navigate complex environments is essential for realworld deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learningbased methods improve as the robot acts in the environment, but are difficult to deploy in the realworld due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes valuebased modelfree methods and modelbased methods, with specific instantiations interpolating between modelfree and modelbased. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms singlestep and Nstep double Qlearning. We also evaluate our approach on a realworld RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, selfsupervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
An Analytical DiffusionExpansion Model for Forbush Decreases Caused by Flux Ropes ; We present an analytical diffusionexpansion Forbush decrease FD model ForbMod which is based on the widely used approach of the initially empty, closed magnetic structure i.e. flux rope which fills up slowly with particles by perpendicular diffusion. The model is restricted to explain only the depression caused by the magnetic structure of the interplanetary coronal mass ejection ICME. We use remote CME observations and a 3D reconstruction method the Graduated Cylindrical Shell method to constrain initial boundary conditions of the FD model and take into account CME evolutionary properties by incorporating flux rope expansion. Several flux rope expansion modes are considered, which can lead to different FD characteristics. In general, the model is qualitatively in agreement with observations, whereas quantitative agreement depends on the diffusion coefficient and the expansion properties interplay of the diffusion and the expansion. A case study was performed to explain the FD observed 2014 May 30. The observed FD was fitted quite well by ForbMod for all expansion modes using only the diffusion coefficient as a free parameter, where the diffusion parameter was found to correspond to expected range of values. Our study shows that in general the model is able to explain the global properties of FD caused by FR and can thus be used to help understand the underlying physics in case studies.
Robustness of shaperestricted regression estimators an envelope perspective ; Classical least squares estimators are wellknown to be robust with respect to moment assumptions concerning the error distribution in a wide variety of finitedimensional statistical problems; generally only a second moment assumption is required for least squares estimators to maintain the same rate of convergence that they would satisfy if the errors were assumed to be Gaussian. In this paper, we give a geometric characterization of the robustness of shaperestricted least squares estimators LSEs to error distributions with an L2,1 moment, in terms of the 'localized envelopes' of the model. This envelope perspective gives a systematic approach to proving oracle inequalities for the LSEs in shaperestricted regression problems in the random design setting, under a minimal L2,1 moment assumption on the errors. The canonical isotonic and convex regression models, and a more challenging additive regression model with shape constraints are studied in detail. Strikingly enough, in the additive model both the adaptation and robustness properties of the LSE can be preserved, up to error distributions with an L2,1 moment, for estimating the shapeconstrained proxy of the marginal L2 projection of the true regression function. This holds essentially regardless of whether or not the additive model structure is correctly specified. The new envelope perspective goes beyond shape constrained models. Indeed, at a general level, the localized envelopes give a sharp characterization of the convergence rate of the L2 loss of the LSE between the worstcase rate as suggested by the recent work of the authors [25], and the best possible parametric rate.
Solar wind dynamics around a comet A 2D semianalytical kinetic model ; We aim at analytically modelling the solar wind proton trajectories during their interaction with a partially ionised cometary atmosphere, not in terms of bulk properties of the flow but in terms of single particle dynamics. We first derive a generalised gyromotion, in which the electric field is reduced to its motional component. Steadystate is assumed, and simplified models of the cometary density and of the electron fluid are used to express the force experienced by individual solar wind protons during the interaction. A threedimensional 3D analytical expression of the gyration of two interacting plasma beams is obtained. Applying it to a comet case, the force on protons is always perpendicular to their velocity and has an amplitude proportional to 1/r^2. The solar wind deflection is obtained at any point in space. The resulting picture presents a caustic of intersecting trajectories, and a circular region is found that is completely free of particles. The particles do not lose any kinetic energy and this absence of deceleration, together with the solar wind deflection pattern and the presence of a solar wind ion cavity, is in good agreement with the general results of the Rosetta mission. The qualitative match between the model and the in situ data highlights how dominant the motional electric field is throughout most of the interaction region for the solar wind proton dynamics. The model provides a simple general kinetic description of how momentum is transferred between these two collisionless plasmas. It also shows the potential of this semianalytical model for a systematic quantitative comparison to the data.
Complex System Design with Design Languages Method, Applications and Design Principles ; Graphbased design languages in UML Unified Modeling Language are presented as a method to encode and automate the complete design process and the final optimization of the product or complex system. A design language consists of a vocabulary digital building blocks and a set of rules digital composition knowledge along with an executable sequence of the rules digital encoding of the design process. The rulebased mechanism instantiates a central and consistent global product data structure, the socalled design graph. Upon the generation of the abstract central model, the domainspecific engineering models are automatically generated, remotely executed and their results are fed back into the central design model for subsequent design decisions or optimizations. The design languages are manually modeled and automatically executed in a socalled design compiler. Up to now, a variety of product designs in the areas of aerospace, automotive, machinery and consumer products have been successfully accelerated and automated using graphbased design languages. Different design strategies and mechanisms have been identified and applied in the automation of the design processes. Approaches ranging from the automated and declarative processing of constraints, through fractal nested design patterns, to mathematical dimensionbased derivation of the sequence of design actions are used. The existing knowledge for a design determines the global design strategy topdown vs. bottomup. Similaritymechanics in the form of dimensionless invariants are used for evaluation to downsize the solution for an overall complexity reduction. Design patterns, design paradigms form follows function and design strategies divide and conquer from information science are heavily used to structure, manage and handle complexity.
Common Origin of Dirac Neutrino Mass and Freezein Massive Particle Dark Matter ; Motivated by the fact that the origin of tiny Dirac neutrino masses via the standard model Higgs field and nonthermal dark matter populating the Universe via freezein mechanism require tiny dimensionless couplings of similar order of magnitude (about 10^-12), we propose a framework that can dynamically generate such couplings in a unified manner. Adopting a flavour symmetric approach based on the A4 group, we construct a model where Dirac neutrino coupling to the standard model Higgs and dark matter coupling to its mother particle occur at dimension six level involving the same flavon fields, thereby generating effective Yukawa couplings of the same order of magnitude. The mother particle for dark matter, a complex scalar singlet, gets thermally produced in the early Universe through Higgs portal couplings followed by its thermal freezeout and then decay into the dark matter candidates giving rise to the freezein dark matter scenario. Some parts of the Higgs portal couplings of the mother particle can also be excluded by collider constraints on the invisible decay rate of the standard model like Higgs boson. We show that the correct neutrino oscillation data can be successfully produced in the model which predicts normal hierarchical neutrino mass. The model also predicts the atmospheric angle to be in the lower octant if the Dirac CP phase lies close to the presently preferred maximal value.
KnowledgeAware Conversational Semantic Parsing Over Web Tables ; Conversational semantic parsing over tables requires knowledge acquiring and reasoning abilities, which have not been well explored by current stateoftheart approaches. Motivated by this fact, we propose a knowledgeaware semantic parser to improve parsing performance by integrating various types of knowledge. In this paper, we consider three types of knowledge, including grammar knowledge, expert knowledge, and external resource knowledge. First, grammar knowledge empowers the model to effectively replicate previously generated logical form, which effectively handles the coreference and ellipsis phenomena in conversation. Second, based on expert knowledge, we propose a decomposable model, which is more controllable compared with traditional endtoend models that put all the burdens of learning on trialanderror in an endtoend way. Third, external resource knowledge, i.e., provided by a pretrained language model or an entity typing model, is used to improve the representation of question and table for a better semantic understanding. We conduct experiments on the SequentialQA dataset. Results show that our knowledgeaware model outperforms the stateoftheart approaches. Incremental experimental results also prove the usefulness of various knowledge. Further analysis shows that our approach has the ability to derive the meaning representation of a contextdependent utterance by leveraging previously generated outcomes.
QueryEfficient BlackBox Attack by Active Learning ; Deep neural network DNN as a popular machine learning model is found to be vulnerable to adversarial attack. This attack constructs adversarial examples by adding small perturbations to the raw input, while appearing unmodified to human eyes but will be misclassified by a welltrained classifier. In this paper, we focus on the blackbox attack setting where attackers have almost no access to the underlying models. To conduct blackbox attack, a popular approach aims to train a substitute model based on the information queried from the target DNN. The substitute model can then be attacked using existing whitebox attack approaches, and the generated adversarial examples will be used to attack the target DNN. Despite its encouraging results, this approach suffers from poor query efficiency, i.e., attackers usually need to query a huge number of times to collect enough information for training an accurate substitute model. To this end, we first utilize stateoftheart whitebox attack methods to generate samples for querying, and then introduce an active learning strategy to significantly reduce the number of queries needed. Besides, we also propose a diversity criterion to avoid the sampling bias. Our extensive experimental results on MNIST and CIFAR10 show that the proposed method can reduce more than 90% of queries while preserving attack success rates and obtaining an accurate substitute model that is more than 85% similar to the target oracle.
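A toy sketch of the substitute-model loop with uncertainty-based active learning described above. The oracle, query pool and batch sizes are hypothetical, and the paper's additional ingredients, white-box attack methods for generating query candidates and the diversity criterion, are omitted.

```python
# Train a substitute model on oracle-labelled queries, then repeatedly query the
# points the substitute is least confident about (least-confidence active learning).
import numpy as np
from sklearn.neural_network import MLPClassifier

def target_oracle(x):                          # stand-in for the black-box target model
    return (x.sum(axis=1) > 0).astype(int)

rng = np.random.default_rng(0)
pool = rng.normal(size=(2000, 10))             # candidate inputs we are allowed to query
queried = list(rng.choice(len(pool), 50, replace=False))

substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
for _ in range(5):                             # active-learning rounds
    X_q = pool[queried]
    substitute.fit(X_q, target_oracle(X_q))
    uncertainty = 1.0 - substitute.predict_proba(pool).max(axis=1)
    uncertainty[queried] = -1.0                # never re-query the same point
    queried += list(np.argsort(uncertainty)[-20:])

print("total queries:", len(queried),
      "substitute accuracy:", substitute.score(pool, target_oracle(pool)))
```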
CurriculumBased Neighborhood Sampling For Sequence Prediction ; The task of multistep ahead prediction in language models is challenging considering the discrepancy between training and testing. At test time, a language model is required to make predictions given past predictions as input, instead of the past targets that are provided during training. This difference, known as exposure bias, can lead to the compounding of errors along a generated sequence at test time. In order to improve generalization in neural language models and address compounding errors, we propose a curriculum learning based method that gradually changes an initially deterministic teacher policy to an increasingly stochastic policy, which we refer to as textitNearestNeighbor Replacement Sampling. A chosen input at a given timestep is replaced with a sampled nearest neighbor of the past target with a truncated probability proportional to the cosine similarity between the original word and its top k most similar words. This allows the teacher to explore alternatives when the teacher provides a suboptimal policy or when the initial policy is difficult for the learner to model. The proposed strategy is straightforward, online and requires little additional memory. We report our main findings on two language modelling benchmarks and find that the proposed approach performs particularly well when used in conjunction with scheduled sampling, which also attempts to mitigate compounding errors in language models.
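A minimal sketch of the replacement step described above: sample one of the top-k cosine-similar neighbours of the ground-truth token, with probability proportional to a shifted similarity. The embedding table is random, and the curriculum schedule that controls how often replacement is applied is not shown.

```python
# Replace an input token with one of its top-k nearest neighbours in embedding space,
# sampled with probability proportional to (shifted) cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "a", "cat", "dog", "sat", "ran"]
emb = rng.normal(size=(len(vocab), 16))                     # toy embedding table
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def neighbour_replace(token_id, k=3):
    sims = emb @ emb[token_id]                              # cosine similarities
    sims[token_id] = -np.inf                                # exclude the token itself
    top_k = np.argsort(sims)[-k:]
    probs = sims[top_k] - sims[top_k].min() + 1e-8          # truncate to top-k, make non-negative
    probs /= probs.sum()
    return int(rng.choice(top_k, p=probs))

print(vocab[neighbour_replace(vocab.index("cat"))])
```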
Gravitational waves from conformal symmetry breaking ; We consider the electroweak phase transition in the conformal extension of the standard model known as SU(2)cSM. Apart from the standard model particles, this model contains an additional scalar and gauge field that are both charged under the hidden SU(2)_X. This model generically exhibits a very strong phase transition that proceeds after a large amount of supercooling. We estimate the gravitational wave spectrum produced in this model and show that its amplitude and frequency fall within the observational window of LISA. We also discuss potential pitfalls and relevant points of improvement required to attain reliable estimates of the gravitational wave production in this as well as in a more general class of models. In order to improve perturbativity during the early stages of the transition that ends with bubble nucleation, we solve a thermal gap equation in the scalar sector inspired by the 2PI effective action formalism.
Highperformance stock index trading making effective use of a deep LSTM neural network ; We present a deep long shortterm memory LSTMbased neural network for predicting asset prices, together with a successful trading strategy for generating profits based on the model's predictions. Our work is motivated by the fact that the effectiveness of any prediction model is inherently coupled to the trading strategy it is used with, and vice versa. This highlights the difficulty in developing models and strategies which are jointly optimal, but also points to avenues of investigation which are broader than prevailing approaches. Our LSTM model is structurally simple and generates predictions based on price observations over a modest number of past trading days. The model's architecture is tuned to promote profitability, as opposed to accuracy, under a strategy that does not trade simply based on whether the price is predicted to rise or fall, but rather takes advantage of the distribution of predicted returns, and the fact that a prediction's position within that distribution carries useful information about the expected profitability of a trade. The proposed model and trading strategy were tested on the S&P 500, Dow Jones Industrial Average DJIA, NASDAQ and Russell 2000 stock indices, and achieved cumulative returns of 340%, 185%, 371% and 360%, respectively, over 2010-2018, far outperforming the benchmark buyandhold strategy as well as other recent efforts.
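A hedged sketch of the general setup: a small LSTM maps a window of past returns to a next-day forecast, and the trading rule only acts on forecasts in the upper tail of the forecast distribution. The window length, architecture, synthetic data and threshold are illustrative assumptions, not the paper's tuned configuration.

```python
# A small LSTM forecasting next-day returns from the previous 20 days; trades are
# only taken when a forecast lies in the top decile of recent forecasts.
import numpy as np
import torch
import torch.nn as nn

class ReturnLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # next-day return forecast

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=1000).astype(np.float32)
window = 20
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])[..., None]
y = returns[window:][:, None]

model = ReturnLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                              # a few illustrative epochs
    pred = model(torch.from_numpy(X))
    loss = nn.functional.mse_loss(pred, torch.from_numpy(y))
    opt.zero_grad(); loss.backward(); opt.step()

forecasts = model(torch.from_numpy(X)).detach().numpy().ravel()
go_long = forecasts > np.quantile(forecasts, 0.9)   # trade only on top-decile forecasts
print("fraction of days traded:", go_long.mean())
```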
Phase field modeling of quasistatic and dynamic crack propagation COMSOL implementation and case studies ; The phasefield model PFM represents the crack geometry in a diffusive way without introducing sharp discontinuities. This feature enables PFM to effectively model crack propagation compared with numerical methods based on the discrete crack model, especially for complex crack patterns. Due to the involvement of the phase field, the phasefield method can essentially be treated as a multifield problem even for a purely mechanical problem. Therefore, it is supposed that the implementation of PFM in software that especially supports the solution of multifield problems should be more effective, simpler and more efficient than PFM implemented in a general finite element software. In this work, the authors aim to devise a simple and efficient implementation of the phasefield model for the modelling of quasistatic and dynamic fracture in the general purpose commercial software COMSOL Multiphysics. Notably, only the tensile stress induced crack is accounted for in the crack evolution by using the decomposition of the elastic strain energy. The width of the diffusive crack is controlled by a lengthscale parameter. Equations that govern body motion and phasefield evolution are written into different modules in COMSOL, which are then coupled to a whole system to be solved. A staggered scheme is adopted to solve the coupled system and each module is solved sequentially during one time step. A number of 2D and 3D examples are tested to investigate the performance of the present implementation. Our simulations show good agreement with previous works, indicating the feasibility and validity of the COMSOL implementation of PFM.
On the inexistence of solitons in EinsteinMaxwellscalar models ; Three nonexistence results are established for selfgravitating solitons in EinsteinMaxwellscalar models, wherein the scalar field is, generically, nonminimally coupled to the Maxwell field via a scalar function f(Phi). Firstly, a trivial Maxwell field is considered, which yields a consistent truncation of the full model. In this case, using a scaling Derricktype argument, it is established that no stationary and axisymmetric selfgravitating scalar solitons exist, unless the scalar potential energy is somewhere negative in spacetime. This generalises previous results for the static and strictly stationary cases. Thus, rotation alone cannot support selfgravitating scalar solitons in this class of models. Secondly, constant sign couplings are considered. Generalising a previous argument by Heusler for electrovacuum, it is established that no static selfgravitating electromagneticscalar solitons exist. Thus, a varying but constant sign electric permittivity alone cannot support static EinsteinMaxwellscalar solitons. Finally, the second result is generalised for strictly stationary, but not necessarily static, spacetimes, using a Lichnerowicztype argument, generalising previous results in models where the scalar and Maxwell fields are not directly coupled. The scope of validity of each of these results points out the possible paths to circumvent them, in order to obtain selfgravitating solitons in EinsteinMaxwellscalar models.
Time Distribution for Persistent Viral Infection ; We study the early stages of viral infection, and the distribution of times to obtain a persistent infection. The virus population proliferates by entering and reproducing inside a target cell until a sufficient number of new virus particles are released via a burst, with a given burst size distribution, which results in the death of the infected cell. Starting with a 2D model describing the joint dynamics of the virus and infected cell populations, we analyze the corresponding master equation using the probability generating function formalism. Exploiting timescale separation between the virus and infected cell dynamics, the 2D model can be cast into an effective 1D model. We then solve the 1D model analytically for a particular choice of burst size distribution. In the general case, we solve the model numerically by performing extensive MonteCarlo simulations, and demonstrate the equivalence between the 2D and 1D models by measuring the KullbackLeibler divergence between the corresponding distributions. Importantly, we find that the distribution of infection times is highly skewed with a fat exponential right tail. This indicates that there is a nonnegligible portion of individuals with an infection time significantly longer than the mean, which may have implications on when HIV tests should be performed.
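A hedged Gillespie-style sketch of the joint virus / infected-cell dynamics with bursts, recording the first time the virus population exceeds a persistence threshold. The rates, the Poisson burst-size distribution and the threshold are invented for illustration and are not the paper's parameter choices.

```python
# Gillespie-style simulation: virions infect cells, infected cells burst and release
# a Poisson number of new virions, free virions are cleared; record the first time
# the virus population exceeds a persistence threshold.
import numpy as np

def time_to_persistence(rng, v0=1, k_infect=0.1, k_burst=1.0, k_clear=0.5,
                        mean_burst=10, threshold=100, t_max=1000.0):
    v, c, t = v0, 0, 0.0                       # free virions, infected cells, time
    while t < t_max:
        if v >= threshold:
            return t
        rates = np.array([k_infect * v, k_burst * c, k_clear * v])
        total = rates.sum()
        if total == 0.0:
            return np.inf                      # extinction of both populations
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            v, c = v - 1, c + 1                          # a virion enters a target cell
        elif event == 1:
            c, v = c - 1, v + rng.poisson(mean_burst)    # burst releases new virions
        else:
            v -= 1                                       # clearance of a free virion
    return np.inf

rng = np.random.default_rng(0)
times = np.array([time_to_persistence(rng) for _ in range(500)])
finite = times[np.isfinite(times)]
print("persistent fraction:", len(finite) / len(times),
      "mean infection time:", finite.mean())
```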
Convergence of the Time Discrete Metamorphosis Model on Hadamard Manifolds ; Continuous image morphing is a classical task in image processing. The metamorphosis model proposed by Trouvé, Younes and coworkers casts this problem in the frame of Riemannian geometry and geodesic paths between images. The associated metric in the space of images incorporates dissipation caused by a viscous flow transporting image intensities and its variations along motion paths. In many applications, images are maps from the image domain into a manifold e.g. in diffusion tensor imaging DTI the manifold of symmetric positive definite matrices with a suitable Riemannian metric. In this paper, we propose a generalized metamorphosis model for manifoldvalued images, where the range space is a finitedimensional Hadamard manifold. A corresponding time discrete version was presented by Neumayer et al. based on the general variational time discretization proposed by Berkels et al. Here, we prove the Moscoconvergence of the time discrete metamorphosis functional to the proposed manifoldvalued metamorphosis model, which implies the convergence of time discrete geodesic paths to a geodesic path in the time continuous metamorphosis model. In particular, the existence of geodesic paths is established. In fact, images as maps into a Hadamard manifold are not only relevant in applications, but it is also shown that the joint convexity of the distance function which characterizes Hadamard manifolds is a crucial ingredient to establish existence of the metamorphosis model.
Quantum Generalized Linear Models ; Generalized linear models GLM are link function based statistical models. Many supervised learning algorithms are extensions of GLMs and have link functions built into the algorithm to model different outcome distributions. There are two major drawbacks when using this approach in applications using real world datasets. One is that none of the link functions available in the popular packages is a good fit for the data. Second, it is computationally inefficient and impractical to test all the possible distributions to find the optimum one. In addition, many GLMs and their machine learning extensions struggle on problems of overdispersion in Tweedie distributions. In this paper we propose a quantum extension to GLM that overcomes these drawbacks. A quantum gate with nonGaussian transformation can be used to continuously deform the outcome distribution from known results. In doing so, we eliminate the need for a link function. Further, by using an algorithm that superposes all possible distributions to collapse to fit a dataset, we optimize the model in a computationally efficient way. We provide an initial proofofconcept by testing this approach on both a simulation of overdispersed data and then on a benchmark dataset, which is quite overdispersed, and achieved state of the art results. This is a game changer in several applied fields, such as part failure modeling, medical research, actuarial science, finance and many other fields where Tweedie regression and overdispersion are ubiquitous.
A New Generation of Cool White Dwarf Atmosphere Models. IV. Revisiting the Spectral Evolution of Cool White Dwarfs ; As a result of competing physical mechanisms, the atmospheric composition of white dwarfs changes throughout their evolution, a process known as spectral evolution. Because of the ambiguity of their atmospheric compositions and the difficulties inherent to the modeling of their dense atmospheres, no consensus exists regarding the spectral evolution of cool white dwarfs (Teff ≲ 6000 K). In the previous papers of this series, we presented and observationally validated a new generation of cool white dwarf atmosphere models that include all the necessary constitutive physics to accurately model those objects. Using these new models and a homogeneous sample of 501 cool white dwarfs, we revisit the spectral evolution of cool white dwarfs. Our sample includes all spectroscopically identified white dwarfs cooler than 8300 K for which a parallax is available in Gaia DR2 and photometric observations are available in PanSTARRS1 and 2MASS. Except for a few cool carbonpolluted objects, our models allow an excellent fit to the spectroscopic and photometric observations of all objects included in our sample. We identify a decrease of the ratio of hydrogen to heliumrich objects between 7500 K and 6250 K, which we interpret as the signature of convective mixing. After this decrease, hydrogenrich objects become more abundant up to 5000 K. This puzzling increase, reminiscent of the nonDA gap, has yet to be explained. At lower temperatures, below 5000 K, hydrogenrich white dwarfs become rarer, which rules out the scenario according to which accretion of hydrogen from the interstellar medium dominates the spectral evolution of cool white dwarfs.
Learning to Groove with Inverse Sequence Transformations ; We explore models for translating abstract musical ideas scores, rhythms into expressive performances using Seq2Seq and recurrent Variational Information Bottleneck VIB models. Though Seq2Seq models usually require painstakingly aligned corpora, we show that it is possible to adapt an approach from the Generative Adversarial Network GAN literature e.g. Pix2Pix Isola et al., 2017 and Vid2Vid Wang et al. 2018a to sequences, creating large volumes of paired data by performing simple transformations and training generative models to plausibly invert these transformations. Music, and drumming in particular, provides a strong test case for this approach because many common transformations quantization, removing voices have clear semantics, and models for learning to invert them have realworld applications. Focusing on the case of drum set players, we create and release a new dataset for this purpose, containing over 13 hours of recordings by professional drummers aligned with finegrained timing and dynamics information. We also explore some of the creative potential of these models, including demonstrating improvements on stateoftheart methods for Humanization instantiating a performance from a musical score.
SpeakerIndependent SpeechDriven Visual Speech Synthesis using DomainAdapted Acoustic Models ; Speechdriven visual speech synthesis involves mapping features extracted from acoustic speech to the corresponding lip animation controls for a face model. This mapping can take many forms, but a powerful approach is to use deep neural networks DNNs. However, a limitation is the lack of synchronized audio, video, and depth data required to reliably train the DNNs, especially for speakerindependent models. In this paper, we investigate adapting an automatic speech recognition ASR acoustic model AM for the visual speech synthesis problem. We train the AM on ten thousand hours of audioonly data. The AM is then adapted to the visual speech synthesis domain using ninety hours of synchronized audiovisual speech. Using a subjective assessment test, we compared the performance of the AMinitialized DNN to one with a random initialization. The results show that viewers significantly prefer animations generated from the AMinitialized DNN than the ones generated using the randomly initialized model. We conclude that visual speech synthesis can significantly benefit from the powerful representation of speech in the ASR acoustic models.
ModelAgnostic Counterfactual Explanations for Consequential Decisions ; Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimizationbased methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models e.g., decision trees or linear models and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function objective and predictive model constraints are represented as logic formulae. As shown by our experiments on realworld data, our algorithm is i modelagnostic nonlinear, nondifferentiable, nonconvex; ii datatypeagnostic heterogeneous features; iii distanceagnostic ℓ0, ℓ1, ℓ∞, and combinations thereof; iv able to generate plausible and diverse counterfactuals for any sample i.e., 100% coverage; and v at provably optimal distances.
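A minimal sketch of the underlying idea of encoding the model constraint and a distance objective for a solver, here using z3's Optimize on a hypothetical linear classifier with an L1 objective. The paper instead iterates satisfiability queries and supports far more general models, feature types and distances.

```python
# Find a nearest counterfactual for a toy linear classifier by asking a solver for a
# point that flips the prediction while minimising the L1 distance to the factual input.
from z3 import Optimize, Real, If, sat

w, b = [2.0, -1.0], -0.5                 # hypothetical linear model: predict 1 iff w.x + b >= 0
x_factual = [0.1, 0.4]                   # instance currently predicted as 0

opt = Optimize()
x = [Real(f"x_{i}") for i in range(len(w))]
score = sum(wi * xi for wi, xi in zip(w, x)) + b
opt.add(score >= 0.01)                   # require the flipped prediction (with a small margin)

# L1 distance to the factual instance, encoded with If-based absolute values.
dist = sum(If(xi - v >= 0, xi - v, v - xi) for xi, v in zip(x, x_factual))
opt.minimize(dist)

if opt.check() == sat:
    m = opt.model()
    print("counterfactual:", [m.eval(v) for v in x])
```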
Validation of Approximate Likelihood and Emulator Models for Computationally Intensive Simulations ; Complex phenomena in engineering and the sciences are often modeled with computationally intensive feedforward simulations for which a tractable analytic likelihood does not exist. In these cases, it is sometimes necessary to estimate an approximate likelihood or fit a fast emulator model for efficient statistical inference; such surrogate models include Gaussian synthetic likelihoods and more recently neural density estimators such as autoregressive models and normalizing flows. To date, however, there is no consistent way of quantifying the quality of such a fit. Here we propose a statistical framework that can distinguish any arbitrary misspecified model from the target likelihood, and that in addition can identify with statistical confidence the regions of parameter as well as feature space where the fit is inadequate. Our validation method applies to settings where simulations are extremely costly and generated in batches or ensembles at fixed locations in parameter space. At the heart of our approach is a twosample test that quantifies the quality of the fit at fixed parameter values, and a global test that assesses goodnessoffit across simulation parameters. While our general framework can incorporate any test statistic or distance metric, we specifically argue for a new twosample test that can leverage any regression method to attain high power and provide diagnostics in complex data settings.
LambdaOpt Learn to Regularize Recommender Models in Finer Levels ; Recommendation models mainly deal with categorical variables, such as useritem ID and attributes. Besides the highcardinality issue, the interactions among such categorical variables are usually longtailed, with the head made up of highly frequent values and a long tail of rare ones. This phenomenon results in the data sparsity issue, making it essential to regularize the models to ensure generalization. The common practice is to employ grid search to manually tune regularization hyperparameters based on the validation data. However, it requires nontrivial efforts and large computation resources to search the whole candidate space; even so, it may not lead to the optimal choice, for which different parameters should have different regularization strengths. In this paper, we propose a hyperparameter optimization method, LambdaOpt, which automatically and adaptively enforces regularization during training. Specifically, it updates the regularization coefficients based on the performance of validation data. With LambdaOpt, the notorious tuning of regularization hyperparameters can be avoided; more importantly, it allows finegrained regularization i.e. each parameter can have an individualized regularization coefficient, leading to better generalized models. We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models. Extensive experiments on two public benchmarks demonstrate the superiority of our method in boosting the performance of topK recommendation.
KullbackLeibler DivergenceBased OutofDistribution Detection with FlowBased Generative Models ; Recent research has revealed that deep generative models including flowbased models and Variational Autoencoders may assign higher likelihoods to outofdistribution OOD data than indistribution ID data. However, we cannot sample OOD data from the model. This counterintuitive phenomenon has not been satisfactorily explained and brings obstacles to OOD detection with flowbased models. In this paper, we prove theorems to investigate the KullbackLeibler divergence in flowbased models and give two explanations for the above phenomenon. Based on our theoretical analysis, we propose a new method PADmethod to leverage KL divergence and local pixel dependence of representations to perform anomaly detection. Experimental results on prevalent benchmarks demonstrate the effectiveness and robustness of our method. For group anomaly detection, our method achieves 98.1% AUROC on average with a small batch size of 5. On the contrary, the baseline typicality testbased method only achieves 64.6% AUROC on average due to its failure on challenging problems. Our method also outperforms the stateoftheart method by 9.1% AUROC. For pointwise anomaly detection, our method achieves 90.7% AUROC on average and outperforms the baseline by 5.2% AUROC. Besides, our method has the least notable failures and is the most robust one.
A twofluid model for blackhole accretion flows Particle acceleration, outflows, and TeV emission ; The multiwavelength spectrum observed from M87 extends from radio wavelengths up to TeV gammaray energies. The radio through GeV components have been interpreted successfully using SSC models based on misaligned blazar jets, but the origin of the intense TeV emission detected during flares in 2004, 2005, and 2010 remains puzzling. It has been previously suggested that the TeV flares are produced when a relativistic proton jet originating in the core of M87 collides with a molecular cloud or stellar atmosphere located less than one parsec from the central black hole. We explore this scenario in detail here using a selfconsistent model for the acceleration of relativistic protons in a shocked, twofluid ADAF accretion disc. The relativistic protons accelerated in the disc escape to power the observed jet outflows. The distribution function for the jet protons is used to compute the TeV emission produced when the jet collides with a cloud or stellar atmosphere. The simulated broadband radiation spectrum includes radio, Xray, and GeV components generated via synchrotron, as well as TeV emission generated via the production and decay of muons, positrons, and electrons. The selfconsistency of the model is verified by computing the relativistic particle pressure using the distribution function, and comparing it with the relativistic particle pressure obtained from the hydrodynamical model. We demonstrate that the model is able to reproduce the multiwavelength spectrum from M87 observed by VERITAS and HESS during the highenergy flares in 2004, 2005, and 2010.
A validated energy model of a solar dishStirling system considering the cleanliness of mirrors ; Solar systems based on the coupling of parabolic concentrating collectors and thermal engines i.e. dishStirling systems are among the most efficient generators of solar power currently available. This study focuses on the modelling of functioning data from a 32 kWe dishStirling solar plant installed at a facility test site on the University of Palermo campus, in Southern Italy. The proposed model, based on real monitored data, the energy balance of the collector and the partial load efficiency of the Stirling engine, can be used easily to simulate the annual energy production of such systems, making use of the solar radiation database, with the aim of encouraging a greater commercialisation of this technology. Introducing further simplifying assumptions based on our experimental data, the model can be linearised providing a new analytical expression of the parameters that characterise the widely used Stine empirical model. The model was calibrated against data corresponding to the collector with clean mirrors and used to predict the net electric production of the dishStirling accurately. A numerical method for assessing the daily level of mirror soiling without the use of direct reflectivity measures was also defined. The proposed methodology was used to evaluate the history of mirror soiling for the observation period, which shows a strong correlation with the recorded sequence of rains and dust depositions. The results of this study emphasise how desert dust transport events, frequent occurrences in parts of the Mediterranean, can have a dramatic impact on the electric power generation of dishStirling plants.
JordanWigner Dualities for TranslationInvariant Hamiltonians in Any Dimension Emergent Fermions in Fracton Topological Order ; Inspired by recent developments generalizing JordanWigner dualities to higher dimensions, we develop a framework of such dualities using an algebraic formalism for translationinvariant Hamiltonians proposed by Haah. We prove that given a translationinvariant fermionic system with general qbody interactions, where q is even, a local mapping preserving global fermion parity to a dual Pauli spin model exists and is unique up to a choice of basis. Furthermore, the dual spin model is constructive, and we present various examples of these dualities. As an application, we bosonize fermionic systems where freefermion hopping terms are absent (q ≥ 4) and fermion parity is conserved on submanifolds such as higherform, line, planar or fractal symmetry. For some cases in 3+1D, bosonizing such a system can give rise to fracton models where the emergent particles are immobile but yet can behave in certain ways like fermions. These models may be examples of new nonrelativistic 't Hooft anomalies. Furthermore, fermionic subsystem symmetries are also present in various Majorana stabilizer codes, such as the color code or the checkerboard model, and we give examples where their duals are cluster states or new fracton models distinct from their doubled CSS codes.
Fast and Threerious Speeding Up Weak Supervision with Triplet Methods ; Weak supervision is a popular method for building machine learning models without relying on ground truth annotations. Instead, it generates probabilistic training labels by estimating the accuracies of multiple noisy labeling sources e.g., heuristics, crowd workers. Existing approaches use latent variable estimation to model the noisy sources, but these methods can be computationally expensive, scaling superlinearly in the data. In this work, we show that, for a class of latent variable models highly applicable to weak supervision, we can find a closedform solution to model parameters, obviating the need for iterative solutions like stochastic gradient descent SGD. We use this insight to build FlyingSquid, a weak supervision framework that runs orders of magnitude faster than previous weak supervision approaches and requires fewer assumptions. In particular, we prove bounds on generalization error without assuming that the latent variable model can exactly parameterize the underlying data distribution. Empirically, we validate FlyingSquid on benchmark weak supervision datasets and find that it achieves the same or higher quality compared to previous approaches without the need to tune an SGD procedure, recovers model parameters 170 times faster on average, and enables new video analysis and online learning applications.
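A sketch of the triplet identity that makes such closed-form label models possible: for three conditionally independent labelling functions with values in {-1, +1}, E[l_i l_j] = E[l_i Y] E[l_j Y], so each accuracy can be recovered from pairwise agreement rates without any iterative solver. The labellers and accuracies below are synthetic.

```python
# Triplet identity: with three conditionally independent labellers l_i in {-1, +1},
# E[l_i * l_j] = E[l_i * Y] * E[l_j * Y], so accuracies follow in closed form.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
y = rng.choice([-1, 1], size=n)
true_acc = [0.85, 0.70, 0.60]                                # hypothetical labeller accuracies
L = np.stack([np.where(rng.random(n) < p, y, -y) for p in true_acc], axis=1)

def triplet_accuracies(L):
    m01, m02, m12 = (np.mean(L[:, i] * L[:, j]) for i, j in [(0, 1), (0, 2), (1, 2)])
    a0 = np.sqrt(m01 * m02 / m12)                            # estimate of E[l_0 * Y]
    a1, a2 = m01 / a0, m02 / a0
    return [(a + 1) / 2 for a in (a0, a1, a2)]               # back to P(labeller agrees with Y)

print("estimated:", [round(a, 3) for a in triplet_accuracies(L)], "true:", true_acc)
```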
A Big Data Enabled Channel Model for 5G Wireless Communication Systems ; The standardization process of the fifth generation (5G) wireless communications has recently been accelerated, and the first commercial 5G services would be provided as early as 2018. The increasing number of smartphones, new complex scenarios, large frequency bands, massive antenna elements, and dense small cells will generate big datasets and bring 5G communications to the era of big data. This paper investigates various applications of big data analytics, especially machine learning algorithms, in wireless communications and channel modeling. We propose a big data and machine learning enabled wireless channel model framework. The proposed channel model is based on artificial neural networks (ANNs), including the feedforward neural network (FNN) and radial basis function neural network (RBFNN). The input parameters are transmitter (Tx) and receiver (Rx) coordinates, Tx-Rx distance, and carrier frequency, while the output parameters are channel statistical properties, including the received power, root mean square (RMS) delay spread (DS), and RMS angle spreads (ASs). Datasets used to train and test the ANNs are collected from both real channel measurements and a geometry-based stochastic model (GBSM). Simulation results show good performance and indicate that machine learning algorithms can be powerful analytical tools for future measurement-based wireless channel modeling.
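As a concrete illustration of the FNN branch of such a framework, the sketch below trains a small feed-forward regressor from (Tx/Rx coordinates, distance, carrier frequency) to channel statistics such as received power and RMS delay spread. The network size, features, and synthetic data are illustrative assumptions, not the architecture or measurements used in the paper.

```python
# Hypothetical sketch of an FNN-based channel-statistics predictor.
# Inputs: Tx/Rx coordinates, Tx-Rx distance, carrier frequency.
# Outputs: received power [dBm] and RMS delay spread [ns] (synthetic here).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 2000
tx = rng.uniform(0, 100, (n, 2))
rx = rng.uniform(0, 100, (n, 2))
dist = np.linalg.norm(tx - rx, axis=1, keepdims=True)
freq_ghz = rng.uniform(2.0, 6.0, (n, 1))
X = np.hstack([tx, rx, dist, freq_ghz])

# Toy "ground truth": log-distance path loss + noise, distance-dependent DS
rx_power = -30 - 35 * np.log10(dist + 1) - 2 * freq_ghz + rng.normal(0, 2, (n, 1))
rms_ds = 20 + 0.5 * dist + rng.normal(0, 5, (n, 1))
Y = np.hstack([rx_power, rms_ds])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X[:1500], Y[:1500])
print("held-out R^2:", model.score(X[1500:], Y[1500:]))
```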
Optimal Value of Information in Graphical Models ; Many real-world decision making tasks require us to choose among several expensive observations. In a sensor network, for example, it is important to select the subset of sensors that is expected to provide the strongest reduction in uncertainty. In medical decision making tasks, one needs to select which tests to administer before deciding on the most effective treatment. It has been general practice to use heuristic-guided procedures for selecting observations. In this paper, we present the first efficient optimal algorithms for selecting observations for a class of probabilistic graphical models. For example, our algorithms allow us to optimally label hidden variables in Hidden Markov Models (HMMs). We provide results both for selecting the optimal subset of observations and for obtaining an optimal conditional observation plan. Furthermore, we prove a surprising result: in most graphical model tasks, if one designs an efficient algorithm for chain graphs, such as HMMs, this procedure can be generalized to polytree graphical models. We prove that optimizing value of information is NP^PP-hard even for polytrees. It also follows from our results that just computing decision-theoretic value of information objective functions, which are commonly used in practice, is a #P-complete problem even on Naive Bayes models (a simple special case of polytrees). In addition, we consider several extensions, such as using our algorithms for scheduling observation selection for multiple sensors. We demonstrate the effectiveness of our approach on several real-world datasets, including a prototype sensor network deployment for energy conservation in buildings.
Modeling the stylized facts of wholesale system marginal price (SMP) and the impacts of regulatory reforms on the Greek Electricity Market ; This work presents the results of an empirical research effort with the target of modeling the stylized facts of the daily ex-post System Marginal Price (SMP) of the Greek wholesale electricity market, using data from January 2004 to December 2011. SMP is considered here as the footprint of an underlying stochastic and nonlinear process that bears all the information reflecting not only the effects of changes in endogenous or fundamental factors of the market but also the impacts of a series of regulatory reforms that have continuously changed the market's microstructure. To capture the dynamics of the conditional mean and volatility of SMP that generate the stylized facts (mean reversion, price spikes, fat-tailed price distributions, etc.), a number of ARMAX-GARCH models have been estimated using as regressors an extensive set of fundamental factors in the Greek electricity market as well as dummy variables that mimic the history of the Regulator's interventions. The findings show that changes in the microstructure of the market caused by the reforms have strongly affected the dynamic evolution of SMP and that the best found model captures adequately the stylized facts of the series that other electricity and financial markets share. The dynamics of the conditional volatility generated by the model can be extremely useful in the efforts that are under way towards market restructuring, so that the Greek market becomes more compatible with the requirements of the European Target Model.
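A minimal way to reproduce this kind of conditional mean/volatility modelling in code is an AR-GARCH fit with the Python `arch` package, as sketched below. The lag orders, the omission of the fundamental regressors and reform dummies, and the synthetic price series are simplifying assumptions relative to the full ARMAX-GARCH specification in the paper.

```python
# Simplified AR(1)-GARCH(1,1) sketch for a daily price-like series.
# The paper's ARMAX-GARCH models also include fundamental regressors and
# regulatory-reform dummy variables, which are omitted here for brevity.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
n = 1500
eps = rng.standard_t(df=5, size=n)           # fat-tailed shocks
smp = np.empty(n)
smp[0] = 60.0
for t in range(1, n):                        # mean-reverting toy SMP series
    smp[t] = 60.0 + 0.8 * (smp[t - 1] - 60.0) + 3.0 * eps[t]

am = arch_model(smp, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())
print(res.conditional_volatility[-5:])       # fitted conditional volatility
```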
A contact model for sticking of adhesive mesoparticles ; The interaction between visco-elasto-plastic and adhesive particles is the subject of this study, where mesoparticles are introduced, i.e., simplified particles whose contact mechanics is not taken into account in all its details. A few examples of mesoparticles include agglomerates or groups of primary particles, or inhomogeneous particles with microstructures of the scale of the contact deformation, such as core-shell materials. A simple, flexible contact model for mesoparticles is proposed, which allows one to model the bulk behavior of assemblies of many particles in both rapid and slow, quasi-static flow situations. An attempt is made to categorize existing contact models for the normal force, discuss all the essential mechanical ingredients that must enter the model qualitatively, and finally solve it analytically. The model combines a short-ranged, non-contact part (resembling either dry or wet materials) with an elaborate, visco-elasto-plastic and adhesive contact law. Using energy conservation arguments, an analytical expression for the coefficient of restitution is derived in terms of the impact velocity for pair interactions or, equivalently, without loss of generality, for quasi-static situations in terms of the maximum overlap or confining stress. Adhesive particles or mesoparticles stick to each other at very low impact velocity, while they rebound less dissipatively with increasing velocity, in agreement with previous studies. For even higher impact velocities an interesting second sticking and rebound regime is reported. The low-velocity sticking is due to non-contact adhesive forces, the first rebound regime is due to stronger elastic and kinetic energies with little dissipation, while the high-velocity sticking is generated by the nonlinearly increasing, history-dependent plastic dissipation and adhesive contact force.
Thermodynamics of the Variable Modified Chaplygin gas ; A cosmological model with a new variant of Chaplygin gas obeying an equation of state (EoS), $P = A\rho - \frac{B}{\rho^{\alpha}}$ where $B = B_0 a^{n}$, is investigated in the context of its thermodynamical behaviour. Here $B_0$ and $n$ are constants and $a$ is the scale factor. We show that the equation of state of this 'Variable Modified Chaplygin gas' (VMCG) can describe the current accelerated expansion of the universe. Following standard thermodynamical criteria we mainly discuss the classical thermodynamical stability of the model and find that the new parameter $n$ introduced in VMCG plays a crucial role in the stability considerations and should always be negative. We further observe that although the earlier model of Lu explains many of the current observational findings of different probes, it fails the desirable tests of thermodynamical stability. We also note that for $n < 0$ our model points to a phantom type of expansion which, however, is found to be compatible with current SNe Ia observations and CMB anisotropy measurements. Further, the third law of thermodynamics is obeyed in our case. Our model is very general in the sense that many earlier works in this field may be obtained as special cases of our solution. An interesting point to note is that the model also apparently suggests a smooth transition from the big bang to the big rip over its whole evolution.
Three-Phase Dynamic Simulation of Power Systems Using Combined Transmission and Distribution System Models ; This paper presents a new method for studying electromechanical transients in power systems using three-phase, combined transmission and distribution models (hybrid models). The methodology models individual phases of an electric network and the associated unbalance in load and generation. Therefore, the impacts of load unbalance, single-phase distributed generation and line impedance unbalance on electromechanical transients can be studied without using electromagnetic transient simulation (EMTP) programs. The implementation of this methodology in software is called the Three-Phase Dynamics Analyzer (TPDA). Case studies included in the paper demonstrate the accuracy of TPDA and its ability to simulate electromechanical transients in hybrid models. TPDA has the potential for providing electric utilities and power system planners with more accurate assessment of system stability than traditional dynamic simulation software that assumes balanced network topology.
Singular prior distributions and ill-conditioning in Bayesian D-optimal design for several nonlinear models ; For Bayesian D-optimal design, we define a singular prior distribution for the model parameters as a prior distribution such that the determinant of the Fisher information matrix has a prior geometric mean of zero for all designs. For such a prior distribution, the Bayesian D-optimality criterion fails to select a design. For the exponential decay model, we characterize singularity of the prior distribution in terms of the expectations of a few elementary transformations of the parameter. For a compartmental model and several multi-parameter generalized linear models, we establish sufficient conditions for singularity of a prior distribution. For the generalized linear models we also obtain sufficient conditions for non-singularity. In the existing literature, weakly informative prior distributions are commonly recommended as a default choice for inference in logistic regression. Here it is shown that some of the recommended prior distributions are singular, and hence should not be used for Bayesian D-optimal design. Additionally, methods are developed to derive and assess Bayesian D-efficient designs when numerical evaluation of the objective function fails due to ill-conditioning, as often occurs for heavy-tailed prior distributions. These numerical methods are illustrated for logistic regression.
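To make the criterion concrete, a Monte Carlo version of the Bayesian D-criterion for logistic regression averages the log-determinant of the Fisher information over prior draws; a singular prior is then one for which this average diverges to minus infinity for every design. The sketch below does this for a one-covariate model with an assumed normal prior and an assumed four-point design, purely as an illustration and not as the paper's derivations.

```python
# Monte Carlo Bayesian D-criterion sketch for logistic regression
# with linear predictor eta = b0 + b1 * x. Prior and design are assumptions.
import numpy as np

def bayes_d_criterion(design_x, prior_draws):
    """Average log det of the Fisher information over prior draws."""
    X = np.column_stack([np.ones_like(design_x), design_x])   # (n, 2)
    vals = []
    for b in prior_draws:                                      # b = (b0, b1)
        p = 1.0 / (1.0 + np.exp(-X @ b))
        w = p * (1.0 - p)                                      # logistic weights
        info = X.T @ (w[:, None] * X)                          # 2x2 Fisher info
        sign, logdet = np.linalg.slogdet(info)
        vals.append(logdet if sign > 0 else -np.inf)
    return np.mean(vals)

rng = np.random.default_rng(3)
prior = rng.normal([0.0, 1.0], [1.0, 1.0], size=(5000, 2))     # assumed prior
design = np.array([-2.0, -0.5, 0.5, 2.0])                      # candidate design
print(bayes_d_criterion(design, prior))
```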
Dot-Product Join: An Array-Relation Join Operator for Big Model Analytics ; Big Model analytics tackles the training of massive models that go beyond the available memory of a single computing device, e.g., CPU or GPU. It generalizes Big Data analytics, which is targeted at how to train memory-resident models over out-of-memory training data. In this paper, we propose an in-database solution for Big Model analytics. We identify the dot-product as the primary operation for training generalized linear models and introduce the first array-relation dot-product join database operator between a set of sparse arrays and a dense relation. This is a constrained formulation of the extensively studied sparse matrix-vector multiplication (SpMV) kernel. The paramount challenge in designing the dot-product join operator is how to optimally schedule access to the dense relation based on the non-contiguous entries in the sparse arrays. We prove that this problem is NP-hard and propose a practical solution characterized by two technical contributions: dynamic batch processing and array reordering. We devise three heuristics (LSH, Radix, and K-center) for array reordering and analyze them thoroughly. We execute extensive experiments over synthetic and real data that confirm the minimal overhead the operator incurs when sufficient memory is available and the graceful degradation it suffers as memory becomes scarce. Moreover, dot-product join achieves an order of magnitude reduction in execution time over alternative in-database solutions.
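The core computation the operator schedules is a constrained SpMV: dot products between sparse example arrays and a dense model relation that may not fit in memory. The sketch below shows only the logical computation with SciPy sparse rows against a dense array, plus a crude per-batch count of which model entries are touched; the scheduling, dynamic batching, and reordering heuristics that constitute the paper's contribution are not reproduced, and the sizes are illustrative.

```python
# Logical core of a dot-product join: dot products between sparse example rows
# and a dense model vector. The operator's actual contribution (scheduling
# access to the dense side via batching and array reordering) is not shown.
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(4)
n_examples, n_features = 10_000, 50_000
X = sparse_random(n_examples, n_features, density=1e-4,
                  format="csr", random_state=4)     # sparse "array" side
model = rng.normal(size=n_features)                 # dense "relation" side

batch = 1_000
dots = np.empty(n_examples)
touched_fraction = []
for start in range(0, n_examples, batch):
    rows = X[start:start + batch]
    needed = np.unique(rows.indices)                # model entries this batch reads
    touched_fraction.append(needed.size / n_features)
    dots[start:start + batch] = rows @ model        # one dot-product per row
print(dots[:5], np.mean(touched_fraction))
```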
CD-SFA: Stochastic Frontier Analysis Approach to Revenue Modeling in Large Cloud Data Centers ; Enterprises are investing heavily in cloud data centers to meet the ever-surging business demand. A data center is a facility which houses computer systems and associated components, such as telecommunications and storage systems. It generally includes power supply equipment, communication connections and cooling equipment. A large data center can use as much electricity as a small town. Due to the emergence of data-center-based computing services, it has become necessary to examine how the costs associated with data centers evolve over time, mainly in view of efficiency issues. We have presented a quasi form of the Cobb-Douglas model, which addresses revenue and profit issues in running large data centers. The stochastic form has been introduced and explored along with the quasi Cobb-Douglas model to understand the behavior of the model in depth. Harrod neutrality and Solow neutrality are incorporated in the model to identify the technological progress in cloud data centers. This allows us to shed light on the stochastic uncertainty of cloud data center operations. A general approach to optimizing the revenue and cost of data centers using Cobb-Douglas Stochastic Frontier Analysis (CD-SFA) is presented. Next, we develop the optimization model for large data centers. The mathematical basis of CD-SFA has been utilized for cost optimization and profit maximization in data centers. The results are found to be quite useful in view of production reorganization in large data centers around the world.
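For readers unfamiliar with the building blocks, the standard Cobb-Douglas stochastic frontier takes the log-linear form below. This is the textbook SFA specification that the paper's quasi Cobb-Douglas model builds on, not the paper's exact formulation.

```latex
% Textbook Cobb-Douglas stochastic frontier (shown for context):
% output Y_i from inputs K_i, L_i, with noise v_i and inefficiency u_i >= 0.
\begin{equation}
  \ln Y_i \;=\; \beta_0 + \beta_K \ln K_i + \beta_L \ln L_i + v_i - u_i,
  \qquad v_i \sim \mathcal{N}(0,\sigma_v^2), \quad u_i \ge 0 .
\end{equation}
```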
Integrable Floquet dynamics ; We discuss several classes of integrable Floquet systems, i.e. systems which do not exhibit chaotic behavior even under a time-dependent perturbation. The first class is associated with finite-dimensional Lie groups and infinite-dimensional generalizations thereof. The second class is related to the row transfer matrices of 2D statistical mechanics models. The third class of models, called here boost models, is constructed as a periodic interchange of two Hamiltonians: one is the integrable lattice model Hamiltonian, while the second is the boost operator. The latter, for known cases, coincides with the entanglement Hamiltonian and is closely related to the corner transfer matrix of the corresponding 2D statistical models. We present several explicit examples. As an interesting application of the boost models we discuss the possibility of generating periodically oscillating states with a period different from that of the driving field. In particular, one can realize an oscillating state by performing a static quench to a boost operator. We term this state a Quantum Boost Clock. All analyzed setups can be readily realized experimentally, for example in cold atoms.
End-to-End Dense Video Captioning with Masked Transformer ; Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal model and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description on the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of an efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on the ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR scores, respectively.
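The key trick is turning a predicted event proposal into a differentiable temporal mask over the encoder features, so gradients from the captioning loss can reach the proposal module. The sketch below shows one generic way to build such a soft mask from predicted start/end times using sigmoids; the exact masking network in the paper differs, so treat this as an illustrative stand-in.

```python
# Generic differentiable temporal mask: a proposal (start, end) becomes a soft
# gate over T encoder timesteps, so captioning loss can influence proposals.
# This is an illustrative construction, not the paper's exact masking network.
import numpy as np

def soft_temporal_mask(start, end, num_steps, sharpness=10.0):
    """Soft indicator of t in [start, end], differentiable in start/end."""
    t = np.linspace(0.0, 1.0, num_steps)                 # normalized timeline
    rise = 1.0 / (1.0 + np.exp(-sharpness * (t - start)))
    fall = 1.0 / (1.0 + np.exp(-sharpness * (end - t)))
    return rise * fall                                    # ≈1 inside, ≈0 outside

T, d = 100, 8
features = np.random.default_rng(5).normal(size=(T, d))  # encoder outputs
mask = soft_temporal_mask(start=0.3, end=0.6, num_steps=T)
masked_features = mask[:, None] * features               # attended only in-event
print(mask.round(2)[::10])
```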
Cosmological constraints on an exponential interaction in the dark sector ; Cosmological models where dark matter (DM) and dark energy (DE) interact with each other are more general scenarios compared to the non-interacting models. The interaction is usually motivated on phenomenological grounds, and thus there is no definite rule to prefer a particular interaction between DM and DE. Thus motivated, in this work, allowing an exponential interaction between DM and DE in a spatially flat, homogeneous and isotropic universe, we explore the dynamics of the universe through the constraints on the free parameters, where the strength of the interaction is characterized by the dimensionless coupling parameter $\xi$ and the equation of state (EoS) for DE, $w_x$, is assumed to be a constant. The interaction scenario is fitted using the latest available observational data. Our analyses report that the observational data permit a nonzero value of $\xi$, but it is very small and consistent with $\xi = 0$. From the constraints on $w_x$, we find that both phantom ($w_x < -1$) and quintessence ($w_x > -1$) regimes are equally allowed, but $w_x$ is very close to $-1$. The overall results indicate that at the background level the interaction model cannot be distinguished from the base $\Lambda$ cold dark matter model, while from the perturbative analyses the interaction model mildly deviates from the base model. We highlight that, even if we allow DM and DE to interact in an exponential manner, according to the observational data the evidence for a nonzero coupling is very small.
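For context, in such interacting scenarios the background continuity equations of DM and DE exchange energy through a coupling term $Q$; the standard form is shown below, with the specific exponential form of $Q$ left unspecified since it is particular to the paper.

```latex
% Standard background equations for an interacting DM-DE scenario
% (the paper's specific exponential form of Q is not reproduced here):
\begin{align}
  \dot{\rho}_{\rm dm} + 3 H \rho_{\rm dm} &= Q, \\
  \dot{\rho}_{\rm de} + 3 H (1 + w_x)\,\rho_{\rm de} &= -Q,
\end{align}
% so that Q > 0 transfers energy from dark energy to dark matter.
```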
Robust Markov Decision Process: Beyond Rectangularity ; We consider a robust approach to address uncertainty in model parameters in Markov Decision Processes (MDPs), which are widely used to model dynamic optimization in many applications. Most prior works consider the case where the uncertainty on transitions related to different states is uncoupled and the adversary is allowed to select the worst possible realization for each state unrelated to others, potentially leading to highly conservative solutions. On the other hand, the case of general uncertainty sets is known to be intractable. We consider a factor model for probability transitions where the transition probability is a linear function of a factor matrix that is uncertain and belongs to a factor matrix uncertainty set. This is a fairly general model of uncertainty in probability transitions, allowing the decision maker to model dependence between probability transitions across different states, and it is significantly less conservative than prior approaches. We show that under an underlying rectangularity assumption, we can efficiently compute an optimal robust policy under the factor matrix uncertainty model. Furthermore, we show that there is an optimal robust policy that is deterministic, which is of interest from an interpretability standpoint. We also introduce the robust counterpart of important structural results of classical MDPs, including the maximum principle and Blackwell optimality, and we provide a computational study to demonstrate the effectiveness of our approach in mitigating the conservativeness of robust policies.
A two-stage stochastic approach for the asset protection problem during escaped wildfires with uncertain timing of a wind change ; Wildfires are natural disasters capable of damaging economies and communities. When wildfires become uncontrollable, Incident Management Teams (IMTs) dispatch response vehicles to key assets to undertake protective tasks and so mitigate the risk to these assets. In developing a deployment plan under severe time pressure, IMTs need to consider the special requirements of each asset, the resources (vehicles and their teams), as well as uncertainties associated with the wildfire. A common situation that arises in southern Australian wildfires is a wind change: there is a reliable forecast of a wind change, but some uncertainty around the timing of that change. To assist IMTs in dealing with this situation we develop a two-stage stochastic model to integrate such an uncertainty with the complexities of asset protection operations. This is the first time a mathematical model has been proposed that considers uncertainty in the timing of a scenario change. The model is implemented for a case study that uses the context of the 2009 Black Saturday bushfires in Victoria. A new set of benchmark instances is generated using realistic wildfire attributes to test the computational tractability of our model, and the results are compared to a dynamic rerouting approach. The computations reveal that, compared with dynamic rerouting, the new model can generate better deployment plans. The model can achieve solutions in operational time for realistic-sized problems, although for larger problems the suboptimal rerouting algorithm would still need to be deployed.
Typeface Completion with Generative Adversarial Networks ; The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input and automatically completes the entire set of characters in the same style as the input character. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high-quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which are paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.
Real-time simulation of large-scale HTS systems: multi-scale and homogeneous models using the T-A formulation ; The emergence of second-generation high temperature superconducting tapes has favored the development of large-scale superconductor systems. The mathematical models capable of estimating electromagnetic quantities in superconductors have evolved from simple analytical models to complex numerical models. The available analytical models are limited to the analysis of single wires or infinite arrays that, in general, do not represent real devices in real applications. The numerical models based on the finite element method using the H formulation of Maxwell's equations are useful for the analysis of medium-size systems, but their application in large-scale systems is problematic due to the excessive computational cost in terms of memory and computation time. It is therefore necessary to devise new strategies to make the computation more efficient. The homogenization and multi-scale methods have successfully simplified the description of the systems, allowing the study of large-scale systems. Also, efficient calculations have been achieved using the T-A formulation. In the present work, we propose a series of adaptations to the multi-scale and homogenization methods so that they can be efficiently used in conjunction with the T-A formulation to compute the distribution of current density and hysteresis losses in the superconducting layer of superconducting tapes. The computation time and the amount of memory are substantially reduced, to the point that it is possible to achieve real-time simulations of HTS large-scale systems under slow ramping cycles of practical importance on personal computers.
Penalized estimation of flexible hidden Markov models for time series of counts ; Hidden Markov models are versatile tools for modeling sequential observations, where it is assumed that a hidden state process selects which of finitely many distributions generates any given observation. Specifically for time series of counts, the Poisson family often provides a natural choice for the state-dependent distributions, though more flexible distributions such as the negative binomial or distributions with a bounded range can also be used. However, in practice, choosing an adequate class of parametric distributions is often anything but straightforward, and an inadequate choice can have severe negative consequences on the model's predictive performance, on state classification, and generally on inference related to the system considered. To address this issue, we propose an effectively nonparametric approach to fitting hidden Markov models to time series of counts, where the state-dependent distributions are estimated in a completely data-driven way without the need to select a distributional family. To avoid overfitting, we add a roughness penalty based on higher-order differences between adjacent count probabilities to the likelihood, which is demonstrated to produce smooth probability mass functions of the state-dependent distributions. The feasibility of the suggested approach is assessed in a simulation experiment, and illustrated in two real-data applications, where we model the distribution of (i) major earthquake counts and (ii) acceleration counts of an oceanic whitetip shark (Carcharhinus longimanus) over time.
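The penalty at the heart of the approach is easy to state in code: for each state's probability mass function, higher-order differences of adjacent count probabilities are squared, summed, scaled by a smoothing parameter, and subtracted from the log-likelihood. The sketch below evaluates that penalized objective for fixed state-dependent PMFs using a standard HMM forward pass; the tuning constants, state count, and toy data are illustrative assumptions.

```python
# Sketch of the penalized log-likelihood for a count HMM with nonparametric
# state-dependent PMFs P[j, x] (rows sum to 1 over the support 0..max_count).
# Roughness penalty: sum of squared m-th order differences of each PMF.
import numpy as np

def forward_loglik(counts, Gamma, delta, P):
    """Standard (scaled) HMM forward algorithm for observed counts."""
    ll = 0.0
    phi = delta * P[:, counts[0]]
    for x in counts[1:]:
        c = phi.sum(); ll += np.log(c); phi = phi / c
        phi = (phi @ Gamma) * P[:, x]
    return ll + np.log(phi.sum())

def penalized_loglik(counts, Gamma, delta, P, lam=10.0, order=3):
    rough = sum(np.sum(np.diff(P[j], n=order) ** 2) for j in range(P.shape[0]))
    return forward_loglik(counts, Gamma, delta, P) - lam * rough

# Toy example: 2 states, count support 0..20
rng = np.random.default_rng(6)
P = rng.dirichlet(np.ones(21), size=2)             # unsmoothed PMFs
Gamma = np.array([[0.95, 0.05], [0.10, 0.90]])     # transition matrix
delta = np.array([0.5, 0.5])                       # initial distribution
rates = np.where(rng.random(200) < 0.5, 3.0, 12.0)
counts = np.minimum(rng.poisson(rates), 20)        # fake count series
print(penalized_loglik(counts, Gamma, delta, P))
```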
Estimating Buildings' Parameters over Time Including Prior Knowledge ; Modeling buildings' heat dynamics is a complex process which depends on various factors including weather, building thermal capacity, insulation preservation, and residents' behavior. Gray-box models offer a causal inference of those dynamics, expressed in a few parameters specific to built environments. These parameters can provide compelling insights into the characteristics of building artifacts and have various applications such as forecasting HVAC usage, indoor temperature control, monitoring of built environments, etc. In this paper, we present a systematic study of modeling buildings' thermal characteristics and thus derive the parameters of built conditions with a Bayesian approach. We build a Bayesian state-space model that can adapt and incorporate buildings' thermal equations and propose a generalized solution that can easily incorporate prior knowledge regarding the parameters. We show that a faster approximate approach using variational inference for parameter estimation can provide similar parameters as a more time-consuming Markov chain Monte Carlo (MCMC) approach. We perform extensive evaluations on two datasets to understand the generative process and show that the Bayesian approach is more interpretable. We further study the effects of prior selection for the model parameters and of transfer learning, where we learn parameters from one season and use them to fit the model in the other. We perform extensive evaluations on controlled and real data traces to estimate buildings' parameters within a 95% credible interval.
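A common gray-box starting point for such thermal equations is a first-order RC (1R1C) model of indoor temperature driven by outdoor temperature and heating input. The sketch below simulates one and recovers R and C by least squares, as a simplified, non-Bayesian stand-in for the state-space estimation described in the paper; all parameter values are made up.

```python
# Simplified 1R1C gray-box building model (illustrative, non-Bayesian):
#   C * dT_in/dt = (T_out - T_in) / R + Q_heat
# Discretized with timestep dt, then R and C recovered by least squares.
import numpy as np

rng = np.random.default_rng(7)
dt, n = 300.0, 2000                      # 5-minute steps
R_true, C_true = 0.005, 2.0e6            # K/W and J/K (made-up values)
T_out = 5.0 + 5.0 * np.sin(np.arange(n) * dt * 2 * np.pi / 86400)
Q = 2000.0 * (rng.random(n) < 0.4)       # heater on/off, W

T_in = np.empty(n); T_in[0] = 20.0
for t in range(n - 1):
    dTdt = ((T_out[t] - T_in[t]) / R_true + Q[t]) / C_true
    T_in[t + 1] = T_in[t] + dt * dTdt + rng.normal(0, 0.01)

# Regression: (T_in[t+1]-T_in[t])/dt = a*(T_out-T_in) + b*Q, with a=1/(RC), b=1/C
y = np.diff(T_in) / dt
X = np.column_stack([(T_out - T_in)[:-1], Q[:-1]])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print("R_hat:", b / a, "C_hat:", 1.0 / b)   # from a = 1/(RC) and b = 1/C
```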
CEM03.03 and LAQGSM03.03 Event Generators for the MCNP6, MCNPX, and MARS15 Transport Codes ; A description of the Intra-Nuclear Cascade (INC), pre-equilibrium, evaporation, fission, coalescence, and Fermi breakup models used by the latest versions of our CEM03.03 and LAQGSM03.03 event generators is presented, with a focus on our most recent developments of these models. The recently developed S and G versions of our codes, which consider multifragmentation of nuclei formed after the pre-equilibrium stage of reactions when their excitation energy is above 2A MeV, using the Statistical Multifragmentation Model (SMM) code by Botvina et al. (S stands for SMM) and the fission-like binary-decay model GEMINI by Charity (G stands for GEMINI), respectively, are briefly described as well. Examples of benchmarking our models against a large variety of experimental data on particle-particle, particle-nucleus, and nucleus-nucleus reactions are presented. Open questions on reaction mechanisms and future necessary work are outlined.
A Classical Analogue to the Standard Model, Chapter 3: Standard Model particle spectrum from scalar fields on $\mathbb{C}^{\wedge 18}$ ; The $\mathbb{C}^{\wedge 2\mathfrak{n}}$ models are analogue models which generate Lagrangians for quasiparticles on $\mathbb{R}^{1,3}$ from antisymmetric vector products on Grassmann manifolds. This paper introduces $\mathbb{C}^{\wedge 18}$, the smallest member of this series which is capable of hosting a quasiparticle spectrum analogous to the Standard Model. Once all gaugeable degrees of freedom have been fixed, the particle spectrum of $\mathbb{C}^{\wedge 18}$ is seen to resemble the Standard Model plus two additional weakly interacting bosons and a ninth gluon.
Deducing the three gauge interactions from the three Reidemeister moves ; Possibly the first argument for the origin of the three observed gauge groups, and thus for the origin of the three non-gravitational interactions, is presented. The argument is based on a proposal for the final theory that models nature at Planck scales as a collection of featureless strands that fluctuate in three dimensions. This approach models the vacuum as untangled strands, particles as tangles of strands, and Planck units as crossing switches. Modelling the vacuum as untangled strands implies the field equations of general relativity, when applying an argument from 1995 to the thermodynamics of such strands. Modelling fermions as tangles of two or more strands allows one to define wave functions as time-averages of oriented strand crossing density. Using an argument from 1980, this allows one to deduce the Dirac equation. When fermions are modelled as tangled strands, gauge interactions appear naturally as deformations of tangle cores. The three possible types of observable core deformations are given by the three Reidemeister moves. They naturally lead to a U(1), a broken and parity-violating SU(2), and an SU(3) gauge group. The corresponding Lagrangians also appear naturally. The strand model is unique, is unmodifiable, is consistent with all known data, and makes numerous testable predictions, including the absence of other interactions, of grand unification, of supersymmetry and of higher dimensions. A method for calculating coupling constants also seems to appear naturally.
Temperature Structure and Atmospheric Circulation of Dry, Tidally Locked Rocky Exoplanets ; Next-generation space telescopes will observe the atmospheres of rocky planets orbiting nearby M-dwarfs. Understanding these observations will require well-developed theory in addition to numerical simulations. Here we present theoretical models for the temperature structure and atmospheric circulation of dry, tidally locked rocky exoplanets with grey radiative transfer and test them using a general circulation model (GCM). First, we develop a radiative-convective model that captures surface temperatures of slowly rotating and cool atmospheres. Second, we show that the atmospheric circulation acts as a global heat engine, which places strong constraints on large-scale wind speeds. Third, we develop a radiative-convective-subsiding model which extends our radiative-convective model to hot and thin atmospheres. We find that rocky planets develop large day-night temperature gradients at a ratio of wave-to-radiative timescales up to two orders of magnitude smaller than the value suggested by work on hot Jupiters. The small ratio is due to the heat engine inefficiency and asymmetry between updrafts and subsidence in convecting atmospheres. Fourth, we show using GCM simulations that rotation only has a strong effect on temperature structure if the atmosphere is hot or thin. Our models let us map out atmospheric scenarios for planets such as GJ 1132b and show how thermal phase curves could constrain them. Measuring phase curves of short-period planets will require similar amounts of time on the James Webb Space Telescope as detecting molecules via transit spectroscopy, so future observations should pursue both techniques.
Latent Tree Models for Hierarchical Topic Detection ; We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and the document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies.