An evaluation of pretrained models for feature extraction in image classification ; In recent years, we have witnessed a considerable increase in performance on image classification tasks. This performance improvement is mainly due to the adoption of deep learning techniques. Generally, deep learning techniques demand a large set of annotated data, which becomes a challenge when applying them to small datasets. In this scenario, transfer learning strategies have become a promising alternative to overcome these issues. This work compares the performance of different pretrained neural networks for feature extraction in image classification tasks. We evaluated 16 different pretrained models on four image datasets. Our results show that the best overall performance across the datasets was achieved by CLIP-ViT-B and ViT-H-14, while the CLIP-ResNet50 model reached similar performance with less variability. Our study therefore provides evidence supporting the choice of models for feature extraction in image classification tasks.
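The pipeline described above (a frozen pretrained backbone used purely as a feature extractor, with a lightweight classifier trained on top) can be sketched as follows. This is a minimal illustration assuming a torchvision ResNet-50; the backbone choice, feature dimension and classifier are placeholders, not the authors' exact setup.

```python
# Minimal sketch: frozen pretrained backbone as a feature extractor,
# followed by a simple linear classifier trained on the extracted features.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the classification head, keep 2048-d features
backbone.eval()                      # frozen: the backbone is not fine-tuned

@torch.no_grad()
def extract_features(images):        # images: (N, 3, 224, 224), already normalized
    return backbone(images)          # -> (N, 2048) feature vectors

classifier = nn.Linear(2048, 10)     # 10 classes, illustrative
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    feats = extract_features(images)
    loss = loss_fn(classifier(feats), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Swapping `models.resnet50` for a CLIP or ViT backbone changes only the feature dimension; the rest of the pipeline stays the same.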
Prompting Audios Using Acoustic Properties For Emotion Representation ; Emotions lie on a continuum, but current models treat emotion as a finite-valued discrete variable. This representation does not capture the diversity in the expression of emotion. To better represent emotions, we propose the use of natural language descriptions, or prompts. In this work, we address the challenge of automatically generating these prompts and training a model to better learn emotion representations from audio-prompt pairs. We use acoustic properties that are correlated with emotion, such as pitch, intensity, speech rate, and articulation rate, to automatically generate prompts, i.e. 'acoustic prompts'. We use a contrastive learning objective to map speech to its respective acoustic prompts. We evaluate our model on Emotion Audio Retrieval (EAR) and Speech Emotion Recognition (SER). Our results show that the acoustic prompts significantly improve the model's performance in EAR across various Precision@K metrics. In SER, we observe a 3.8% relative accuracy improvement on the RAVDESS dataset.
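The prompt-generation step can be illustrated with a small sketch: measured acoustic properties are binned into qualitative levels and slotted into a natural-language template. The bin edges and the wording of the template below are illustrative placeholders, not the paper's exact prompt construction.

```python
# Illustrative sketch: turn measured acoustic properties into an "acoustic prompt".
def acoustic_prompt(pitch_hz, intensity_db, speech_rate_sps, articulation_rate_sps):
    def level(x, low, high):
        return "low" if x < low else "high" if x > high else "medium"
    return (f"a speaker with {level(pitch_hz, 120, 220)} pitch, "
            f"{level(intensity_db, 55, 70)} intensity, "
            f"{level(speech_rate_sps, 3.5, 5.0)} speech rate and "
            f"{level(articulation_rate_sps, 4.0, 6.0)} articulation rate")

print(acoustic_prompt(250, 72, 5.4, 6.3))
# -> "a speaker with high pitch, high intensity, high speech rate and high articulation rate"
```

A contrastive (CLIP-style) objective would then pull each audio embedding towards the embedding of its own prompt and away from the prompts of other utterances in the batch.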
Incorporating Target Vehicle Trajectories Predicted by Deep Learning Into Model Predictive Controlled Vehicles ; Model Predictive Control (MPC) has been widely applied to the motion planning of autonomous vehicles. An MPC-controlled vehicle is required to predict its own trajectories over a finite prediction horizon according to its model. Beyond this, the vehicle should also incorporate the predicted trajectories of its nearby vehicles, or target vehicles (TVs), into its decision-making. Conventional trajectory prediction methods, such as constant-speed-based ones, are too simplistic to accurately capture the potential collision risks. In this report, we propose a novel MPC-based motion planning method for an autonomous vehicle with a set of risk-aware constraints. These constraints incorporate the predicted trajectory of a TV learned using a deep-learning-based method. A recurrent neural network (RNN) is used to predict the TV's future trajectory based on its historical data. The predicted TV trajectory is then incorporated into the optimization of the ego vehicle's MPC to generate collision-free motion. Simulation studies are conducted to showcase the prediction accuracy of the RNN model and the collision-free trajectories generated by the MPC.
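The prediction-plus-constraint pipeline can be sketched as follows. This is a minimal illustration only: the GRU architecture, horizon length and safety margin are assumptions, and the MPC solver itself (which would consume the constraint values) is omitted.

```python
# Sketch: an RNN predicts the target vehicle's (TV) future positions from its history,
# and the predictions enter the ego vehicle's MPC as collision-avoidance constraints.
import torch
import torch.nn as nn

class TVPredictor(nn.Module):
    def __init__(self, horizon=20, hidden=64):
        super().__init__()
        self.horizon = horizon
        self.gru = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)

    def forward(self, history):                    # history: (B, T_past, 2) x-y positions
        _, h = self.gru(history)
        return self.head(h[-1]).view(-1, self.horizon, 2)   # (B, horizon, 2) predicted positions

def collision_constraints(ego_plan, tv_pred, d_safe=3.0):
    """Risk-aware constraints g_k <= 0: stay at least d_safe metres away from the
    predicted TV position at every step of the horizon."""
    dist = torch.linalg.norm(ego_plan - tv_pred, dim=-1)    # (horizon,)
    return d_safe - dist                                     # feasible iff all entries <= 0
```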
Cosmological Surprises from Braneworld models of Dark Energy ; Properties of Braneworld models of dark energy are reviewed. Braneworld models admit the following interesting possibilities: (i) The effective equation of state can be w ≤ -1 as well as w ≥ -1. In the former case the expansion of the universe is well behaved at all times and the universe does not run into a future 'Big Rip' singularity, which is usually encountered by Phantom models. (ii) For a class of Braneworld models the acceleration of the universe can be a transient phenomenon. In this case the current acceleration of the universe is sandwiched between two matter-dominated epochs. Such a braneworld does not have a horizon, in contrast to LCDM and most Quintessence models. (iii) For a specific set of parameter values the universe can either originate from, or end its existence in, a Quiescent singularity, at which the density, pressure and Hubble parameter remain finite, while the deceleration parameter and all invariants of the Riemann tensor diverge to infinity within a finite interval of cosmic time. (iv) Braneworld models of dark energy can loiter at high redshifts (6 ≲ z ≲ 40). The Hubble parameter decreases during the loitering epoch relative to its value in LCDM. As a result the age of the universe at loitering dramatically increases, and this is expected to boost the formation of high-redshift gravitationally bound systems such as 10^9 M_sun black holes at z ~ 6 and lower-mass black holes and/or Population III stars at z > 10, whose existence could be problematic within the LCDM scenario. (v) Braneworld models with a timelike extra dimension bounce at early times, thereby avoiding the initial 'Big Bang' singularity. (vi) Both Inflation and Dark Energy can be successfully unified within a single scheme (Quintessential Inflation).
A classification of spherically symmetric self-similar dust models ; We classify all spherically symmetric dust solutions of Einstein's equations which are self-similar in the sense that all dimensionless variables depend only upon z ≡ r/t. We show that the equations can be reduced to a special case of the general perfect fluid models with equation of state p = αμ. The most general dust solution can be written down explicitly and is described by two parameters. The first one (E) corresponds to the asymptotic energy at large |z|, while the second one (D) specifies the value of z at the singularity which characterizes such models. The E = D = 0 solution is just the flat Friedmann model. The 1-parameter family of solutions with z > 0 and D = 0 are inhomogeneous cosmological models which expand from a Big Bang singularity at t = 0 and are asymptotically Friedmann at large z; models with E > 0 are everywhere underdense relative to Friedmann and expand forever, while those with E < 0 are everywhere overdense and recollapse to a black hole containing another singularity. The black hole always has an apparent horizon but need not have an event horizon. The D = 0 solutions with z < 0 are just the time reverse of the z > 0 ones. The 2-parameter solutions with D > 0 again represent inhomogeneous cosmological models, but the Big Bang singularity is at z = -1/D, the Big Crunch singularity is at z = +1/D, and any particular solution necessarily spans both z > 0 and z < 0. While there is no static model in the dust case, all these solutions are asymptotically "quasistatic" at large |z|. As in the D = 0 case, the ones with E ≥ 0 expand or contract monotonically, but the latter may now contain a naked singularity. The ones with E < 0 expand from or recollapse to a second singularity, the latter containing a black hole.
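For reference, the self-similar ansatz behind this classification can be written out explicitly. This is a sketch in the standard notation of the self-similarity literature (the names of the metric functions are conventional choices, not taken from the abstract); the dust case is the α → 0 limit of the perfect-fluid family.

```latex
% Self-similar spherically symmetric ansatz; dust corresponds to \alpha = 0.
\[
  z \equiv \frac{r}{t}, \qquad p = \alpha \mu \quad (\alpha = 0 \ \text{for dust}),
\]
\[
  ds^2 = -e^{2\nu(z)}\,dt^2 + e^{2\lambda(z)}\,dr^2 + r^2 S^2(z)\,d\Omega^2 .
\]
% All dimensionless metric functions depend on z alone, so Einstein's equations
% reduce to ordinary differential equations in z.
```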
Exactness of Belief Propagation for Some Graphical Models with Loops ; It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by an iterative Belief Propagation (BP) algorithm convergent to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph the functional may show multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal Maximum-Likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed in [05KW] and [08BSS] in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient Linear Programming (LP) algorithm with a Totally Unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, gBP, finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature gBP outputs the ML solution. Our consideration is based on an equivalence established between the gapless Linear Programming (LP) relaxation of the graphical model in the T → 0 limit and the respective LP version of the Bethe free energy minimization.
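For reference, a standard form of the Bethe free energy functional referred to above, written for a factor-graph model with factors a and variables i (this is the textbook expression, not a result specific to this paper):

```latex
\[
  F_{\mathrm{Bethe}}[b] =
  \sum_{a}\sum_{\sigma_a} b_a(\sigma_a)\bigl[E_a(\sigma_a) + T\ln b_a(\sigma_a)\bigr]
  \; - \; T\sum_{i}(d_i - 1)\sum_{\sigma_i} b_i(\sigma_i)\ln b_i(\sigma_i).
\]
% b_a, b_i: factor and variable beliefs;  E_a: factor energies;
% d_i: degree of variable i.  BP fixed points are stationary points of
% F_Bethe under the marginalization and normalization constraints.
```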
Viscous dissipative Chaplygin gas dominated homogeneous and isotropic cosmological models ; The generalized Chaplygin gas, which interpolates between a high-density relativistic era and a non-relativistic matter phase, is a popular dark energy candidate. We consider a generalization of the Chaplygin gas model by assuming the presence of a bulk viscous type dissipative term in the effective thermodynamic pressure of the gas. The dissipative effects are described by using the truncated Israel-Stewart model, with the bulk viscosity coefficient and the relaxation time being functions of the energy density only. The corresponding cosmological dynamics of the bulk viscous Chaplygin gas dominated universe is considered in detail for a flat homogeneous and isotropic Friedmann-Robertson-Walker geometry. For different values of the model parameters we consider the evolution of the cosmological parameters (scale factor, energy density, Hubble function, deceleration parameter and luminosity distance, respectively), by using both analytical and numerical methods. In the large-time limit the model describes an accelerating universe, with the effective negative pressure induced by the Chaplygin gas and the bulk viscous pressure driving the acceleration. The theoretical predictions for the luminosity distance in our model are compared with observations of Type Ia supernovae. The model fits the recent supernova data well. From the fitting we determine both the equation of state of the Chaplygin gas and the parameters characterizing the bulk viscosity. The evolution of the scalar field associated with the viscous Chaplygin fluid is also considered, and the corresponding potential is obtained. Hence the viscous Chaplygin gas model offers an effective dynamical possibility for replacing the cosmological constant and for explaining the recent acceleration of the universe.
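In standard notation, the ingredients described above can be sketched as follows (the generalized Chaplygin equation of state and the truncated Israel-Stewart transport equation are the usual textbook forms; the specific functional choices for ξ(ρ) and τ(ρ) made in the paper are not reproduced here):

```latex
\[
  p = -\frac{A}{\rho^{\alpha}}, \qquad
  p_{\mathrm{eff}} = p + \Pi, \qquad
  \tau\,\dot{\Pi} + \Pi = -3\,\xi(\rho)\,H .
\]
% \Pi: bulk viscous pressure;  \xi(\rho): bulk viscosity coefficient;
% \tau(\rho): relaxation time (truncated Israel-Stewart theory);  H: Hubble function.
```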
Systemic Risk in a Unifying Framework for Cascading Processes on Networks ; We introduce a general framework for models of cascade and contagion processes on networks, to identify their commonalities and differences. In particular, models of social and financial cascades, as well as the fiber bundle model, the voter model, and models of epidemic spreading, are recovered as special cases. To unify their description, we define the net fragility of a node, which is the difference between its fragility and the threshold that determines its failure. Nodes fail if their net fragility grows above zero, and their failure increases the fragility of neighbouring nodes, thus possibly triggering a cascade. In this framework, we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour. At the microscopic level, we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade, depending on its position in the network and its degree. At the macroscopic level, systemic risk is measured as the final fraction of failed nodes, X*, and for each of the three classes we derive a recursive equation to compute its value. The phase diagram of X* as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three different model classes. We identify which model classes lead to a first-order phase transition in systemic risk, i.e. situations where small changes in the initial conditions may lead to a global failure. Finally, we generalize our framework to encompass stochastic contagion models. This indicates the potential for further generalizations.
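A minimal sketch of the cascade rule described above is given below. The additive load that a failure adds to each neighbour's fragility is the simplest illustrative choice and stands in for the three model classes discussed in the paper; the graph, thresholds and load value are placeholders.

```python
# Toy cascade with "net fragility": node i fails once fragility[i] - threshold[i] > 0,
# and each failure adds a fixed load to the fragility of its neighbours.
import networkx as nx

def cascade(G, fragility, threshold, load=0.3, seed_node=0):
    fragility = dict(fragility)
    failed, frontier = set(), [seed_node]
    fragility[seed_node] = threshold[seed_node] + 1.0        # trigger the cascade
    while frontier:
        i = frontier.pop()
        if i in failed or fragility[i] - threshold[i] <= 0:  # net fragility must be positive
            continue
        failed.add(i)
        for j in G.neighbors(i):
            if j not in failed:
                fragility[j] += load                         # failure raises neighbours' fragility
                if fragility[j] - threshold[j] > 0:
                    frontier.append(j)
    return len(failed) / G.number_of_nodes()                 # systemic risk X*: final failed fraction

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)
X_star = cascade(G, fragility={i: 0.0 for i in G}, threshold={i: 0.5 for i in G})
print(f"X* = {X_star:.2f}")
```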
Complete NLO QCD Corrections for Tree-Level ΔF = 2 FCNC Processes ; Anticipating the important role of tree-level FCNC processes in the indirect search for new physics at distance scales as short as 10^-19-10^-21 m, we present complete NLO QCD corrections to tree-level ΔF = 2 processes mediated by heavy colourless gauge bosons and scalars. Such contributions can be present at the fundamental level when the GIM mechanism is absent, as in numerous Z' models, gauged flavour models with new heavy neutral gauge bosons, and Left-Right symmetric models with heavy neutral scalars. They can also be generated at one loop in models having GIM at the fundamental level and MFV, of which Two-Higgs-Doublet models with and without SUSY are the best known examples. In models containing vectorial heavy fermions that mix with the standard chiral quarks, and in models in which the Z and the SM neutral Higgs H mix with new heavy gauge bosons and scalars, tree-level Z and SM neutral Higgs contributions to ΔF = 2 processes are also possible. In all these extensions new local operators are generated, with Wilson coefficients that are generally much more strongly affected by RG QCD effects than is the case for the SM operators. The new aspect of our work is the calculation of O(α_s) corrections to the matching conditions for the Wilson coefficients of the contributing operators in the NDR-MSbar scheme, which can be used in all models listed above. This allows one to reduce certain unphysical scale and renormalization scheme dependences in the existing NLO calculations. We show explicitly how our results can be combined with the analytic formulae for the P_i^a QCD factors that include both the hadronic matrix elements of the contributing operators and the RG evolution from high-energy to low-energy scales. For masses of heavy gauge bosons and scalars of O(1 TeV), the remaining unphysical scale dependences of the mixing amplitudes are typically reduced from 10-25%, depending on the operator considered, down to 1-2%.
Dark energy cosmology: the equivalent description via different theoretical models and cosmography tests ; We review different dark energy cosmologies. In particular, we present the ΛCDM cosmology, Little Rip and Pseudo-Rip universes, the phantom and quintessence cosmologies with Type I, II, III and IV finite-time future singularities, and non-singular dark energy universes. In the first part, we explain the ΛCDM model and well-established observational tests which constrain the current cosmic acceleration. After that, we investigate the dark fluid universe where a fluid has a quite general equation of state (EoS), including inhomogeneous or imperfect EoS. All the above dark energy cosmologies for different fluids are explicitly realized, and their properties are also explored. It is shown that all the above dark energy universes may mimic the ΛCDM model currently, consistent with the recent observational data. Furthermore, special attention is paid to the equivalence of different dark energy models. We consider single and multiple scalar field theories, tachyon scalar theory and holographic dark energy as models for the current acceleration with the features of quintessence/phantom cosmology, and demonstrate their equivalence to the corresponding fluid descriptions. In the second part, we study another equivalent class of dark energy models which includes F(R) gravity as well as F(R) Hořava-Lifshitz gravity and teleparallel f(T) gravity. The cosmology of such models, representing the ΛCDM-like universe or accelerating expansion with the quintessence/phantom nature, is described. Finally, we approach the problem of testing dark energy and alternative gravity models to general relativity by cosmography. We show that degeneracy among parameters can be removed by accurate data analysis of large data samples, and we also present examples.
Topological model for machining of parts with complex shapes ; Complex shapes are widely used to design products in several industries such as aeronautics, automotive and domestic appliances. The many variations in their curvatures and orientations generate difficulties during their manufacturing, or during the machining of the dies used in moulding, injection and forging. Analysis of several parts highlights two levels of difficulty across three types of shapes: prismatic parts with simple geometrical shapes, aeronautic structure parts composed of several shallow pockets, and forging dies composed of several deep cavities which often contain protrusions. This paper mainly concerns High Speed Machining (HSM) of these dies, which represent the highest complexity level because of the shapes' geometry and their topology. Five-axis HSM is generally required for such complex-shaped parts, but 3-axis machining can be sufficient for dies. Evolutions in HSM CAM software and machine tools have led to an important increase in the time needed for machining preparation. The analysis stages of the CAD model in particular drive this increase in time, which is required for a wise choice of cutting tools and machining strategies. Assistance modules for the identification of machining features of prismatic parts in CAD models are widely implemented in CAM software. In spite of the latest CAM evolutions, such CAM modules remain undeveloped for aeronautical structure parts and forging dies. The development of new CAM modules for the extraction of relevant machining areas, as well as for the definition of the topological relations between these areas, should make it possible for the machining assistant to reduce the machining preparation time. In this paper, a model developed for the description of the topology of complex-shaped parts is presented. It is based on machining areas extracted for the construction of geometrical features starting from the CAD models of the parts. As the topology is described in order to assist the machining assistant during machining process generation, the difficulties associated with the tasks he carries out are analyzed first. The topological model presented afterwards is based on the basic geometrical features extracted. Topological relations, which represent the framework of the model, are defined between the basic geometrical features, which are then gathered into macro-features. The approach used for the identification of these macro-features is also presented in this paper. A detailed application to the construction of the topological model of forging dies is presented in the last part of the paper.
Diversity and noise effects in a model of homeostatic regulation of the sleep-wake cycle ; Recent advances in sleep neurobiology have allowed the development of physiologically based mathematical models of sleep regulation that account for the neuronal dynamics responsible for the regulation of sleep-wake cycles and allow detailed examination of the underlying mechanisms. Neuronal systems in general, and those involved in sleep regulation in particular, are noisy and heterogeneous by their nature. It has been shown in various systems that certain levels of noise and diversity can significantly improve signal encoding. However, these phenomena, especially the effects of diversity, are rarely considered in models of sleep regulation. The present paper is focused on a neuron-based, physiologically motivated model of sleep-wake cycles that proposes a novel mechanism of the homeostatic regulation of sleep based on the dynamics of the wake-promoting neuropeptide orexin. Here this model is generalized by the introduction of intrinsic diversity and noise in the orexin-producing neurons in order to study the effect of their presence on the sleep-wake cycle. A quantitative measure of the quality of a sleep-wake cycle is introduced and used to systematically study the generalized model for different levels of noise and diversity. The model is shown to exhibit a clear diversity-induced resonance: that is, the best wake-sleep cycle turns out to correspond to an intermediate level of diversity at the synapses of the orexin-producing neurons. On the other hand, only mild evidence of stochastic resonance is found when the level of noise is varied. These results show that disorder, especially in the form of quenched diversity, can be a key element for an efficient or optimal functioning of the homeostatic regulation of the sleep-wake cycle. Furthermore, this study provides an example of the constructive role of diversity in a neuronal system that can be extended beyond the system studied here.
Effects of shocks in stellar atmosphere models on the emission line spectrum of surrounding H II regions ; Emission line studies of H II regions in galaxies require tools for the inversion of line ratios into the desired physical properties. These tools generally come in the form of diagnostic ratios/diagrams that are based on grids of photoionisation models. An important input to the photoionisation models is the stellar atmosphere spectrum of the ionising sources. The current omission of shocks in the calculation of the former set of models could threaten the accuracy of the physical interpretation of emission line ratios from H II regions. Current stellar atmosphere models that are crucial inputs to the grid of photoionisation models used to generate nebular emission line diagnostic diagrams might produce significant biases due to the omission of shocks. We investigate whether a new generation of photoionisation model grids, taking shocks into account, is required to compensate for these biases. We make use of the WM-Basic stellar atmosphere code, which can account for the extra energetic emission in the stellar spectral energy distribution produced by shocks, in conjunction with the photoionisation code MOCASSIN, to determine whether shocks produce significant biases in the determination of the physical parameters of the interstellar medium and/or of the ionising stellar parameters. We conclude that these effects are only important for stellar sources with effective temperatures lower than 30 kK, and in this case they yield artificially high stellar temperatures, electron temperatures and nebular ionisation parameters. The magnitude of the effect is also obviously dependent on the strength of the shock and is likely to be unimportant for the majority of stellar sources. Nevertheless, we find our 20 kK and 30 kK shock models to strongly enhance the He II 4686 nebular emission line. This result is, however, not strong enough to explain previously observed He II 4686 line emission in the spectra of H II galaxies.
Model selection and hypothesis testing for large-scale network models with overlapping groups ; The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and the understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models, including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models, and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
GPTIPS 2: an open-source software platform for symbolic data mining ; GPTIPS is a free, open-source, MATLAB-based software platform for symbolic data mining (SDM). It uses a multigene variant of the biologically inspired machine learning method of genetic programming (MGGP) as the engine that drives the automatic model discovery process. Symbolic data mining is the process of extracting hidden, meaningful relationships from data in the form of symbolic equations. In contrast to other data-mining methods, the structural transparency of the generated predictive equations can give new insights into the physical systems or processes that generated the data. Furthermore, this transparency makes the models very easy to deploy outside of MATLAB. The rationale behind GPTIPS is to reduce the technical barriers to using, understanding, visualising and deploying GP-based symbolic models of data, whilst at the same time remaining highly customisable and delivering robust numerical performance for power users. In this chapter, notable new features of the latest version of the software are discussed with these aims in mind. Additionally, a simplified variant of the MGGP high-level gene crossover mechanism is proposed. It is demonstrated that the new functionality of GPTIPS 2 (a) facilitates the discovery of compact symbolic relationships from data using multiple approaches, e.g. using novel gene-centric visualisation analysis to mitigate horizontal bloat and reduce complexity in multigene symbolic regression models, (b) provides numerous methods for visualising the properties of symbolic models, (c) emphasises the generation of graphically navigable libraries of models that are optimal in terms of the Pareto trade-off surface of model performance and complexity, and (d) expedites real-world applications by the simple, rapid and robust deployment of symbolic models outside the software environment in which they were developed.
Detectability of bigravity with graviton oscillations using gravitational wave observations ; The gravitational waveforms in the ghost-free bigravity theory exhibit deviations from those in general relativity. The main difference is caused by graviton oscillations in the bigravity theory. We investigate the prospects for detecting the corrections to the gravitational waveforms from coalescing compact binaries due to graviton oscillations, and for constraining the bigravity parameters with gravitational wave observations. We consider the De Felice-Nakamura-Tanaka subset of the bigravity model, and the phenomenological model in which the bigravity parameters are treated as independent variables. In both models, the bigravity waveform shows strong amplitude modulation, and there can be a characteristic frequency of the largest peak of the amplitude, which depends on the bigravity parameters. We show that there is a detectable region of the bigravity parameters for the advanced ground-based laser interferometers, such as Advanced LIGO, Advanced Virgo, and KAGRA. This region corresponds to an effective graviton mass of μ ≥ 10^-17 cm^-1 for c̃ - 1 ≥ 10^-19 in the phenomenological model, and μ ≥ 10^-16.5 cm^-1 for κξ_c^2 ≥ 10^-0.5 in the De Felice-Nakamura-Tanaka subset of the bigravity model, where c̃ is the propagation speed of the massive graviton and κξ_c^2 corresponds to the corrections to the gravitational constant of general relativity. These regions are not excluded by existing solar system tests. We also show that, in the case of 1.4-1.4 M_sun binaries at a distance of 200 Mpc, log μ^2 is determined with an accuracy of O(0.1) at the 1σ level for a fiducial model with μ^2 = 10^-33 cm^-2 in the case of the phenomenological model.
The Generic Critical Behaviour for 2D Polymer Collapse ; The nature of the theta point for a polymer in two dimensions has long been debated, with a variety of candidates put forward for the critical exponents. These include the ones derived by Duplantier and Saleur (DS) for an exactly solvable model. We use a representation of the problem via the CP^{N-1} sigma model in the limit N → 1 to determine the stability of this critical point. First we prove that the DS critical exponents are robust, so long as the polymer does not cross itself: they can arise in a generic lattice model, and do not require fine-tuning. This resolves a long-standing theoretical question. However, there is an apparent paradox: two different lattice models, apparently both in the DS universality class, show different numbers of relevant perturbations, apparently leading to contradictory conclusions about the stability of the DS exponents. We explain this in terms of subtle differences between the two models, one of which is fine-tuned and not strictly in the DS universality class. Next, we allow the polymer to cross itself, as appropriate e.g. to the quasi-2D case. This introduces an additional independent relevant perturbation, so we do not expect the DS exponents to apply. The exponents in the case with crossings will be those of the generic tricritical O(n) model at n = 0, and different from the case without crossings. We also discuss interesting features of the operator content of the CP^{N-1} model. Simple geometrical arguments show that two operators in this field theory, with very different symmetry properties, have the same scaling dimension for any value of N (equivalently, any value of the loop fugacity). We also argue that for any value of N the CP^{N-1} model has a marginal parity-odd operator which is related to the loops' winding angle.
Theoretical Accuracy in Cosmological Growth Estimation ; We elucidate the importance of a consistent treatment of gravity-model-specific nonlinearities when estimating the growth of cosmological structures from redshift space distortions (RSD). Within the context of standard perturbation theory (SPT), we compare the predictions of two theoretical templates with redshift space data from COLA (COmoving Lagrangian Acceleration) simulations in the normal branch of DGP gravity (nDGP) and General Relativity (GR). Using COLA for these comparisons is validated using a suite of full N-body simulations for the same theories. The two theoretical templates correspond to the standard general relativistic perturbation equations and those same equations modelled within nDGP. Gravitational clustering nonlinear effects are accounted for by modelling the power spectrum up to one-loop order, and redshift space clustering anisotropy is modelled using the Taruya, Nishimichi and Saito (TNS) RSD model. Using this approach, we attempt to recover the simulation's fiducial logarithmic growth parameter f. By assigning the simulation data errors representing an idealised survey with a volume of 10 (Gpc/h)^3, we find the GR template is unable to recover the fiducial f to within 1σ at z = 1 when we match the data up to k_max = 0.195 h/Mpc. On the other hand, the DGP template recovers the fiducial value within 1σ. Further, we conduct the same analysis for sets of mock data generated for generalised models of modified gravity using SPT, where again we analyse the GR template's ability to recover the fiducial value. We find that for models with enhanced gravitational nonlinearity, the theoretical bias of the GR template becomes significant for stage IV surveys. Thus, we show that for future large-data-volume galaxy surveys, the self-consistent modelling of non-GR gravity scenarios will be crucial in constraining theory parameters.
Subgrid-scale modeling for microbubble generation amid colliding water surfaces ; The generation of microbubbles upon the collision and interaction of liquid bodies in a gaseous environment is a ubiquitous process in two-phase flows, including large-scale phenomena like ship wakes, breaking waves and rain showers. These collision and interaction events involve the relative approach of pairs of liquid-gas interfaces. As these interfaces approach, the smallest length scales of the system are dynamically reduced. This evolving disparity in length scales is numerically challenging to resolve without the employment of subgrid-scale (SGS) impact and breakup models. In this study, a physics-based impact and breakup model for the generation of these microbubbles is developed and implemented. The objectives of this study are to develop a computational algorithm that identifies interface collision events that contribute to the formation of microbubbles, to formulate a physics-based breakup model that predicts the distribution of microbubble sizes using the characteristics of the originating gas film, and to integrate these modules into a two-phase flow solver that accurately captures the effects of bubbles of all sizes. In these proceedings, an SGS model suitable for the aforementioned problems is proposed, and the steps involved in implementing the proposed SGS model in a macro-scale flow solver are outlined. Two aspects of the development of this SGS model are then discussed in detail. First, the formulation and implementation of the first step of the SGS model, the collision detection algorithm, is detailed. Second, preliminary findings of a numerical investigation intended to shed light on breakup processes in turbulent two-phase flows are presented.
Nonlocal gravity. Conceptual aspects and cosmological predictions ; Even if the fundamental action of gravity is local, the corresponding quantum effective action, which includes the effect of quantum fluctuations, is a nonlocal object. These nonlocalities are well understood in the ultraviolet regime but much less so in the infrared, where they could in principle give rise to important cosmological effects. Here we systematize and extend previous work of our group, in which it is assumed that a mass scale Λ is dynamically generated in the infrared, giving rise to nonlocal terms in the quantum effective action of gravity. We give a detailed discussion of conceptual aspects related to nonlocal gravity and of the cosmological consequences of these models. The requirement of providing a viable cosmological evolution severely restricts the form of the nonlocal terms, and selects a model (the so-called RR model) that corresponds to a dynamical mass generation for the conformal mode. For such a model: (1) there is a FRW background evolution, where the nonlocal term acts as an effective dark energy with a phantom equation of state, providing accelerated expansion without a cosmological constant. (2) Cosmological perturbations are well behaved. (3) Implementing the model in a Boltzmann code and comparing with observations, we find that the RR model fits the CMB, BAO, SNe, structure formation data and local H0 measurements at a level statistically equivalent to ΛCDM. (4) Bayesian parameter estimation shows that the value of H0 obtained in the RR model is higher than in ΛCDM, reducing to 2.0σ the tension with the value from local measurements. (5) The RR model provides a prediction for the sum of neutrino masses that falls within the limits set by oscillation and terrestrial experiments. (6) Gravitational waves propagate at the speed of light, complying with the limit from GW170817/GRB 170817A.
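For context, the RR model referred to above is usually defined by a nonlocal term of the following form (a sketch of the standard expression from the nonlocal-gravity literature, quoted here for orientation; the dynamically generated mass scale is denoted m and the normalization of the coefficient is the conventional one):

```latex
\[
  \Gamma_{\mathrm{RR}} = \frac{m_{\mathrm{Pl}}^2}{2}\int d^4x \,\sqrt{-g}\,
  \Bigl[ R - \frac{m^2}{6}\, R\,\frac{1}{\Box^{2}}\, R \Bigr].
\]
% \Box: covariant d'Alembertian.  The R \Box^{-2} R term corresponds to giving a
% mass to the conformal mode and acts as an effective dark-energy component.
```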
On a Generic Security Game Model ; To protect systems exposed to the Internet against attacks, a security system with the capability to engage with the attacker is needed. There have been attempts to model the engagement/interactions between users, both benign and malicious, and network administrators as games. Building on such works, we present a game model which is generic enough to capture various modes of such interactions. The model facilitates stochastic games with imperfect information. The information is imperfect due to erroneous sensors leading to incorrect perception of the current state by the players. To model this error in perception, distributed over multiple states, we use Euclidean distances between the outputs of the sensors. We build a 5-state game to represent the interaction of the administrator with the user. The states correspond to (1) the user being outside the system, in the Internet, and, after logging in to the system, (2) having low privileges, (3) having high privileges, (4) successfully attacking, and (5) getting trapped in a honeypot by the administrator. Each state has its own action set. We present the game with a distinct perceived action set corresponding to each distinct information set of these states. A numerical simulation of an example game is presented to show the evaluation of rewards to the players and the preferred strategies. We also present the conditions for formulating the strategies when dealing with more than one attacker and making collaborations.
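The perception-error mechanism can be illustrated with a toy sketch: the defender's belief over the five states is derived from Euclidean distances between the current sensor output and reference outputs associated with each state. The reference signatures and the inverse-distance weighting below are illustrative assumptions, not the paper's exact construction.

```python
# Toy perception model: closer sensor signatures -> higher perceived probability of that state.
import numpy as np

STATES = ["external", "low_privilege", "high_privilege", "attack_success", "honeypot"]

# Hypothetical reference sensor outputs (two features per state), purely illustrative.
reference = np.array([[0.1, 0.0], [0.4, 0.2], [0.7, 0.5], [0.9, 0.9], [0.5, 0.8]])

def perceived_state_distribution(sensor_output, eps=1e-6):
    d = np.linalg.norm(reference - sensor_output, axis=1)   # Euclidean distances to each state
    w = 1.0 / (d + eps)                                     # nearer states are weighted higher
    return w / w.sum()

belief = perceived_state_distribution(np.array([0.65, 0.45]))
print(dict(zip(STATES, belief.round(3))))
```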
On the properties of the asymptotic incompatibility measure in multiparameter quantum estimation ; We address the use of asymptotic incompatibility (AI) to assess the quantumness of a multiparameter quantum statistical model. AI is a recently introduced measure which quantifies the difference between the Holevo and the SLD scalar bounds, and it can be evaluated using only the symmetric logarithmic derivative (SLD) operators of the model. At first, we evaluate analytically the AI of the most general quantum statistical models involving two-level (qubit) and single-mode Gaussian continuous-variable quantum systems, and prove that AI is a simple monotonic function of the state purity. Then, we numerically investigate the same problem for qudits (d-dimensional quantum systems, with 2 < d ≤ 4), showing that, while AI is not in general a function of purity, we have enough numerical evidence to conclude that the maximum amount of AI is attainable only for quantum statistical models characterized by a purity larger than μ_min = 1/(d-1). In addition, by parametrizing qudit states as thermal Gibbs states, numerical results suggest that, once the spectrum of the Hamiltonian is fixed, the AI measure is in one-to-one correspondence with the fictitious temperature parameter β characterizing the family of density operators. Finally, by studying in detail the definition and properties of the AI measure, we find that (i) given a quantum statistical model, one can readily identify the maximum number of asymptotically compatible parameters; (ii) the AI of a quantum statistical model bounds from above the AI of any submodel that can be defined by fixing one or more of the original unknown parameters (or functions thereof), leading to possibly useful bounds on the AI of models involving noisy quantum dynamics.
Endowing Λ with a dynamic nature: constraints in a spatially curved Universe ; In this study, we consider three dark energy models in which Λ is not constant, but has a dynamic nature that depends on the Hubble parameter H and/or its time derivative Ḣ. We analyze the generalized running vacuum model, for which Λ(H) = A + B H^2 + C Ḣ, along with the two models obtained by setting B or C equal to zero. A null value for C yields the classical running vacuum model (RVM), while B = 0 corresponds to what we term the generalized running vacuum subcase, or GRVS. Our main aim is to investigate whether these models can accommodate non-zero spatial curvature. To this end, we carry out a Markov Chain Monte Carlo analysis using data for the observables associated with Type Ia supernovae, cosmic chronometers, the cosmic microwave background and baryon acoustic oscillations, as well as two values for the Hubble constant. Then we include data relating to the growth of large-scale structure (LSS) and repeat the procedure. Our results indicate that taking LSS observations into account helps to tighten constraints and determine a definite sign for the model parameters. In the case of the RVM and GRVS, the addition of growth data results in dynamical vacuum energy being preferred to a cosmological constant at a little over 1σ. This happens in both the flat and non-flat scenarios (there are only a few exceptions), but comes at the cost of an extra parameter, which can degrade the performance of the models as assessed by model selection criteria. Of special relevance is the fact that the inclusion of LSS data appears to increase compatibility with a flat geometry. It also brings the constraints on the Hubble constant closer to the range of values established by Planck.
A Bayesian Model for Bivariate Causal Inference ; We address the problem of two-variable causal inference without intervention. This task is to infer an existing causal relation between two random variables, i.e. X → Y or Y → X, from purely observational data. As the option to modify a potential cause is not given in many situations, only structural properties of the data can be used to solve this ill-posed problem. We briefly review a number of state-of-the-art methods for this, including very recent ones. A novel inference method is introduced, Bayesian Causal Inference (BCI), which assumes a generative Bayesian hierarchical model to pursue the strategy of Bayesian model selection. In the adopted model the distribution of the cause variable is given by a Poisson log-normal distribution, which allows us to explicitly regard the discrete nature of datasets, correlations in the parameter spaces, as well as the variance of probability densities on logarithmic scales. We assume Fourier-diagonal field covariance operators. The model itself is restricted to use cases where a direct causal relation X → Y has to be decided against a relation Y → X, and therefore we compare it to other methods for this exact problem setting. The assumed generative model provides synthetic causal data for benchmarking our model against existing state-of-the-art models, namely LiNGAM, ANM-HSIC, ANM-MML, IGCI and CGNN. We explore how well the above methods perform in the case of high noise settings, strongly discretized data and very sparse data. BCI performs generally reliably with synthetic data as well as with the real-world TCEP benchmark set, with an accuracy comparable to state-of-the-art algorithms. We discuss directions for the future development of BCI.
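To make the generative setup concrete, the following toy forward simulation samples data in the spirit of the cause model described above: a discrete cause drawn from a Poisson log-normal distribution and an effect given by a smooth function of the cause plus noise. The specific mechanism f, the noise level and the log-normal parameters are illustrative placeholders; the paper's full hierarchical model additionally includes field covariance operators and is not reproduced here.

```python
# Toy forward simulation of a cause-effect pair with a Poisson log-normal cause.
import numpy as np

rng = np.random.default_rng(42)

def sample_cause_effect(n=1000, noise=0.1):
    log_rate = rng.normal(loc=1.0, scale=0.5, size=n)   # log-normal Poisson intensity
    x = rng.poisson(np.exp(log_rate))                   # discrete cause X
    y = np.tanh(0.5 * x) + noise * rng.normal(size=n)   # effect Y = f(X) + noise
    return x, y

X, Y = sample_cause_effect()
print(X[:10], Y[:10].round(2))
```

Benchmarking a direction-inference method on such data amounts to checking how often it returns X → Y rather than Y → X as the noise level, discretization and sample size are varied.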
The effect of a dynamo-generated field on the Parker wind ; Stellar winds are an integral part of the underlying dynamo, the motor of stellar activity. The wind controls the star's angular momentum loss, which depends on the magnetic field geometry, which in turn varies significantly in time and latitude. Here we study basic properties of a self-consistent model that includes simple representations of both the global stellar dynamo in a spherical shell and the exterior in which the wind accelerates and becomes supersonic. We numerically solve an axisymmetric mean-field model for the induction, momentum, and continuity equations using an isothermal equation of state. The model allows for the simultaneous generation of a mean magnetic field and the development of a Parker wind. The resulting flow is transonic at the critical point, which we arrange to be between the inner and outer radii of the model. The boundary conditions are assumed to be such that the magnetic field is antisymmetric about the equator, i.e., dipolar. At the solar rotation rate, the dynamo is oscillatory and of α² type. In most of the domain, the magnetic field corresponds to that of a split monopole. The magnetic energy flux is largest between the stellar surface and the critical point. The angular momentum flux is highly variable in time and can reach negative values, especially at midlatitudes. At rapid rotation of up to 50 times the solar value, most of the magnetic field is lost along the axis within the inner tangential cylinder of the model. The model reveals unexpected features that are not generally anticipated from models designed to reproduce the solar wind: highly variable angular momentum fluxes even from just an α² dynamo in the star. A major caveat of our isothermal models with a magnetic field produced by a dynamo is the difficulty of reaching small enough plasma betas without the dynamo itself becoming unrealistically strong inside the star.
Object Allocation Over a Network of Objects: Mobile Agents with Strict Preferences ; In recent work, Gourves, Lesca, and Wilczynski propose a variant of the classic housing markets model where the matching between agents and objects evolves through Pareto-improving swaps between pairs of adjacent agents in a social network. To explore the swap dynamics of their model, they pose several basic questions concerning the set of reachable matchings. In their work and other follow-up works, these questions have been studied for various classes of graphs: stars, paths, generalized stars (i.e., trees where at most one vertex has degree greater than two), trees, and cliques. For generalized stars and trees, it remains open whether a Pareto-efficient reachable matching can be found in polynomial time. In this paper, we pursue the same set of questions under a natural variant of their model. In our model, the social network is replaced by a network of objects, and a swap is allowed to take place between two agents if it is Pareto-improving and the associated objects are adjacent in the network. In those cases where the question of polynomial-time solvability versus NP-hardness has been resolved for the social network model, we are able to show that the same result holds for the network-of-objects model. In addition, for our model, we present a polynomial-time algorithm for computing a Pareto-efficient reachable matching in generalized star networks. Moreover, the object reachability algorithm that we present for path networks is significantly faster than the known polynomial-time algorithms for the same question in the social network model.
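The basic swap rule of the network-of-objects variant can be sketched as follows. Since preferences are strict, a Pareto-improving swap between two agents is one in which both strictly prefer the object they receive; it is permitted only if the two objects are adjacent in the object network. The data structures below are illustrative.

```python
# Minimal sketch of a single swap step in the network-of-objects model.
def prefers(prefs, agent, obj_new, obj_old):
    order = prefs[agent]                      # objects listed from most to least preferred
    return order.index(obj_new) < order.index(obj_old)

def try_swap(matching, object_edges, prefs, a1, a2):
    """matching: agent -> object.  Swap a1's and a2's objects if the swap is allowed."""
    o1, o2 = matching[a1], matching[a2]
    adjacent = (o1, o2) in object_edges or (o2, o1) in object_edges
    if adjacent and prefers(prefs, a1, o2, o1) and prefers(prefs, a2, o1, o2):
        matching[a1], matching[a2] = o2, o1
        return True
    return False

# Tiny example on a path network of objects x - y - z.
edges = {("x", "y"), ("y", "z")}
prefs = {"A": ["y", "x", "z"], "B": ["x", "y", "z"]}
matching = {"A": "x", "B": "y"}
print(try_swap(matching, edges, prefs, "A", "B"), matching)   # True {'A': 'y', 'B': 'x'}
```

Reachability questions then ask which matchings can be obtained by repeatedly applying such swaps from a given initial matching.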
Efficient Modelling of Trivializing Maps for Lattice φ^4 Theory Using Normalizing Flows: A First Look at Scalability ; General-purpose Markov Chain Monte Carlo sampling algorithms suffer from a dramatic reduction in efficiency as the system being studied is driven towards a critical point. Recently, a series of seminal studies suggested that normalizing flows, a class of deep generative models, can form the basis of a sampling strategy that does not suffer from this 'critical slowing down'. The central idea is to use machine learning techniques to build approximate trivializing maps, i.e. field transformations that map the theory of interest into a 'simpler' theory in which the degrees of freedom decouple, and where the statistical weight in the path integral is given by a distribution from which sampling is easy. No separate process is required to generate training data for such models, and convergence to the desired distribution is guaranteed through a reweighting procedure such as a Metropolis test. In a proof-of-principle demonstration on two-dimensional φ^4 theory, Albergo et al. (arXiv:1904.12072) modelled the trivializing map as a sequence of pointwise affine transformations. We pick up this thread, with the aim of quantifying how well we can expect this approach to scale as we increase the number of degrees of freedom in the system. We make several modifications to the original design that allow our models to learn more efficient representations of trivializing maps using much smaller neural networks, which leads to a large reduction in the computational cost required to train models of equivalent quality. After making these changes, we find that sampling efficiency is almost entirely dictated by how extensively a model has been trained, while being unresponsive to further alterations that increase model flexibility. However, as we move towards the continuum limit the training costs scale extremely quickly, which urgently requires further work to fully understand and mitigate.
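A minimal sketch of the pointwise affine coupling transformation that serves as the building block of such flows is given below. The checkerboard-style masking, network size and layer structure are illustrative assumptions; the models in the paper differ in their details.

```python
# Minimal affine coupling layer: half of the lattice sites (selected by a mask) are
# frozen and used to compute a scale-and-shift for the other half. The log-det of
# the Jacobian is accumulated for the reweighting / Metropolis step.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, lattice_sites, mask, hidden=64):
        super().__init__()
        self.mask = mask                                    # 1 = frozen site, 0 = transformed site
        self.net = nn.Sequential(
            nn.Linear(lattice_sites, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * lattice_sites),           # outputs (s, t) for every site
        )

    def forward(self, phi):
        frozen = phi * self.mask
        s, t = self.net(frozen).chunk(2, dim=-1)
        s, t = s * (1 - self.mask), t * (1 - self.mask)     # act only on unfrozen sites
        phi_out = frozen + (1 - self.mask) * (phi * torch.exp(s) + t)
        log_det_jacobian = s.sum(dim=-1)
        return phi_out, log_det_jacobian

L = 8 * 8                                                   # 8x8 lattice, flattened
layer = AffineCoupling(L, (torch.arange(L) % 2).float())    # simple alternating mask
phi, log_det = layer(torch.randn(16, L))                    # batch of 16 field configurations
```

A full flow stacks many such layers with alternating masks and is trained to bring the pushed-forward distribution close to the φ^4 path-integral distribution.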
Harnessing Unlabeled Data to Improve Generalization of Biometric Gender and Age Classifiers ; With significant advances in deep learning, many computer vision applications have reached an inflection point. However, these deep learning models need a large amount of labeled data for model training and optimum parameter estimation. Limited labeled data for model training results in overfitting and impacts their generalization performance. However, the collection and annotation of a large amount of data is a very time-consuming and expensive operation. Further, due to privacy and security concerns, a large amount of labeled data cannot be collected for certain applications, such as those involving the medical field. Self-training, co-training, and self-ensemble methods are three types of semi-supervised learning methods that can be used to exploit unlabeled data. In this paper, we propose a self-ensemble based deep learning model that, along with limited labeled data, harnesses unlabeled data to improve generalization performance. We evaluated the proposed self-ensemble based deep learning model for soft-biometric gender and age classification. Experimental evaluation on the CelebA and VISOB datasets suggests gender classification accuracy of 94.46% and 81.00%, respectively, using only 1000 labeled samples, with the remaining 199k samples used as unlabeled samples for the CelebA dataset and, similarly, 1000 labeled samples with the remaining 107k samples used as unlabeled samples for the VISOB dataset. Comparative evaluation suggests that there is a 5.74% and 8.47% improvement in the accuracy of the self-ensemble model when compared with the supervised model trained on the entire CelebA and VISOB datasets, respectively. We also evaluated the proposed learning method for age-group prediction on the Adience dataset, and it outperformed the baseline supervised deep-learning model with a better exact accuracy of 55.55 ± 4.28, which is 3.92% more than the baseline.
Learnability of the output distributions of local quantum circuits ; There is currently a large interest in understanding the potential advantages quantum devices can offer for probabilistic modelling. In this work we investigate, within two different oracle models, the probably approximately correct (PAC) learnability of quantum circuit Born machines, i.e., the output distributions of local quantum circuits. We first show a negative result, namely, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable in the statistical query model, i.e., when given query access to empirical expectation values of bounded functions over the sample space. This immediately implies the hardness, for both quantum and classical algorithms, of learning from statistical queries the output distributions of local quantum circuits using any gate set which includes the Clifford group. As many practical generative modelling algorithms use statistical queries, including those for training quantum circuit Born machines, our result is broadly applicable and strongly limits the possibility of a meaningful quantum advantage for learning the output distributions of local quantum circuits. As a positive result, we show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable by a classical learner. Our results are equally applicable to the problems of learning an algorithm for generating samples from the target distribution (generative modelling) and learning an algorithm for evaluating its probabilities (density modelling). They provide the first rigorous insights into the learnability of output distributions of local quantum circuits from the probabilistic modelling perspective.
Multitask Prompted Training Enables Zero-Shot Task Generalization ; Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language models' pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language task into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All trained models are available at https://github.com/bigscience-workshop/t-zero and all prompts are available at https://github.com/bigscience-workshop/promptsource.
Effects of Mixed Distribution Statistical Flood Frequency Models on Dam Safety Assessments: A Case Study of the Pueblo Dam, USA ; Statistical flood frequency analysis coupled with hydrograph scaling is commonly used to generate design floods for dam safety assessment. The safety assessments can be highly sensitive to the choice of the statistical flood frequency model. Standard dam safety assessments are typically based on a single-distribution model of flood frequency, often the Log Pearson Type III or Generalized Extreme Value distributions. Floods, however, may result from multiple physical processes, such as rain on snow, snowmelt or rainstorms. This can result in a mixed distribution of annual peak flows, according to the cause of each flood. Engineering design choices based on a single-distribution statistical model are vulnerable to the effects of this potential structural model error. To explore the practicality and potential value of implementing mixed-distribution statistical models in engineering design, we compare the goodness of fit of several single- and mixed-distribution peak flow models, as well as the contingent dam safety assessment, at Pueblo, Colorado. Summer snowmelt and intense summer rainstorms are both key drivers of annual peak flow at Pueblo. We analyze the potential implications for the annual probability of overtopping-induced failure of the Pueblo Dam as a didactic example. We address the temporal and physical cause separation problems by building on previous work with mixed distributions. We find that a mixed Generalized Extreme Value distribution model best fits the peak flows observed in the gaged record, historical floods, and paleofloods at Pueblo. Finally, we show that accounting for mixed distributions in the safety assessment at Pueblo Dam increases the assessed risk of overtopping.
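A sketch of the cause-separation idea behind the mixed-distribution model: fit a GEV to each flood population separately (e.g., snowmelt-driven vs. rainstorm-driven annual peaks) and combine the fitted distributions with a mixing weight given by the relative frequency of each cause. The arrays of classified peaks are placeholders, and this weighted-mixture form is one common way of writing a mixed GEV, not necessarily the exact formulation used in the paper.

```python
# Sketch of a mixed-population flood frequency model using scipy's GEV implementation.
import numpy as np
from scipy.stats import genextreme

def fit_mixed_gev(snowmelt_peaks, rainstorm_peaks):
    p_snow = len(snowmelt_peaks) / (len(snowmelt_peaks) + len(rainstorm_peaks))
    gev_snow = genextreme(*genextreme.fit(snowmelt_peaks))   # (shape, loc, scale) per population
    gev_rain = genextreme(*genextreme.fit(rainstorm_peaks))
    def cdf(q):                                              # non-exceedance probability of flow q
        return p_snow * gev_snow.cdf(q) + (1 - p_snow) * gev_rain.cdf(q)
    return cdf

# Annual exceedance probability of a candidate design flood (placeholder inputs):
# aep = 1 - fit_mixed_gev(snowmelt_peaks, rainstorm_peaks)(design_flow)
```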
Few-shot Learning with Multilingual Language Models ; Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model, with 7.5 billion parameters, sets a new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. Finally, we evaluate our models on social value tasks such as hate speech detection in five languages and find that they have limitations similar to comparably sized GPT-3 models.
Observational constraints on nonlinear matter extensions of general relativity ; We present a phenomenological analysis of current observational constraints on classes of FLRW cosmological models in which the matter side of Einstein's equations includes, in addition to the canonical term, a term proportional to some function of the energymomentum tensor (T^2 = T_alphabeta T^alphabeta = rho^2 + 3p^2), or of its trace (T = rho - 3p). Qualitatively, one may think of these models as extensions of general relativity with a nonlinear matter Lagrangian. As such they are somewhat different from the usual dynamical dark energy or modified gravity models: in the former class of models one adds further dynamical degrees of freedom to the Lagrangian (often in the form of scalar fields), while in the latter the gravitational part of the Lagrangian is changed. We study both of these models under two different scenarios: (1) as phenomenological twoparameter or threeparameter extensions of the standard LambdaCDM, in which case the model still has a cosmological constant but the nonlinear matter Lagrangian leads to additional terms in Einstein's equations, which cosmological observations tightly constrain, and (2) as alternatives to LambdaCDM, where there is no cosmological constant, and the nonlinear matter term would have to provide the acceleration, which would be somewhat closer in spirit to the usual modified gravity models. A comparative analysis of the observational constraints obtained in the various cases provides some insight on the level of robustness of the Lambda model and on the parameter space still available for phenomenological alternatives.
Multipolar gravitational waveforms for spinning binary black holes and their impact on source characterization ; In the last five years, gravitationalwave astronomy has gone from a purely theoretical field into a thriving experimental science. Many gravitationalwave signals, emitted by stellarmass binary black holes and binary neutron stars, have been detected, and many more are expected in the future. The observation of the gravitationalwave signals from these systems, and the characterization of their sources, rely heavily on precise models for the emitted gravitational waveforms. In this thesis, I present an updated version of the waveform models for spinning binary black holes within the effectiveonebody formalism. The novelty of the waveform models presented in this work is the inclusion of beyond-quadrupolar terms in the waveforms emitted by spinning binary black holes. I first construct the model in the simplified case of black holes with spins aligned with the orbital angular momentum of the binary, then I extend it to the case of generic spin orientations. The measurement of the source properties of a binary system emitting gravitational waves requires computing O(10^7 - 10^9) different waveforms. Since the waveform models mentioned before can require O(1-10) s to generate a single waveform, they can be difficult to use in dataanalysis studies. To overcome this obstacle, I use the reducedordermodeling technique to develop a faster version of the waveform model for black holes with spins aligned to the orbital angular momentum of the binary. The waveform models developed in this thesis have been used by the LIGO and Virgo collaborations for the inference of the source parameters of the gravitationalwave signals detected during the second and third observing runs (O2 and O3) of the LIGO and Virgo detectors. Here, I present a study on the source properties of the signals GW170729 and GW190412, for which I have been directly involved in the analysis.
Using DeepSpeed and Megatron to Train MegatronTuring NLG 530B, A LargeScale Generative Language Model ; Pretrained generalpurpose language models can achieve stateoftheart accuracies in various natural language processing domains by adapting to downstream tasks via zeroshot, fewshot and finetuning techniques. Because of their success, the size of these models has increased rapidly, requiring highperformance hardware, software, and algorithmic techniques to enable training such large models. As the result of a joint effort between Microsoft and NVIDIA, we present details on the training of the largest monolithic transformer based language model, MegatronTuring NLG 530B MTNLG, with 530 billion parameters. In this paper, we first focus on the infrastructure as well as the 3D parallelism methodology used to train this model using DeepSpeed and Megatron. Next, we detail the training process, the design of our training corpus, and our data curation techniques, which we believe is a key ingredient to the success of the model. Finally, we discuss various evaluation results, as well as other interesting observations and new properties exhibited by MTNLG. We demonstrate that MTNLG achieves superior zero, one, and fewshot learning accuracies on several NLP benchmarks and establishes new stateoftheart results. We believe that our contributions will help further the development of largescale training infrastructures, largescale language models, and natural language generations.
Neural Models for OutputSpace Invariance in Combinatorial Problems ; Recently many neural models have been proposed to solve combinatorial puzzles by implicitly learning underlying constraints using their solved instances, such as sudoku or graph coloring GCP. One drawback of the proposed architectures, which are often based on Graph Neural Networks GNN, is that they cannot generalize across the size of the output space from which variables are assigned a value, for example, set of colors in a GCP, or boardsize in sudoku. We call the output space for the variables as 'valueset'. While many works have demonstrated generalization of GNNs across graph size, there has been no study on how to design a GNN for achieving valueset invariance for problems that come from the same domain. For example, learning to solve 16 x 16 sudoku after being trained on only 9 x 9 sudokus. In this work, we propose novel methods to extend GNN based architectures to achieve valueset invariance. Specifically, our model builds on recently proposed Recurrent Relational Networks. Our first approach exploits the graphsize invariance of GNNs by converting a multiclass node classification problem into a binary node classification problem. Our second approach works directly with multiple classes by adding multiple nodes corresponding to the values in the valueset, and then connecting variable nodes to value nodes depending on the problem initialization. Our experimental evaluation on three different combinatorial problems demonstrates that both our models perform well on our novel problem, compared to a generic neural reasoner. Between two of our models, we observe an inherent tradeoff while the binarized model gives better performance when trained on smaller valuesets, multivalued model is much more memory efficient, resulting in improved performance when trained on larger valuesets, where binarized model fails to train.
New results and open questions for SIRPH epidemic models with linear birth rate, loss of immunity, vaccination, and disease and vaccination fatalities ; Our paper presents three new classes of models (SIRPH, SIRPHFA, and SIRPHIA), and states two problems we would like to solve about them. Recall that deterministic mathematical epidemiology has one basic general law, the R0 alternative of [52, 51], which states that the local stability condition of the disease free equilibrium may be expressed as R0 < 1, where R0 is the famous basic reproduction number, which also plays a major role in the theory of branching processes. The literature suggests that it is impossible to find general laws concerning the endemic points. However, it is quite common that 1. when R0 > 1, there exists a unique fixed endemic point, and 2. the endemic point is locally stable when R0 > 1. One would like to establish these properties for a large class of realistic epidemic models (and we do not include here epidemics without casualties). We have introduced in [7, 5] a simple, but broad class of SIRPH models with varying population, with the express purpose of establishing for these processes the two properties above. Since that seemed still hard, we have introduced a further class of SIRPHFA models, which may be interpreted as approximations for the SIRPH models, and which includes simpler models typically studied in the literature (with constant population, without loss of immunity, etc.). The goal of our paper is to draw attention to the two open problems above, for the SIRPH, SIRPHFA, and also for a second, more refined intermediate approximation SIRPHIA. We illustrate the current status quo by presenting new results on a generalization of the SAIRS epidemic model of [44, 40].
NebulaI A General Framework for Collaboratively Training Deep Learning Models on LowBandwidth Cloud Clusters ; The evergrowing model size and scale of compute have attracted increasing interests in training deep learning models over multiple nodes. However, when it comes to training on cloud clusters, especially across remote clusters, huge challenges are faced. In this work, we introduce a general framework, NebulaI, for collaboratively training deep learning models over remote heterogeneous clusters, the connections between which are lowbandwidth wide area networks WANs. We took natural language processing NLP as an example to show how NebulaI works in different training phases that include a pretraining a multilingual language model using two remote clusters; and b finetuning a machine translation model using knowledge distilled from pretrained models, which run through the most popular paradigm of recent deep learning. To balance the accuracy and communication efficiency, in NebulaI, parameterefficient training strategies, hybrid parallel computing methods and adaptive communication acceleration techniques are jointly applied. Meanwhile, security strategies are employed to guarantee the safety, reliability and privacy in intracluster computation and intercluster communication. NebulaI is implemented with the PaddlePaddle deep learning framework, which can support collaborative training over heterogeneous hardware, e.g. GPU and NPU. Experiments demonstrate that the proposed framework could substantially maximize the training efficiency while preserving satisfactory NLP performance. By using NebulaI, users can run largescale training tasks over cloud clusters with minimum developments, and the utility of existed large pretrained models could be further promoted. We also introduced new stateoftheart results on crosslingual natural language inference tasks, which are generated based upon a novel learning framework and NebulaI.
Models of human preference for learning reward functions ; The utility of reinforcement learning is limited by the alignment of reward functions with the interests of human stakeholders. One promising method for alignment is to learn the reward function from humangenerated preferences between pairs of trajectory segments, a type of reinforcement learning from human feedback (RLHF). These human preferences are typically assumed to be informed solely by partial return, the sum of rewards along each segment. We find this assumption to be flawed and propose modeling human preferences instead as informed by each segment's regret, a measure of a segment's deviation from optimal decisionmaking. Given infinitely many preferences generated according to regret, we prove that we can identify a reward function equivalent to the reward function that generated those preferences, and we prove that the previous partial return model lacks this identifiability property in multiple contexts. We empirically show that our proposed regret preference model outperforms the partial return preference model with finite training data in otherwise the same setting. Additionally, we find that our proposed regret preference model better predicts real human preferences and also learns reward functions from these preferences that lead to policies that are better humanaligned. Overall, this work establishes that the choice of preference model is impactful, and our proposed regret preference model provides an improvement upon a core assumption of recent research. We have open sourced our experimental code, the human preferences dataset we gathered, and our training and preference elicitation interfaces for gathering such a dataset.
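The two preference models contrasted in this abstract can be sketched as Bradley-Terry-style choice probabilities that differ only in the segment statistic they use. The code below is my own simplification: the regret proxy and the toy segments are illustrative, not the paper's exact definitions.

```python
# A schematic comparison (my own simplification) of the two preference models
# discussed in the abstract. Both map a pair of segments to the probability
# that a human prefers segment 1, via a logistic (Bradley-Terry style) link;
# they differ in the statistic each segment is scored by.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def partial_return(rewards):
    """Sum of rewards along a segment."""
    return float(np.sum(rewards))

def regret(rewards, optimal_values):
    """Deviation from optimal decision-making; here a simple proxy:
    how far the segment falls short of the optimal value achievable at
    each step. `optimal_values` is assumed given (e.g., from V*)."""
    return float(np.sum(optimal_values) - np.sum(rewards))

def pref_prob_partial_return(seg1, seg2):
    return sigmoid(partial_return(seg1["r"]) - partial_return(seg2["r"]))

def pref_prob_regret(seg1, seg2):
    # Lower regret means more preferred, hence the sign flip.
    return sigmoid(regret(seg2["r"], seg2["v"]) - regret(seg1["r"], seg1["v"]))

seg_a = {"r": [0.0, 1.0, 0.0], "v": [0.5, 1.0, 0.5]}
seg_b = {"r": [1.0, 0.0, 0.0], "v": [2.0, 2.0, 2.0]}
print(pref_prob_partial_return(seg_a, seg_b), pref_prob_regret(seg_a, seg_b))
```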
ModelBased Disturbance Estimation for a FiberReinforced Soft Manipulator using Orientation Sensing ; For soft robots to work effectively in humancentered environments, they need to be able to estimate their state and external interactions based on proprioceptive sensors. Estimating disturbances allows a soft robot to perform desirable force control. Even in the case of rigid manipulators, force estimation at the endeffector is seen as a nontrivial problem. And indeed, other current approaches to address this challenge have shortcomings that prevent their general application. They are often based on simplified soft dynamic models, such as the ones relying on a piecewise constant curvature PCC approximation or matched rigidbody models that do not represent enough details of the problem. Thus, the applications needed for complex humanrobot interaction can not be built. Finite element methods FEM allow for predictions of soft robot dynamics in a more generic fashion. Here, using the soft robot modeling capabilities of the framework SOFA, we build a detailed FEM model of a multisegment soft continuum robotic arm composed of compliant deformable materials and fiberreinforced pressurized actuation chambers with a model for sensors that provide orientation output. This model is used to establish a state observer for the manipulator. Model parameters were calibrated to match imperfections of the manual fabrication process using physical experiments. We then solve a quadratic programming inverse dynamics problem to compute the components of external force that explain the pose error. Our experiments show an average force estimation error of around 1.2. As the methods proposed are generic, these results are encouraging for the task of building soft robots exhibiting complex, reactive, sensorbased behavior that can be deployed in humancentered environments.
Modeling Continuous Time Sequences with Intermittent Observations using Marked Temporal Point Processes ; A large fraction of data generated via human activities such as online purchases, health records, spatial mobility etc. can be represented as a sequence of events over continuous time. Training deep learning models on these continuoustime event sequences is a nontrivial task as it involves modeling the everincreasing event timestamps, interevent time gaps, event types, and the influences between different events within and across different sequences. In recent years neural enhancements to marked temporal point processes MTPP have emerged as a powerful framework to model the underlying generative mechanism of asynchronous events localized in continuous time. However, most existing models and inference methods in the MTPP framework consider only the complete observation scenario, i.e. the event sequence being modeled is completely observed with no missing events, an ideal setting that is rarely applicable in realworld applications. A recent line of work which considers missing events while training MTPP utilizes supervised learning techniques that require additional knowledge of a missing or observed label for each event in a sequence, which further restricts its practicability as in several scenarios the details of missing events are not known a priori. In this work, we provide a novel unsupervised model and inference method for learning MTPP in the presence of event sequences with missing events. Specifically, we first model the generative processes of observed events and missing events using two MTPPs, where the missing events are represented as latent random variables. Then, we devise an unsupervised training method that jointly learns both MTPPs by means of variational inference. Such a formulation can effectively impute the missing data among the observed events and can identify the optimal position of missing events in a sequence.
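For readers unfamiliar with MTPPs, the snippet below shows the standard log-likelihood of a fully observed event sequence under a simple exponentially decaying intensity. It is a generic building block, included only to indicate where the paper's latent-missing-event machinery would attach; the kernel and parameter values are arbitrary.

```python
# A generic building block (not the paper's model): the log-likelihood of an
# observed event sequence under a temporal point process with a simple
# exponentially decaying (Hawkes-style) intensity.
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.5, beta=1.0):
    past = event_times[event_times < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def mtpp_log_likelihood(event_times, T, mu=0.2, alpha=0.5, beta=1.0):
    """log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt."""
    log_term = sum(np.log(hawkes_intensity(t, event_times, mu, alpha, beta))
                   for t in event_times)
    # Closed-form compensator for the exponential kernel.
    compensator = mu * T + (alpha / beta) * np.sum(
        1.0 - np.exp(-beta * (T - event_times)))
    return log_term - compensator

times = np.array([0.5, 1.3, 1.4, 2.7, 4.0])
print(mtpp_log_likelihood(times, T=5.0))
```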
The bright side of the light curve a general photometric model of nontransiting exorings ; Rings around exoplanets exorings are one of the most expected discoveries in exoplanetary research. There is an increasing number of theoretical and observational efforts for detecting exorings, but none of them have succeeded yet. Most of those methods focus on the photometric signatures of exorings during transits, whereas less attention has been paid to light diffusely reflected what we denote here as the bright side of the light curve. This is particularly important when we cannot detect the typical stellar flux drop produced by transiting exoplanets. Here, we endeavour to develop a general method to model the variations on the light curves of both ringed nontransiting and transiting exoplanets. Our model dubbed as Pryngles simulates the complex interaction of luminous, opaque, and semitransparent objects in planetary systems, discretizing their surface with small circular plane discs that resemble sequins or spangles. We perform several numerical experiments with this model, and show its incredible potential to describe the light curve of complex systems under various orbital, planetary, and observational configurations of planets, moons, rings, or discs. As our model uses a very general approach, we can capture effects like shadows or planetaryring shine, and since the model is also modular we can easily integrate arbitrarily complex physics of planetary light scattering. A comparison against existing tools and analytical models of reflected light reveals that our model, despite its novel features, reliably reproduces light curves under common circumstances. Pryngles source code is written in PYTHON and made publicly available.
Understanding intraday price formation process by agentbased financial market simulation calibrating the extended chiarella model ; This article presents XGBChiarella, a powerful new approach for deploying agentbased models to generate realistic intraday artificial financial price data. This approach is based on agentbased models, calibrated by XGBoost machine learning surrogate. Following the Extended Chiarella model, three types of trading agents are introduced in this agentbased model fundamental traders, momentum traders, and noise traders. In particular, XGBChiarella focuses on configuring the simulation to accurately reflect real market behaviours. Instead of using the original ExpectationMaximisation algorithm for parameter estimation, the agentbased Extended Chiarella model is calibrated using XGBoost machine learning surrogate. It is shown that the machine learning surrogate learned in the proposed method is an accurate proxy of the true agentbased market simulation. The proposed calibration method is superior to the original ExpectationMaximisation parameter estimation in terms of the distance between historical and simulated stylised facts. With the same underlying model, the proposed methodology is capable of generating realistic price time series in various stocks listed at three different exchanges, which indicates the universality of intraday price formation process. For the time scale minutes chosen in this paper, one agent per category is shown to be sufficient to capture the intraday price formation process. The proposed XGBChiarella approach provides insights that the price formation process is comprised of the interactions between momentum traders, fundamental traders, and noise traders. It can also be used to enhance risk management by practitioners.
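A high-level sketch of the surrogate-assisted calibration loop described above might look as follows; `run_simulation` and `stylised_fact_distance` are placeholders for the Extended Chiarella simulator and the stylised-fact comparison, and the parameter ranges are invented for illustration.

```python
# A sketch (my reading of the abstract, not the authors' code) of surrogate-
# assisted calibration: learn an XGBoost regressor mapping agent-based-model
# parameters to a distance between simulated and historical stylised facts,
# then search the surrogate for low-distance parameters.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

def run_simulation(params):
    # Placeholder for the Extended Chiarella agent-based simulator.
    return rng.normal(size=1000)

def stylised_fact_distance(simulated_returns):
    # Placeholder distance between simulated and historical stylised facts
    # (e.g., kurtosis, autocorrelation of absolute returns, ...).
    return abs(float(np.var(simulated_returns)) - 1.0)

# 1) Evaluate the expensive simulator on a design of parameter samples.
param_samples = rng.uniform(0.0, 1.0, size=(200, 3))
distances = np.array([stylised_fact_distance(run_simulation(p))
                      for p in param_samples])

# 2) Fit the surrogate, then 3) pick the candidate the surrogate likes best.
surrogate = XGBRegressor(n_estimators=200, max_depth=4).fit(param_samples, distances)
candidates = rng.uniform(0.0, 1.0, size=(10000, 3))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("calibrated parameters (surrogate argmin):", best)
```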
Anomaly detection optimization using big data and deep learning to reduce falsepositive ; The anomalybased Intrusion Detection System IDS has been a hot research topic because of its ability to detect new threats rather than only the memorized signature threats of a signaturebased IDS, especially after the availability of advanced technologies that increase the number of hacking tools and the risk impact of an attack. The problem of any anomalybased model is its high falsepositive rate, which is the reason why anomaly IDS is not commonly applied in practice. This is because anomalybased models classify an unseen pattern as a threat even when it may be normal but simply not included in the training dataset. This type of problem is called overfitting, where the model is not able to generalize. Optimizing anomalybased models by having a big training dataset that includes all possible normal cases may be an optimal solution, but it cannot be applied in practice. Although we can increase the number of training samples to include many more normal cases, we still need a model with a greater ability to generalize. In this research paper, we propose applying a deep model instead of traditional models because it has a greater ability to generalize, so we obtain fewer false positives by using big data and a deep model. We made a comparison between machine learning and deep learning algorithms in the optimization of anomalybased IDS by decreasing the falsepositive rate. We did an experiment on the NSLKDD benchmark and compared our results with one of the most widely used classifiers of traditional learning in IDS optimization. The experiment shows a 10% lower falsepositive rate when using deep learning instead of traditional learning.
Multivariate Generalized Linear Mixed Models for Count Data ; Univariate regression models have rich literature for counting data. However, this is not the case for multivariate count data. Therefore, we present the Multivariate Generalized Linear Mixed Models framework that deals with a multivariate set of responses, measuring the correlation between them through random effects that follows a multivariate normal distribution. This model is based on a GLMM with a random intercept and the estimation process remains the same as a standard GLMM with random effects integrated out via Laplace approximation. We efficiently implemented this model through the TMB package available in R. We used Poisson, negative binomial NB, and COMPoisson distributions. To assess the estimator properties, we conducted a simulation study considering four different sample sizes and three different correlation values for each distribution. We achieved unbiased and consistent estimators for Poisson and NB distributions; for COMPoisson estimators were consistent, but biased, especially for dispersion, variance, and correlation parameter estimators. These models were applied to two datasets. The first concerns a sample from 30 different sites collected in Australia where the number of times each one of the 41 different ant species was registered; which results in an impressive 820 variancecovariance and 41 dispersion parameters estimated simultaneously, let alone the regression parameters. The second is from the Australia Health Survey with 5 response variables and 5190 respondents. These datasets can be considered overdispersed by the generalized dispersion index. The COMPoisson model overcame the other two competitors considering three goodnessoffit indexes. Therefore, the proposed model is capable of dealing with multivariate count data, and measuring any kind of correlation between them taking into account the effects of the covariates.
ClimaX A foundation model for weather and climate ; Most stateoftheart approaches for weather and climate modeling are based on physicsinformed numerical models of the atmosphere. These approaches aim to model the nonlinear dynamics and complex interactions between multiple variables, which are challenging to approximate. Additionally, many such numerical models are computationally intensive, especially when modeling the atmospheric phenomenon at a finegrained spatial and temporal resolution. Recent datadriven approaches based on machine learning instead aim to directly solve a downstream forecasting or projection task by learning a datadriven functional mapping using deep neural networks. However, these networks are trained using curated and homogeneous climate datasets for specific spatiotemporal tasks, and thus lack the generality of numerical models. We develop and demonstrate ClimaX, a flexible and generalizable deep learning model for weather and climate science that can be trained using heterogeneous datasets spanning different variables, spatiotemporal coverage, and physical groundings. ClimaX extends the Transformer architecture with novel encoding and aggregation blocks that allow effective use of available compute while maintaining general utility. ClimaX is pretrained with a selfsupervised learning objective on climate datasets derived from CMIP6. The pretrained ClimaX can then be finetuned to address a breadth of climate and weather tasks, including those that involve atmospheric variables and spatiotemporal scales unseen during pretraining. Compared to existing datadriven baselines, we show that this generality in ClimaX results in superior performance on benchmarks for weather forecasting and climate projections, even when pretrained at lower resolutions and compute budgets. The source code is available at https://github.com/microsoft/ClimaX.
Stabilized training of joint energybased models and their practical applications ; The recently proposed Joint Energybased Model (JEM) interprets a discriminatively trained classifier p(y|x) as an energy model, which is also trained as a generative model describing the distribution of the input observations p(x). The JEM training relies on positive examples (i.e. examples from the training data set) as well as on negative examples, which are samples from the modeled distribution p(x) generated by means of Stochastic Gradient Langevin Dynamics (SGLD). Unfortunately, SGLD often fails to deliver negative samples of sufficient quality during the standard JEM training, which causes a very unbalanced contribution from the positive and negative examples when calculating gradients for JEM updates. As a consequence, the standard JEM training is quite unstable, requiring careful tuning of hyperparameters and frequent restarts when the training starts diverging. This makes it difficult to apply JEM to different neural network architectures, modalities, and tasks. In this work, we propose a training procedure that stabilizes SGLDbased JEM training (ST-JEM) by balancing the contribution from the positive and negative examples. We also propose to add an additional regularization term to the training objective, the mutual information (MI) between the input observations x and output labels y, which encourages the JEM classifier to make more certain decisions about output labels. We demonstrate the effectiveness of our approach on the CIFAR10 and CIFAR100 tasks. We also consider the task of classifying phonemes in a speech signal, for which we were not able to train JEM without the proposed stabilization. We show that convincing speech can be generated from the trained model. Alternatively, corrupted speech can be denoised by bringing it closer to the modeled speech distribution using a few SGLD iterations. We also propose and discuss additional applications of the trained model.
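For context, the SGLD negative-sampling step that this abstract refers to is typically implemented along the following lines; the toy classifier, step size, and noise scale below are illustrative choices, not the paper's settings.

```python
# A minimal PyTorch sketch of SGLD negative sampling for JEM-style training:
# the energy is -logsumexp of the classifier logits, so p(x) is proportional
# to exp(-E(x)), and negatives are drawn by noisy gradient descent on E(x).
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 10))

def energy(x):
    # E(x) = -logsumexp_y f(x)[y]
    return -torch.logsumexp(classifier(x), dim=-1)

def sgld_negatives(batch_size, n_steps=20, step_size=1.0, noise_std=0.01):
    x = torch.randn(batch_size, 2, requires_grad=True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = (x - step_size * grad
             + noise_std * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

negatives = sgld_negatives(batch_size=16)
print(negatives.shape)  # torch.Size([16, 2])
```

The instability the paper addresses arises when these negatives are poor, so that the positive and negative gradient contributions become badly unbalanced.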
Exploring contrast generalisation in deep learningbased brain MRItoCT synthesis ; Background: Synthetic computed tomography (sCT) has been proposed and increasingly clinically adopted to enable magnetic resonance imaging (MRI)-based radiotherapy. Deep learning (DL) has recently demonstrated the ability to generate accurate sCT from fixed MRI acquisitions. However, MRI protocols may change over time or differ between centres, resulting in lowquality sCT due to poor model generalisation. Purpose: investigating domain randomisation (DR) to increase the generalisation of a DL model for brain sCT generation. Methods: CT and corresponding T1weighted MRI with/without contrast, T2weighted, and FLAIR MRI from 95 patients undergoing RT were collected, considering FLAIR as the unseen sequence on which to investigate generalisation. A "Baseline" generative adversarial network was trained with/without the FLAIR sequence to test how a model performs without DR. Image similarity and accuracy of sCTbased dose plans were assessed against CT to select the bestperforming DR approach against the Baseline. Results: The Baseline model had the poorest performance on FLAIR, with mean absolute error (MAE) = 106 +/- 20.7 HU (mean +/- sigma). Performance on FLAIR significantly improved for the DR model with MAE = 99.0 +/- 14.9 HU, but still inferior to the performance of the Baseline+FLAIR model (MAE = 72.6 +/- 10.1 HU). Similarly, an improvement in gamma pass rate was obtained for DR vs Baseline. Conclusions: DR improved image similarity and dose accuracy on the unseen sequence compared to training only on acquired MRI. DR makes the model more robust, reducing the need for retraining when applying a model on sequences unseen and unavailable for retraining.
Capabilities of GPT4 on Medical Challenge Problems ; Large language models LLMs have demonstrated remarkable capabilities in natural language understanding and generation across various domains, including medicine. We present a comprehensive evaluation of GPT4, a stateoftheart LLM, on medical competency examinations and benchmark datasets. GPT4 is a generalpurpose model that is not specialized for medical problems through training or engineered to solve clinical tasks. Our analysis covers two sets of official practice materials for the USMLE, a threestep examination program used to assess clinical competency and grant licensure in the United States. We also evaluate performance on the MultiMedQA suite of benchmark datasets. Beyond measuring model performance, experiments were conducted to investigate the influence of test questions containing both text and images on model performance, probe for memorization of content during training, and study probability calibration, which is of critical importance in highstakes applications like medicine. Our results show that GPT4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier generalpurpose models GPT3.5 as well as models specifically finetuned on medical knowledge MedPaLM, a prompttuned version of FlanPaLM 540B. In addition, GPT4 is significantly better calibrated than GPT3.5, demonstrating a muchimproved ability to predict the likelihood that its answers are correct. We also explore the behavior of the model qualitatively through a case study that shows the ability of GPT4 to explain medical reasoning, personalize explanations to students, and interactively craft new counterfactual scenarios around a medical case. Implications of the findings are discussed for potential uses of GPT4 in medical education, assessment, and clinical practice, with appropriate attention to challenges of accuracy and safety.
TinyStories How Small Can Language Models Be and Still Speak Coherent English ; Language models (LMs) are powerful tools for natural language processing, but they often struggle to produce coherent and fluent text when they are small. Models with around 125M parameters such as GPTNeo small or GPT2 small can rarely generate coherent and consistent English text beyond a few words even after extensive training. This raises the question of whether the emergence of the ability to produce coherent English text only occurs at larger scales (with hundreds of millions of parameters or more) and complex architectures (with many layers of global attention). In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that typical 3 to 4yearolds usually understand, generated by GPT3.5 and GPT4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the stateoftheart models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities. We also introduce a new paradigm for the evaluation of language models: we suggest a framework which uses GPT4 to grade the content generated by these models as if those were stories written by students and graded by a human teacher. This new paradigm overcomes the flaws of standard benchmarks, which often require the model's output to be very structured, and moreover provides a multidimensional score for the model, providing scores for different capabilities such as grammar, creativity and consistency. We hope that TinyStories can facilitate the development, analysis and research of LMs, especially for lowresource or specialized domains, and shed light on the emergence of language capabilities in LMs.
Unifying Machine Vision via Counterfactual World Modeling ; Leading approaches in machine vision employ different architectures for different tasks, trained on costly taskspecific labeled datasets. This complexity has held back progress in areas, such as robotics, where robust taskgeneral perception remains a bottleneck. In contrast, foundation models of natural language have shown how large pretrained neural networks can provide zeroshot solutions to a broad spectrum of apparently distinct tasks. Here we introduce Counterfactual World Modeling CWM, a framework for constructing a visual foundation model a unified, unsupervised network that can be prompted to perform a wide variety of visual computations. CWM has two key components, which resolve the core issues that have hindered application of the foundation model concept to vision. The first is structured masking, a generalization of masked prediction methods that encourages a prediction model to capture the lowdimensional structure in visual data. The model thereby factors the key physical components of a scene and exposes an interface to them via small sets of visual tokens. This in turn enables CWM's second main idea counterfactual prompting the observation that many apparently distinct visual representations can be computed, in a zeroshot manner, by comparing the prediction model's output on real inputs versus slightly modified counterfactual inputs. We show that CWM generates highquality readouts on realworld images and videos for a diversity of tasks, including estimation of keypoints, optical flow, occlusions, object segments, and relative depth. Taken together, our results show that CWM is a promising path to unifying the manifold strands of machine vision in a conceptually simple foundation.
Amortized Variational Inference When and Why ; Amortized variational inference AVI is a method for approximating the intractable posterior distributions that arise in probabilistic models. The defining feature of AVI is that it learns a global inference function that maps each observation to its local latent variable's approximate posterior. This stands in contrast to the more classical factorized or meanfield variational inference FVI, which directly learns the parameters of the approximating distribution for each latent variable. In deep generative models, AVI is used as a computational trick to speed up inference for local latent variables. In this paper, we study AVI as a general alternative to FVI for approximate posterior inference. AVI cannot produce an approximation with a lower KullbackLeibler divergence than FVI's optimal solution, because the amortized family is a subset of the factorized family. Thus a central theoretical problem is to characterize when AVI still attains FVI's optimal solution. We derive conditions on both the model and the inference function under which AVI can theoretically achieve FVI's optimum. We show that for a broad class of hierarchical models, including deep generative models, it is possible to close the gap between AVI and FVI. Further, for an even broader class of models, we establish when and how to expand the domain of the inference function to make amortization a feasible strategy. Finally, we prove that for certain models including hidden Markov models and Gaussian processes AVI cannot match FVI's solution, no matter how expressive the inference function is. We also study AVI empirically. On several examples, we corroborate our theoretical results and investigate the performance of AVI when varying the complexity of the inference function. When the gap between AVI and FVI can be closed, we find that the required complexity of the function need not scale with the number of observations, and that AVI often converges faster than FVI.
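The contrast between the two variational families can be made explicit in a few lines: factorized (mean-field) VI keeps free per-observation parameters, while amortized VI constrains them to be the output of a shared inference network. The model, dimensions, and encoder below are arbitrary illustrations, not anything from the paper.

```python
# A toy contrast (my own illustration) between the two variational families:
# factorized/mean-field VI keeps free per-observation variational parameters,
# whereas amortized VI replaces them with the output of a shared inference
# network applied to each observation.
import torch
import torch.nn as nn

n_obs, x_dim, z_dim = 100, 5, 2
x = torch.randn(n_obs, x_dim)

# Factorized VI: one (mu, log_sigma) pair per observation, optimized directly.
fvi_mu = nn.Parameter(torch.zeros(n_obs, z_dim))
fvi_log_sigma = nn.Parameter(torch.zeros(n_obs, z_dim))

# Amortized VI: a single inference network shared across all observations.
encoder = nn.Sequential(nn.Linear(x_dim, 32), nn.Tanh(), nn.Linear(32, 2 * z_dim))

def avi_params(x):
    out = encoder(x)
    return out[:, :z_dim], out[:, z_dim:]          # mu, log_sigma

avi_mu, avi_log_sigma = avi_params(x)
# Because (avi_mu, avi_log_sigma) are constrained to be a function of x, the
# amortized family is a subset of the factorized one -- the source of the gap
# the paper characterizes.
print(fvi_mu.shape, avi_mu.shape)
```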
Occupancy Grid Map to Pose Graphbased Map Robust BIMbased 2DLiDAR Localization for Lifelong Indoor Navigation in Changing and Dynamic Environments ; Several studies rely on the de facto standard Adaptive Monte Carlo Localization AMCL method to localize a robot in an Occupancy Grid Map OGM extracted from a building information model BIM model. However, most of these studies assume that the BIM model precisely represents the real world, which is rarely true. Discrepancies between the reference BIM model and the real world ScanBIM deviations are not only due to furniture or clutter but also the usual asplanned and asbuilt deviations that exist with any model created in the design phase. These deviations affect the accuracy of AMCL drastically. This paper proposes an opensource method to generate appropriate Pose Graphbased maps from BIM models for robust 2DLiDAR localization in changing and dynamic environments. First, 2D OGMs are automatically generated from complex BIM models. These OGMs only represent structural elements allowing indoor autonomous robot navigation. Then, an efficient technique converts these 2D OGMs into Pose Graphbased maps enabling more accurate robot pose tracking. Finally, we leverage the different map representations for accurate, robust localization with a combination of stateoftheart algorithms. Moreover, we provide a quantitative comparison of various stateoftheart localization algorithms in three simulated scenarios with varying levels of ScanBIM deviations and dynamic agents. More precisely, we compare two Particle Filter PF algorithms AMCL and General Monte Carlo Localization GMCL; and two Graphbased Localization GBL methods Google's Cartographer and SLAM Toolbox, solving the global localization and pose tracking problems. The numerous experiments demonstrate that the proposed method contributes to a robust localization with an asdesigned BIM model or a sparse OGM in changing and dynamic environments, outperforming the conventional AMCL in accuracy and robustness.
Extended Inflation with a CurvatureCoupled Inflaton ; We examine extended inflation models enhanced by the addition of a coupling between the inflaton field and the spacetime curvature. We examine two types of model, where the underlying inflaton potential takes on secondorder and firstorder form respectively. One aim is to provide models which satisfy the solar system constraints on the BransDicke parameter omega. This constraint has proven very problematic in previous extended inflation models, and we find circumstances where it can be successfully evaded, though the constraint must be carefully assessed in our model and can be much stronger than the usual omega > 500. In the simplest versions of the model, one may avoid the need to introduce a mass for the BransDicke field in order to ensure that it takes on the correct value at the present epoch, as seems to be required in hyperextended inflation. We also briefly discuss aspects of the formation of topological defects in the inflaton field itself.
A Family of Models for Spherical Stellar Systems ; We describe a oneparameter family of models of stable spherical stellar systems in which the phasespace distribution function depends only on energy. The models have similar density profiles in their outer parts (rho propto r^-4) and central powerlaw density cusps, rho propto r^-(3-eta), 0 < eta <= 3. The family contains the Jaffe (1983) and Hernquist (1990) models as special cases. We evaluate the surface brightness profile, the lineofsight velocity dispersion profile, and the distribution function, and discuss analogs of King's corefitting formula for determining masstolight ratio. We also generalize the models to a twoparameter family, in which the galaxy contains a central black hole; the second parameter is the mass of the black hole. Our models can be used to estimate the detectability of central black holes and the velocitydispersion profiles of galaxies that contain central cusps, with or without a central black hole.
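For reference, this family is commonly written in the equivalent gamma-model parameterization (gamma = 3 - eta), with an r^-4 fall-off at large radii and an r^-gamma central cusp; gamma = 2 recovers the Jaffe model and gamma = 1 the Hernquist model. The snippet below evaluates that textbook form and should be read as a standard expression rather than a quotation from the paper.

```python
# The one-parameter family in the "gamma-model" form (gamma = 3 - eta):
#   rho(r) = (3 - gamma) * M * a / (4 pi) * r^(-gamma) * (r + a)^(gamma - 4),
# a common textbook normalization assumed here, not taken from the abstract.
import numpy as np

def gamma_model_density(r, M=1.0, a=1.0, gamma=1.0):
    return (3.0 - gamma) * M * a / (4.0 * np.pi) * r**(-gamma) * (r + a)**(gamma - 4.0)

r = np.logspace(-3, 2, 6)
print("Hernquist-like (gamma=1):", gamma_model_density(r, gamma=1.0))
print("Jaffe-like     (gamma=2):", gamma_model_density(r, gamma=2.0))
```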
Spectral Properties of Blast Wave Models of GammaRay Burst Sources ; We calculate the spectrum of blast wave models of gammaray burst sources, for various assumptions about the magnetic field density and the relativistic particle acceleration efficiency. For a range of physically plausible models we find that the radiation efficiency is high, and leads to nonthermal spectra with breaks at various energies comparable to those observed in the gammaray range. Radiation is also predicted at other wavebands, in particular at Xray, opticalUV and GeVTeV energies. We discuss the spectra as a function of duration for three basic types of models, and for cosmological, halo and galactic disk distances. We also evaluate the gammaray fluences and the spectral characteristics for a range of external densities. Impulsive burst models at cosmological distances can satisfy the conventional Xray paucity constraint SxSgamma siml few percent over a wide range of durations, but galactic models can do so only for bursts shorter than a few seconds, unless additional assumptions are made. The emissivity is generally larger for bursts in a denser external environment, with the efficiency increasing up to the point where all the energy input is radiated away.
Genus statistics for structure formation with topological defects ; We study the efficiency of genus statistics in differentiating between different models of structure formation. Simple models which reproduce the salient features of the structure seeded by topological defects are examined. We consider accretion onto static point masses, modeling slowmoving cosmic string loops or other primordial pointlike sources. Filamentary structures and wakes are considered as models of the structures seeded by slow and fast moving string, respectively. Comparison is made with predictions of genus statistics for Gaussian fluctuations and with genus curves obtained by the CfA survey. A generic class of density models with wakes and filaments is found to provide results comparable or better than Gaussian models for this suite of tests.
Complete power spectrum for an induced gravity open inflation model ; We study the phenomenological constraints on a recently proposed model of open inflation in the context of induced gravity. The main interest of this model is the relatively small number of parameters, which may be constrained by many different types of observation. We evaluate the complete spectrum of density perturbations, which contains continuum subcurvature modes, a discrete super curvature mode, and a mode associated with fluctuations in the bubble wall. From these, we compute the angular power spectrum of temperature fluctuations in the microwave background, and derive bounds on the parameters of the model so that the predicted spectrum is compatible with the observed anisotropy of the microwave background and with largescale structure observations. We analyze the matter era and the approach of the model to general relativity. The model passes all existing constraints.
Opacity effects on the solar interior. I. Solar structure ; Despite recent major advances, the opacity remains a source of substantial uncertainty in the calculation of solar models, and hence of solar oscillation frequencies. Hence it is of substantial interest to investigate the sensitivity of solar structure to changes in the opacity. Furthermore, we may hope from the precise helioseismic inferences of solar structure to obtain information about possible corrections to the opacities used in the model calculation. Here we carry out detailed calculations of the influence on solar models of changes in the opacity, including also evolutionary effects. We find that over the relevant range the response of the model is approximately linear in the opacity change, allowing the introduction of opacity kernels relating a general opacity change to the corresponding model changes. Changes in the convection zone can be characterized entirely by the change in the initial composition and mixing length required to calibrate the model.
An Extendable Galaxy Number Count Model ; I review galaxy number count models and present ncmod, an extendable and general purpose model for comparing and interpreting the results of field galaxy survey data. I develop techniques and software for converting the results of a survey done in one filter into another filter, for direct comparison with other surveys. Comparison of the data from surveys which differ greatly in wavelength coverage or sensitivity is of necessity modeldependent, but comparison between similar surveys can be done in a relatively modelindependent way. I extrapolate existing number counts into the ultraviolet and thermal infrared. The model is used to predict the results of future space missions, including STIS and NICMOS on HST, ISO, SIRTF and NGST.
An excursion set model for the distribution of dark matter and dark matter haloes ; A model of the gravitationally evolved dark matter distribution, in the Eulerian space, is developed. It is a simple extension of the excursion set model that is commonly used to estimate the mass function of collapsed dark matter haloes. In addition to describing the evolution of the dark matter itself, the model allows one to describe the evolution of the Eulerian space distribution of the haloes. It can also be used to describe density profiles, on scales larger than the virial radius, of these haloes, and to quantify the way in which matter flows in and out of Eulerian cells. When the initial Lagrangian space distribution is white noise Gaussian, the model suggests that the Inverse Gaussian distribution should provide a reasonably good approximation to the evolved Eulerian density field, in agreement with numerical simulations. Application of this model to clustering from more general Gaussian initial conditions is discussed at the end.
Evidence for a Massive Black Hole in the S0 Galaxy NGC 4342 ; We have constructed axisymmetric dynamical models of the edgeon S0 galaxy NGC 4342: simple twointegral Jeans models as well as fully general, threeintegral models using a modified version of Schwarzschild's orbit superposition technique. The twointegral models suggest a black hole (BH) of 3 or 6 x 10^8 Msun, depending on the data set. The threeintegral models can fit all groundbased and HST data simultaneously, but only when a central BH is included. Models without a BH are ruled out at better than the 99.73% confidence level. We determine a BH mass of 3.0 (+1.7, -1.0) x 10^8 Msun. This corresponds to 2.6% of the bulge mass, making NGC 4342 one of the galaxies with the highest BH mass to bulge mass ratio currently known.
On the Dynamical Foundations of Alpha Disks ; The dynamical foundations of alpha disk models are described. At the heart of the viscous formalism of accretion disk models are correlations in the fluctuating components of the disk velocity, magnetic field, and gravitational potential. We relate these correlations to the large scale mean flow dynamics used in phenomenological viscous disk models. MHD turbulence readily lends itself to the alpha formalism, but transport by selfgravity does not. Nonlocal transport is an intrinsic property of turbulent selfgravitating disks, which in general cannot be captured by an alpha model. Local energy dissipation and alphalike behavior can be reestablished if the pattern speeds associated with the amplitudes of an azimuthal Fourier decomposition of the turbulence are everywhere close to the local rotation frequency. In this situation, global wave transport must be absent. Shearing box simulations, which employ boundary conditions forcing local behavior, are probably not an adequate tool for modeling the behavior of selfgravitating disks. As a matter of principle, it is possible that disks which hover near the edge of gravitational stability may behave in accord with a local alpha model, but global simulations performed to date suggest matters are not this simple.
Constraints on structure formation models from SunyaevZel'dovich Effect ; In the context of cold dark matter (CDM) cosmological models, we have simulated images of the brightness temperature fluctuations in the cosmic microwave background (CMB) sky owing to the Sunyaev-Zel'dovich (SZ) effect in a cosmological distribution of clusters. We compare the image statistics with recent ATCA limits on arcminscale CMB anisotropy. The SZ effect produces a generically nonGaussian field and we compute the variance in the simulated temperatureanisotropy images, after convolution with the ATCA beam pattern, for different cosmological models. All the models are normalised to the 4-year COBE data. We find an increase in the simulatedsky temperature variance with increase in the cosmological density parameter Omega0. A comparison with the upper limits on the sky variance set by the ATCA appears to rule out our closeduniverse model; lowOmega0 openuniverse models are preferred. The result is independent of any present day observations of sigma8.
Selfenrichment in Omega Centauri ; The origin of abundance spreads observed in omega Centauri is studied in the context of the selfenrichment scenario. Five chemical evolution models are constructed and are compared with the empirical metallicity distribution of omega Cen. After a series of simulations, it is found that none of the closedbox, outflow, or infall models can reproduce the empirical metallicity distribution of omega Cen, while a modified outflow model with a bimodal initial mass function (IMF) gives a metallicity distribution that fits closely to the empirical ones. In the modified outflow model, longlived stars are assumed to form after the first explosion of type II supernovae (SNII) in a protocloud. The modified outflow model involves gas infall at the very first chemical evolution. Thus we conclude that selfenrichment causes the abundance dispersion in omega Cen. A success of the outflow model with the bimodal IMF implies that low mass stars in a globular cluster (GC) should have formed in the gas already enriched by the first generation of SNII. This scenario, originally proposed by Cayrel (1986), can explain a lack of globular clusters with [Fe/H] < -2.2 in the Milky Way Galaxy.
New Constraints on inflation from the Cosmic Microwave Background ; The recent data from the Boomerang and MAXIMA1 balloon flights have marked the beginning of the precision era of Cosmic Microwave Background anisotropy CMB measurements. We investigate the observational constraints from the current CMB anisotropy measurements on the simplest inflation models, characterized by a single scalar field phi, in the parameter space consisting of scalar spectral index nS and tensorscalar ratio r. If we include constraints on the baryon density from big bang nucleosynthesis BBN, we show that the favored inflationary models have negligible tensor amplitude and a red'' tilt, with a best fit of nS simeq 0.93, which is consistent with the simplest smallfield'' inflation models, but rules out largefield models at the 1sigma level. Without including BBN constraints, a broader range of models are consistent with the data. The best fit assuming negligible reionization is a scaleinvariant spectrum, nS simeq 1, which includes largefield and hybrid scenarios. Largefield models such as chaotic and powerlaw inflation with tilt nS 0.9 are strongly disfavored in all cases.
Qualitative Properties of Magnetic Fields in Scalar Field Cosmology ; We study the qualitative properties of the class of spatially homogeneous Bianchi VIo cosmological models containing a perfect fluid with a linear equation of state, a scalar field with an exponential potential and a uniform cosmic magnetic field, using dynamical systems techniques. We find that all models evolve away from an expanding massless scalar field model in which the matter and the magnetic field are negligible dynamically. We also find that for a particular range of parameter values the models evolve towards the usual powerlaw inflationary model with no magnetic field and, furthermore, we conclude that inflation is not fundamentally affected by the presence of a uniform primordial magnetic field. We investigate the physical properties of the Bianchi I magnetic field models in some detail.
The Density Profile of Clusterscale Dark Matter Halos ; We measure the average gravitational shear profile of 6 massive clusters (Mvir ~ 10^15 Msun) at z ~ 0.3 out to a radius of 2 h^-1 Mpc. The measurements are fitted to a generalized NFWlike halo model rho(r) with an arbitrary r -> 0 slope alpha. The data are well fitted by such a model with a central cusp with alpha = 0.9 - 1.6 (68% confidence interval). For the standardNFW case (alpha = 1.0), we find a concentration parameter cvir that is consistent with recent predictions from highresolution CDM Nbody simulations. Our data are also well fitted by an isothermal sphere model with a softened core. For this model, our 1sigma upper limit for the core radius corresponds to a limit sigma* <= 0.1 cm^2 g^-1 on the elastic collision crosssection in a selfinteracting dark matter model.
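The "generalized NFWlike" profile with a free inner slope alpha referred to above is usually parameterized as in the snippet below (alpha = 1 recovering the standard NFW form); this is the common convention, not necessarily the paper's exact normalization.

```python
# The generalized NFW density profile with a free inner slope alpha,
# written in the usual convention (assumed here, not quoted from the paper):
#   rho(r) = rho_s / [ (r/r_s)^alpha * (1 + r/r_s)^(3 - alpha) ]
import numpy as np

def gnfw_density(r, rho_s=1.0, r_s=1.0, alpha=1.0):
    x = r / r_s
    return rho_s / (x**alpha * (1.0 + x)**(3.0 - alpha))

r = np.logspace(-2, 1, 5)
print("NFW   (alpha=1.0):", gnfw_density(r, alpha=1.0))
print("cuspy (alpha=1.5):", gnfw_density(r, alpha=1.5))
```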
Stability of rotating spherical stellar systems ; The stability of rotating isotropic spherical stellar systems is investigated by using Nbody simulations. Four spherical models with realistic density profiles are studied: one of them fits the luminosity profile of globular clusters, while the remaining three models provide good approximations to the surface brightness of elliptical galaxies. The phasespace distribution function f(E) of each one of these nonrotating models satisfies the sufficient condition for stability df/dE < 0. Different amounts of rotation are introduced in these models by changing the sign of the zcomponent of the angular momentum for a given fraction of the particles. Numerical simulations show that all these rotating models are stable to both radial and nonradial perturbations, irrespective of their degree of rotation. These results suggest that rotating isotropic spherical models with realistic density profiles might generally be stable. Furthermore, they show that spherical stellar systems can rotate very rapidly without becoming oblate.
Gravitational microlensing as a test of stellar model atmospheres ; We present calculations illustrating the potential of gravitational microlensing to discriminate between classical models of stellar surface brightness profiles and the recently computed Next Generation'' models of Hauschildt et al. These sphericallysymmetric models include a much improved treatment of molecular lines in the outer atmospheres of cool giants stars which are very typical sources in Galactic bulge microlensing events. We show that the microlensing signatures of intensively monitored point and fold caustic crossing events are readily able to distinguish between NextGen and the classical models, provided a photometric accuracy of 0.01 magnitudes is reached. This accuracy is now routinely achieved by alert networks, and hence current observations can discriminate between such model atmospheres, providing a unique insight on stellar photospheres.
Optical afterglows of short Gammaray Bursts and GRB 040924 ; Shortduration Gammaray bursts (GRBs, <= 2 s) have remained a mystery due to the lack of afterglow detection until recently. The models to interpret short GRBs invoke distinct progenitor scenarios. Here we present a generic analysis of short GRB afterglows, and calculate the optical lightcurves of short GRBs within the framework of different progenitor models. We show that all these optical afterglows are bright enough to be detected by the Ultraviolet and Optical Telescope (UVOT) on board the Swift observatory, and that different models could be distinguished with a wellmonitored lightcurve. We also model the afterglow data of the recently discovered short burst GRB 040924. We find that the limited data are consistent with a low medium-density environment, which is consistent with the preconcept of the compactstar merger progenitor model, although the models with a collapsar progenitor are not ruled out.
Dark Energy, Scalar-Tensor Gravity and Large Extra Dimensions ; We explore in detail a dilatonic scalar-tensor theory of gravity inspired by large extra dimensions, where a radion field from compact extra dimensions gives rise to quintessence in our 4-dimensional world. We show that the model can give rise to other types of cosmologies as well, some more akin to k-essence and possibly variants of phantom dark energy. In our model the field (or radius) stabilization arises from quantum corrections to the effective 4D Ricci scalar. We then show that various constraints nearly determine the model parameters, and give an example of a quintessence-type cosmology consistent with observations. We show that the upcoming SNAP experiment would easily distinguish the present model from a constant Lambda model with an equal amount of dark energy, but that the SNAP data alone will not be able to distinguish it from a Lambda model with about 5% less dark energy.
Emission-line profile modelling of structured T Tauri magnetospheres ; We present hydrogen emission-line profile models of magnetospheric accretion onto Classical T Tauri stars. The models are computed under the Sobolev approximation using the three-dimensional Monte Carlo radiative-transfer code TORUS. We have calculated four illustrative models in which the accretion flows are confined to azimuthal curtains, a geometry predicted by magnetohydrodynamical simulations. Properties of the line profile variability of our models are discussed, with reference to dynamic spectra and cross-correlation images. We find that some gross characteristics of observed line profile variability are reproduced by our models, although in general the level of variability predicted is larger than that observed. We conclude that this excessive variability probably excludes dynamical simulations that predict accretion flows with low degrees of axisymmetry.
Spherical galaxy models with a power-law logarithmic slope ; We present a new family of spherically symmetric models for the luminous components of elliptical and spiral galaxies and their dark matter haloes. Our starting point is a general expression for the logarithmic slope alpha(r) = d log rho / d log r, from which most of the cuspy models available in the literature may be derived. We then dedicate our attention to a particular set of models whose logarithmic slope is a power-law function of the radius r, investigating their dynamics in detail under the assumption of isotropy in velocity space. While basic properties such as the density profile and the gravitational potential may be expressed analytically, both the distribution function and the observable quantities (surface brightness and line-of-sight velocity dispersion) have to be evaluated numerically. We also consider the extension to anisotropic models, trying two different parameterizations. Since the model recently proposed by Navarro et al. (2004) as the best fit to their sample of numerically simulated haloes belongs to the family presented here, analytical approximations are given for the most useful quantities.
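To make the construction concrete, here is a hedged sketch that assumes a specific power-law form for the logarithmic slope, alpha(r) = -A (r/r_s)^gamma, and integrates d ln rho / d ln r to obtain an Einasto-like density profile. The functional form and all parameter names are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def rho_from_powerlaw_slope(r, rho_0, r_s, A, gamma):
    """Density profile whose logarithmic slope is a power law of radius.

    Assume alpha(r) = d ln rho / d ln r = -A * (r/r_s)**gamma.
    Integrating from r_s to r gives
        ln rho(r) - ln rho(r_s) = -(A/gamma) * [(r/r_s)**gamma - 1],
    i.e. an Einasto-like profile; rho_0 is the density at r = r_s.
    """
    x = r / r_s
    return rho_0 * np.exp(-(A / gamma) * (x**gamma - 1.0))

r = np.logspace(-2, 2, 200)
rho = rho_from_powerlaw_slope(r, rho_0=1.0, r_s=1.0, A=2.0, gamma=0.17)
slope = np.gradient(np.log(rho), np.log(r))   # numerical check of alpha(r)
print(slope[0], slope[-1])                     # shallow in the centre, steep outside
```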
Einstein-de Sitter model re-examined for the newly discovered SNe Ia ; The consistency of the Einstein-de Sitter model with the SNe Ia recently observed by the Hubble Space Telescope is examined. The model shows a reasonable fit to the observations if one takes into account the extinction of SNe light by intergalactic metallic dust ejected from the SNe explosions. Though the fit to the new data is worsened considerably compared with the earlier data, it can still be regarded as acceptable. We should wait for more accurate observations at higher redshifts, as expected from coming space missions such as SNAP and JWST, in order to rule out a model which seems to explain all the other existing observations well (some even better than the favoured LCDM model), is consistent with beautiful theoretical ideas like inflation and cold dark matter, and is not as speculative as the models of dark energy.
Cosmography, Decelerating Past, and Cosmological Models: Learning the Bayesian Way ; In this paper, using a significantly improved version of the model-independent, cosmographic approach to cosmology (John, M. V. 2004, ApJ, 614, 1), we address an important question: was there a decelerating past for the universe? To answer this, Bayesian probability theory is employed, which is the most appropriate tool for quantifying our knowledge when it changes through the acquisition of new data. The cosmographic approach helps to sort out the models in which the universe was always accelerating from those in which it decelerated for at least some time in the period of interest. The Bayesian model comparison technique is used to discriminate these rival hypotheses with the aid of recent releases of supernova data. We also attempt to provide and improve another example of Bayesian model comparison, performed between some Friedmann models, using the same data. Our conclusion, which is consistent with other approaches, is that the apparent magnitude-redshift data alone cannot discriminate these competing hypotheses. We also argue that the lessons learnt using Bayesian theory are extremely valuable to avoid frequent U-turns in cosmology.
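The model-comparison step can be illustrated with a toy Bayes-factor computation: a hypothetical one-parameter model for the distance modulus mu(z), with the evidence obtained by direct numerical integration over a flat prior. The data, the model function, and the prior ranges below are placeholders of my own, not the supernova compilation or the cosmographic expansion used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_model(z, a):
    # Purely illustrative one-parameter model for the distance modulus.
    return 5.0 * np.log10(z * (1.0 + a * z)) + 42.38

# Toy "observed" data drawn from the model itself (a_true = 0.3) with noise.
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
sigma = np.full_like(z, 0.2)
mu_obs = mu_model(z, 0.3) + rng.normal(0.0, sigma)

def log_likelihood(a):
    resid = (mu_obs - mu_model(z, a)) / sigma
    return -0.5 * np.sum(resid**2)

def evidence(a_grid):
    """Evidence under a flat prior on a_grid, by simple numerical quadrature."""
    like = np.exp([log_likelihood(a) for a in a_grid])
    prior = 1.0 / (a_grid[-1] - a_grid[0])
    return np.trapz(like * prior, a_grid)

# Bayes factor between two rival hypotheses, schematically represented here
# by disjoint prior ranges for the parameter a.
B = evidence(np.linspace(0.0, 1.0, 400)) / evidence(np.linspace(-0.9, 0.0, 400))
print("Bayes factor:", B)
```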
Observational Constraints on a Variable Dark Energy Model ; We study the effect of a phenomenological parameterized quintessence model on low-, intermediate- and high-redshift observations. At low and intermediate redshifts, we use the Gold sample of supernova Type Ia (SNIa) data and the recently observed size of the baryonic acoustic peak from the Sloan Digital Sky Survey (SDSS) to put constraints on the parameters of the quintessence model. At high redshift, the same fitting procedure is carried out using WMAP data, comparing the location of the acoustic peak with that obtained from the dark energy model. As a complementary analysis in a flat universe, we combine the results from the SNIa, CMB and SDSS data. The best-fit values for the model parameters are Omega_m = 0.27 (+0.02, -0.02) (the present matter content) and w_0 = -1.45 (+0.35, -0.60) (the dark energy equation of state). Finally, we calculate the age of the universe in this model and compare it with the ages of old stars and high-redshift objects.
Spontaneous Isotropy Breaking: A Mechanism for CMB Multipole Alignments ; We introduce a class of models in which statistical isotropy is broken spontaneously in the CMB by a nonlinear response to long-wavelength fluctuations in a mediating field. These fluctuations appear as a gradient locally and pick out a single preferred direction. The nonlinear response imprints this direction on a range of multipole moments. We consider two manifestations of isotropy breaking: additive contributions and multiplicative modulation of the intrinsic anisotropy. Since WMAP exhibits an alignment of power deficits, an additive contribution is less likely to produce the observed alignments than the usual isotropic fluctuations, a fact which we illustrate with an explicit cosmological model of long-wavelength quintessence fluctuations. This problem applies to other models involving foregrounds or background anisotropy that seek to restore power to the CMB. Additive models that account directly for the observed power exacerbate the low power of the intrinsic fluctuations. Multiplicative models can overcome these difficulties. We construct a proof-of-principle model that significantly improves the likelihood and generates stronger alignments than WMAP in 30-45% of realizations.
Possible extensions of the standard cosmological model: anisotropy, rotation, and magnetic field ; We show that the difference between the theoretically expected amplitude of the quadrupole fluctuations of the CMB and that measured by WMAP can be related to the impact of the anisotropic curvature of a homogeneous universe dominated by dark energy. In such a universe the matter expansion becomes practically isotropic just after the period of inflation, and only at small redshifts is anisotropic expansion generated again by the small curvature Omega_K = 1 - Omega_m - Omega_Lambda <= 10^-4. For such models the possible deviations from the parameters derived for the standard cosmological model are evidently negligible, but correlations of large-scale perturbations and distortions of their Gaussianity are possible. Such models are also compatible with the existence of a homogeneous magnetic field and matter rotation, which contribute to the low-ell anisotropy and can be considered as 'hidden parameters' of the model. Their influence can be observed as, for example, special correlations of small-scale fluctuations and the Faraday rotation of the CMB and of the radiation of the farthest quasars. However, both the magnetic field and matter rotation also require modifications of the simple models of isotropic inflation, and they change the evolutionary history of the early Universe.
Natural Phantom Dark Energy, Wiggling Hubble Parameter H(z) and Direct H(z) Data ; Recent direct H(z) data indicate that the Hubble parameter H(z) may wiggle with respect to z. On the other hand, the luminosity distance data of supernovae flatten the wiggles of H(z) because of the integration effect. It is expected that the fitting results can be very different in a model permitting a wiggling H(z), because the supernova data are highly degenerate with respect to such a model. As an example, natural phantom dark energy is investigated in this paper. The dynamical properties of this model are studied. The model is fitted to the direct H(z) data set and the SNLS data set, respectively, and the results are quite different, as expected. The quantum stability of this model is also briefly discussed. We find it is a viable model if we treat it as an effective theory truncated by an upper bound.
A Binary Model for the UV-upturn of Elliptical Galaxies ; The discovery in 1969 of an excess of light in the far-ultraviolet (UV) in elliptical galaxies was a major surprise. It is now clear that this UV excess (UV-upturn) is probably caused by an old population of helium-burning stars. Han et al (2002, 2003) proposed a binary model for the formation of hot subdwarfs (helium-burning stars), and the model can reproduce the observations in our Galaxy. By applying the binary model to the study of evolutionary population synthesis, we have obtained an a priori model for the UV-upturn of elliptical galaxies. The model shows that the UV-upturn most likely results from binary interactions and that it is universal (not strongly metallicity-dependent) in ellipticals. This has major implications for understanding the evolution of the UV-upturn and of elliptical galaxies in general; contrary to previous postulates, it implies that the UV-upturn is not a sign of age, but could be a potentially powerful indicator of a recent minor burst of star-forming activity.
Contextual Information and Specific Language Models for Spoken Language Understanding ; In this paper we explain how contextual expectations are generated and used in the task-oriented spoken language understanding system Dialogos. The hard task of recognizing spontaneous speech over the telephone may greatly benefit from the use of specific language models during the recognition of callers' utterances. By 'specific language models' we mean a set of language models that are trained on contextually appropriate data, and that are used during different states of the dialogue on the basis of the information sent to the acoustic level by the dialogue management module. In this paper we describe how the specific language models are obtained on the basis of contextual information. The experimental results we report show that recognition and understanding performance are improved thanks to the use of specific language models.
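A minimal sketch of the idea of state-specific language models: a trivial unigram model is trained for each dialogue state, and the recognizer consults the model that matches the state announced by the dialogue manager. The class names, dialogue states, and training sentences below are illustrative assumptions, not the Dialogos implementation.

```python
from collections import Counter

class UnigramLM:
    """Toy unigram language model with add-one smoothing."""
    def __init__(self, sentences):
        self.counts = Counter(w for s in sentences for w in s.split())
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts) + 1
    def prob(self, word):
        return (self.counts[word] + 1) / (self.total + self.vocab)

# One model per dialogue state, trained on contextually appropriate data.
state_lms = {
    "ask_departure": UnigramLM(["i leave from torino", "from milano please"]),
    "ask_date":      UnigramLM(["tomorrow morning", "next monday"]),
}

def score(utterance, dialogue_state):
    """Score a recognizer hypothesis with the LM selected by the dialogue state."""
    lm = state_lms[dialogue_state]
    p = 1.0
    for w in utterance.split():
        p *= lm.prob(w)
    return p

print(score("from milano please", "ask_departure"))
```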
Language Modelling For Task-Oriented Domains ; This paper focuses on language modelling for task-oriented domains and presents an accurate analysis of the utterances acquired by the Dialogos spoken dialogue system. Dialogos allows access to the Italian Railways timetable by telephone over the public network. The language modelling aspects of specificity and behaviour with respect to rare events are studied. A technique for making a language model more robust, based on sentences generated by grammars, is presented. Experimental results show the benefit of the proposed technique. The increase in performance between language models created using grammars and the usual ones is greater when the amount of training material is limited. Therefore this technique can give an advantage especially for the development of language models in a new domain.
Heisenberg models and a particular isotropic model ; The Heisenberg model, a quantum mechanical analogue of the Ising model, has a large ground-state degeneracy, due to the symmetry generated by the total spin. This symmetry is also responsible for degeneracies in the rest of the spectrum. We discuss the global structure of the spectrum of Heisenberg models with arbitrary couplings, using group theoretical methods. The Hilbert space breaks up into blocks characterized by the quantum numbers of the total spin, S and M, and each block is shown to constitute the representation space of an explicitly given irreducible representation of the symmetric group S_N, consisting of permutations of the N spins in the system. In the second part of the paper we consider, as a concrete application, the model where each spin is coupled to all the other spins with equal strength. Its partition function is written as a single integral, elucidating its N-dependence. This provides a useful framework for studying finite-size effects. We give explicit results for the heat capacity, revealing interesting behavior just around the phase transition.
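For the equal-coupling model, the spectrum depends only on the total spin S, so the partition function can also be evaluated directly from the multiplicities of the total-spin sectors. The sketch below assumes H = -(J/N) S_tot^2 for N spin-1/2 sites (an illustrative normalization, not necessarily the paper's) and uses the standard multiplicity formula d(N, S) = C(N, N/2 - S) - C(N, N/2 - S - 1).

```python
from math import comb, exp

def partition_function(N, J, beta):
    """Z for H = -(J/N) * S_tot^2 on N spin-1/2 sites (illustrative normalization).

    Each total-spin sector S appears d(N, S) times and contributes (2S + 1)
    degenerate levels with energy E(S) = -(J/N) * S * (S + 1).
    """
    Z = 0.0
    # S runs over N/2, N/2 - 1, ..., down to 0 (N even) or 1/2 (N odd).
    for k in range(N // 2 + 1):
        S = N / 2 - k
        mult = comb(N, k) - (comb(N, k - 1) if k >= 1 else 0)
        energy = -(J / N) * S * (S + 1)
        Z += mult * (2 * S + 1) * exp(-beta * energy)
    return Z

print(partition_function(N=8, J=1.0, beta=1.0))   # sanity check: Z -> 2**N as beta -> 0
```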
The three-state Potts model on a triangular lattice ; We study the phase diagram of the three-state Potts model on a triangular lattice with general interactions (ferro- or antiferromagnetic) between nearest-neighbor spins. When the interactions along two lattice-vector directions are antiferromagnetic and infinitely strong, this model becomes equivalent to a six-vertex model and exhibits a first-order KDP transition from an ordered phase into a critical phase. Comparing the excitations generated by relaxing the restriction of infinite-strength interactions with those in the eight-vertex model, we analytically obtain the critical index for these excitations and demonstrate the existence of a critical phase in the case of finite antiferromagnetic interactions in two directions and ferromagnetic interactions in the other direction. When the interactions are antiferromagnetic in all three directions, Monte Carlo simulations show that a first-order line emerges from the KDP point and completely separates an ordered phase from a disordered phase. Along the special line where all three antiferromagnetic interactions have the same strength, a cell-spin analysis reveals that the symmetry of the ground states is dual to the symmetry of the n = 3 ferromagnetic cubic model, which is known to exhibit a first-order phase transition.
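A hedged sketch of the kind of simulation used for the finite-coupling case: a Metropolis update for the three-state Potts model on a triangular lattice, represented as a square lattice with one added diagonal so that each site has neighbors along three bond directions with couplings J1, J2, J3. Lattice size, temperature, and coupling values are placeholders, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q = 24, 3
J = (-1.0, -1.0, -1.0)        # couplings along the three bond directions (negative = antiferromagnetic)
beta = 1.0
spins = rng.integers(q, size=(L, L))

def site_energy(s, i, j, val):
    """Energy of site (i, j) if it held value `val` (Potts delta interaction, periodic boundaries)."""
    e = 0.0
    for (di, dj), Jk in zip(((1, 0), (0, 1), (1, 1)), J):   # triangular lattice = square lattice + one diagonal
        for sgn in (+1, -1):
            nb = s[(i + sgn * di) % L, (j + sgn * dj) % L]
            if nb == val:
                e -= Jk
    return e

def metropolis_sweep(s):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        dE = site_energy(s, i, j, new) - site_energy(s, i, j, s[i, j])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new

for _ in range(100):
    metropolis_sweep(spins)
print("fraction of sites in state 0:", np.mean(spins == 0))
```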
A Note on Dressed S-Matrices in Models with Long-Range Interactions ; The dressed scattering matrix describing the scattering of quasiparticles in various models with long-range interactions is evaluated by means of Korepin's method. For models with 1/sin^2(r) interactions the S-matrix is found to be a momentum-independent phase, which clearly demonstrates the ideal-gas character of the quasiparticles in such models. We then determine S-matrices for some models with 1/sinh^2(r) interactions and find them to be in general nontrivial. For the 1/r^2 limit of the 1/sinh^2(r) interaction we recover trivial S-matrices, thus exhibiting a crossover from interacting to noninteracting quasiparticles. The relation of the S-matrix to fractional statistics is discussed.
Exact solutions of a restricted ballistic deposition model on a one-dimensional staircase ; The surface structure of a restricted ballistic deposition (RBD) model is examined on a one-dimensional staircase with free boundary conditions. In this model, particles can be deposited only at the steps of the staircase. We set up recurrence relations for the surface fluctuation width W using a generating function method. Steady-state solutions are obtained exactly for a given system size L. In the infinite-size limit, W diverges as L^alpha with the scaling exponent alpha = 1/2. The dynamic exponent beta (W ~ t^beta) is also found to be 1/2 by solving the recurrence relations numerically. This model can be viewed as a simple variant of a model belonging to the Kardar-Parisi-Zhang (KPZ) universality class (alpha_KPZ = 1/2, beta_KPZ = 1/3). Comparing its deposition time scale with that of the single-step model, we argue that beta must be the same as beta_KPZ/(1 - beta_KPZ), which is consistent with our finding.
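The abstract compares the RBD model with the single-step model; the sketch below simulates the standard single-step model (deposition at local minima of a height profile with steps of +-1) and records the interface width W(t), from which beta can be read off a log-log fit. It illustrates the reference model mentioned above, not the staircase RBD dynamics itself, and the lattice size and times are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 512
h = np.zeros(L, dtype=int)
h[1::2] = 1                      # staggered initial condition: height steps of +-1 everywhere

def width(h):
    return np.sqrt(np.mean((h - h.mean())**2))

samples = []
for t in range(1, 200001):
    i = rng.integers(L)
    # Single-step rule: deposit (h -> h + 2) only at a local minimum, preserving +-1 steps.
    if h[(i - 1) % L] == h[i] + 1 and h[(i + 1) % L] == h[i] + 1:
        h[i] += 2
    if t % 20000 == 0:
        samples.append((t, width(h)))

for t, w in samples:
    print(t, w)                  # slope of log W vs log t approximates the growth exponent beta
```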
Origin of Intrinsic Josephson Coupling in the Cuprates and Its Relation to Order Parameter Symmetry: An Incoherent Hopping Model ; Experiments on the cuprate superconductors demonstrate that these materials may be viewed as a stack of Josephson junctions along the c-direction. In this paper, we present a model which describes this intrinsic Josephson coupling in terms of incoherent quasiparticle hopping along the c-axis arising from wave-function overlap, impurity-assisted hopping, and boson-assisted hopping. We use this model to compute the magnitude and temperature (T) dependence of the resulting Josephson critical current j_c(T) for s- and d-wave superconductors. Contrary to other approaches, d-wave pairing in this model is compatible with an intrinsic Josephson effect at all hole concentrations and leads to j_c(T) proportional to T at low T. By parameterizing our theory with c-axis resistivity data from YBCO, we estimate j_c(T) for optimally doped and underdoped members of this family. Our estimates suggest that further experiments on this compound would be of great help in elucidating the validity of our model in general and the pairing symmetry in particular. We also discuss the implications of our model for LSCO and BSCCO.
Pedestrian Approach to the Two-Channel Kondo Model ; We reformulate the two-channel Kondo model to explicitly remove the unscattered charge degrees of freedom. This procedure permits us to move the non-Fermi-liquid fixed point to infinite coupling, where we can apply a perturbative strong-coupling expansion. The fixed-point Hamiltonian involves a three-body Majorana zero mode whose scattering effects give rise to marginal self-energies. The compactified model is the N = 3 member of a family of O(N) Kondo models that can be solved by semiclassical methods in the large-N limit. For odd N, fermionic kink fluctuations about the N = infinity mean-field theory generate a fermionic N-body bound state which asymptotically decouples at low energies. For N = 3, our semiclassical methods fully recover the non-Fermi-liquid physics of the original two-channel model. Using the same methods, we find that the corresponding O(3) Kondo lattice model develops a spin gap and a gapless band of coherently propagating three-body bound states. Its strong-coupling limit offers a rather interesting realization of marginal Fermi liquid behavior.
Mutual Exclusion Statistics in Exactly Solvable Models in One and Higher Dimensions at Low Temperatures ; We study the statistical characterization of the many-body states in exactly solvable models with internal degrees of freedom. The models under consideration include the isotropic and anisotropic Heisenberg spin chains, the Hubbard chain, and a model in higher dimensions which exhibits the Mott metal-insulator transition. It is shown that the ground state of each of these systems is described by that of a generalized ideal gas of particles, called exclusons, which have mutual exclusion statistics, either between different rapidities or between different species. For the Bethe-ansatz-solvable models, the low-temperature properties are well described by the excluson description if the degeneracies due to string solutions with complex rapidities are taken into account correctly. For the Hubbard chain with strong but finite coupling, charge-spin separation is shown for the thermodynamics at low temperatures. Moreover, we present an exactly solvable model in arbitrary dimensions which, in addition to giving a perspective view of spin-charge separation, constitutes an explicit example of mutual exclusion statistics in more than two dimensions.
SO(3) nonlinear sigma model for a doped quantum helimagnet ; A field theory describing the low-energy, long-wavelength sector of an incommensurate, spiral magnetic phase is derived from a spin-fermion model that is commonly used as a microscopic model for high-temperature superconductors. After integrating out the fermions in a path-integral representation, a gradient expansion of the fermionic determinant is performed. This leads to an O(3) x O(2)-symmetric quantum nonlinear sigma model, where the doping dependence is explicitly given by generalized fermionic susceptibilities which enter into the coupling constants of the sigma model and contain the fermionic band structure that results from the spiral background. A stability condition of the field theory self-consistently determines the spiral wave vector as a function of the doping concentration. Furthermore, terms of topological nature, like the theta-vacuum term in (1+1)-dimensional nonlinear sigma models, are obtained for the plane of the spiral.
Zero-Temperature Phase Transitions of the Antiferromagnetic Ising Model of General Spin on a Triangular Lattice ; We map the ground-state ensemble of the antiferromagnetic Ising model of spin S on a triangular lattice to an interface model whose entropic fluctuations are proposed to be described by an effective Gaussian free energy, which enables us to calculate the critical exponents of various operators in terms of the stiffness constant of the interface. Monte Carlo simulations of the ground-state ensemble utilizing this interfacial representation are performed to study both the dynamical and the static properties of the model. This method yields more accurate numerical results for the critical exponents. By varying the spin magnitude in the model, we find that the model exhibits three phases, with a Kosterlitz-Thouless phase transition at 3/2 < S_KT < 2 and a locking phase transition at 5/2 < S_L <= 3. The phase diagram at finite temperatures is also discussed.
Self-organized criticality as an absorbing-state phase transition ; We explore the connection between self-organized criticality and phase transitions in models with absorbing states. Sandpile models are found to exhibit criticality only when a pair of relevant parameters, the dissipation epsilon and the driving field h, are set to their critical values. The critical values of epsilon and h are both equal to zero. The first is due to the absence of saturation (no bound on energy) in the sandpile model, while the second result is common to other absorbing-state transitions. The original definition of the sandpile model places it at the point (epsilon = 0, h = 0): it is critical by definition. We argue that power-law avalanche distributions are a general feature of models with infinitely many absorbing configurations when they are subject to slow driving at the critical point. Our assertions are supported by simulations of the sandpile at epsilon = h = 0 and at fixed energy density (no drive, periodic boundaries), and of the slowly driven pair contact process. We formulate a field theory for the sandpile model, in which the order parameter is coupled to a conserved energy density, which plays the role of an effective creation rate.
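A minimal sketch of a slowly driven sandpile in the spirit described above: grains are added one at a time (the h -> 0 driving limit), unstable sites topple until the configuration is again absorbing, and the avalanche sizes are recorded. This is the standard sandpile with open boundaries (dissipation only at the edges, playing the role of epsilon -> 0); the lattice size, threshold, and number of grains are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
L, zc = 32, 4
z = rng.integers(zc, size=(L, L))          # stable (absorbing) initial configuration
avalanche_sizes = []

def relax(z):
    """Topple unstable sites until none remain; return the number of topplings."""
    size = 0
    unstable = list(zip(*np.where(z >= zc)))
    while unstable:
        i, j = unstable.pop()
        if z[i, j] < zc:
            continue
        z[i, j] -= zc
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:  # grains falling off the edge are dissipated
                z[ni, nj] += 1
                if z[ni, nj] >= zc:
                    unstable.append((ni, nj))
        if z[i, j] >= zc:
            unstable.append((i, j))
    return size

for grain in range(20000):                  # slow driving: one grain per fully relaxed state
    i, j = rng.integers(L, size=2)
    z[i, j] += 1
    if z[i, j] >= zc:
        avalanche_sizes.append(relax(z))

print("mean avalanche size:", np.mean(avalanche_sizes))
```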
Physical Model of Nernst Element ; Generation of electric power by the Nernst effect is a new application of a semiconductor. A key point of this proposal is to find materials with a high thermomagnetic figure of merit, which are called Nernst elements. In order to find candidates for the Nernst element, a physical model describing its transport phenomena is needed. As a first model, we began with a parabolic two-band model in classical statistics. Based on this model, we selected InSb as a candidate Nernst element and measured its transport coefficients in magnetic fields up to 4 Tesla within a temperature region from 270 K to 330 K. In this region, we also calculated the transport coefficients numerically with our physical model. For InSb, the experimental data coincide with the theoretical values in strong magnetic fields.
Coarsening and persistence in a class of stochastic processes interpolating between the Ising and voter models ; We study the dynamics of a class of two-dimensional stochastic processes depending on two parameters, which may be interpreted as two different temperatures, associated respectively with interfacial and with bulk noise. Special lines in the plane of parameters correspond to the Ising model, the voter model and the majority vote model. The dynamics of this class of models may be described formally in terms of reaction-diffusion processes for a set of coalescing, annihilating, and branching random walkers. We use the freedom allowed by the space of parameters to measure, by numerical simulations, the persistence probability of a generic model in the low-temperature phase, where the system coarsens. This probability is found to decay at large times as a power law with a seemingly constant exponent theta of approximately 0.22. We also discuss the connection between persistence and the nature of the interfaces between domains.
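As an illustration of the persistence measurement, here is a sketch for the zero-temperature Glauber Ising limit of this class (one corner of the parameter plane): spins align with their local field, ties are broken at random, and the persistence probability is the fraction of spins that have never flipped up to time t. The lattice size and run length are placeholders, and the generic two-parameter dynamics of the paper is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 64
s = rng.choice((-1, 1), size=(L, L))
never_flipped = np.ones((L, L), dtype=bool)

def local_field(s, i, j):
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        hloc = local_field(s, i, j)
        new = np.sign(hloc) if hloc != 0 else rng.choice((-1, 1))   # T = 0 Glauber rule
        if new != s[i, j]:
            s[i, j] = new
            never_flipped[i, j] = False
    if (sweep + 1) % 50 == 0:
        print(sweep + 1, never_flipped.mean())   # persistence probability P(t)
```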
How to handle the inelastic collapse of a dissipative hard-sphere gas with the TC model ; The inelastic hard sphere model of granular material is simple, easily accessible to theory and simulation, and captures much of the physics of granular media. It has three drawbacks, all related to the approximation that collisions are instantaneous: (1) the number of collisions per unit time can diverge, i.e. 'inelastic collapse' can occur; (2) all interactions are binary, so multi-particle contacts cannot occur; and (3) no static limit exists. We extend the inelastic hard sphere model by defining a duration of contact tc such that dissipation is allowed only if the time between contacts is larger than tc. We name this generalized model the 'TC model' and discuss it using examples of dynamic and static systems. The contact duration used here does not change the instantaneous nature of the hard sphere contacts, but accounts for reduced dissipation during 'multi-particle contacts'. Kinetic and elastic energies are defined, as well as forces and stresses in the system. Finally, we present event-driven numerical simulations of situations far beyond the inelastic collapse, possible only with the TC model.
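A minimal sketch of the TC collision rule as described above: a binary collision dissipates energy (restitution coefficient e < 1) only if both partners have been free of contacts for longer than tc; otherwise the collision is treated as elastic. The function name, data layout, and parameter values are illustrative assumptions; a real event-driven code would embed such a rule in a collision scheduler.

```python
def tc_collision(v1, v2, t_now, last_contact, tc, e=0.9):
    """1D head-on collision of two equal-mass particles under the TC rule.

    Dissipation (restitution coefficient e < 1) is applied only if the time
    since the last contact of both particles exceeds tc; otherwise the
    collision is elastic (e = 1), mimicking reduced dissipation during
    multi-particle contacts.
    """
    t1, t2 = last_contact          # times of the two particles' previous contacts
    eff_e = e if (t_now - t1 > tc and t_now - t2 > tc) else 1.0
    v_cm = 0.5 * (v1 + v2)
    v_rel = v1 - v2
    v1_new = v_cm - 0.5 * eff_e * v_rel
    v2_new = v_cm + 0.5 * eff_e * v_rel
    return v1_new, v2_new, (t_now, t_now)

# A rapid re-collision (elapsed time < tc) dissipates nothing; a well-separated one does.
print(tc_collision(1.0, -1.0, t_now=10.0, last_contact=(9.999, 9.999), tc=0.01))
print(tc_collision(1.0, -1.0, t_now=10.0, last_contact=(9.0, 9.0), tc=0.01))
```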
Glassy transition and metastability in the four-spin Ising model ; Using Monte Carlo simulations we show that the three-dimensional Ising model with four-spin plaquette interactions has some characteristic glassy features. The model dynamically generates diverging energy barriers, which give rise to slow dynamics at low temperature. Moreover, in a certain temperature range the model possesses a metastable supercooled liquid phase, which is presumably supported by certain entropy barriers. Although extremely strong, metastability in our model is only a finite-size effect, and sufficiently large droplets of the stable phase divert the evolution of the system toward the stable phase. Thus, the glassy transition in this model is a dynamic transition, preceded by a pronounced peak in the specific heat.
Phase diagram for a class of spin-half Heisenberg models interpolating between the square-lattice, the triangular-lattice and the linear-chain limits ; We study the spin-half Heisenberg models on an anisotropic two-dimensional lattice which interpolates between the square lattice at one end, a set of decoupled spin chains at the other end, and the triangular-lattice Heisenberg model in between. By series expansions around two different dimer ground states and around various commensurate and incommensurate magnetically ordered states, we establish the phase diagram for this model of a frustrated antiferromagnet. We find a particularly rich phase diagram due to the interplay of magnetic frustration, quantum fluctuations and varying dimensionality. There is a large region of the usual two-sublattice Néel phase, a three-sublattice phase for the triangular-lattice model, a region of incommensurate magnetic order around the triangular-lattice model, and regions in parameter space where there is no magnetic order. We find that the incommensurate ordering wavevector is in general altered from its classical value by quantum fluctuations. The regime of weakly coupled chains is particularly interesting and appears to be nearly critical.
Elastic properties of a tungsten-silver composite by reconstruction and computation ; We statistically reconstruct a three-dimensional model of a tungsten-silver composite from an experimental two-dimensional image. The effective Young's modulus E of the model is computed in the temperature range 25-1060 °C using a finite element method. The results are in good agreement with experimental data. As a test case, we have also reconstructed the microstructure and computed the moduli of the overlapping-sphere model. The reconstructed and overlapping-sphere models are examples of bicontinuous non-particulate media. The computed moduli of the models are not generally in good agreement with the predictions of the self-consistent method. We have also evaluated three-point variational bounds on the Young's moduli of the models using the results of Beran, Molyneux, Milton and Phan-Thien. The measured data were close to the upper bound when the properties of the two phases were similar (1/6 <= E1/E2 <= 6).
Magnetic and quantum disordered phases in triangular-lattice Heisenberg antiferromagnets ; We study, within the Schwinger-boson approach, the ground-state structure of two Heisenberg antiferromagnets on the triangular lattice: the J1-J2 model, which includes a next-nearest-neighbor coupling J2, and the spatially anisotropic J1-J'1 model, in which the nearest-neighbor coupling takes a different value, J'1, along one of the bond directions. The motivations for the study of these systems range from general theoretical questions concerning frustrated quantum spin models to the concrete description of the insulating phase of some layered molecular crystals. For both models, the inclusion of one-loop corrections to saddle-point results leads to the prediction of nonmagnetic phases for particular values of the ratios J2/J1 and J'1/J1. In the case of the J1-J2 model we shed light on the existence of such a disordered quantum state, a question which is controversial in the literature. For the J1-J'1 model our results for the ground-state energy, the quantum renormalization of the pitch in the spiral phase, and the location of the nonmagnetic phases agree nicely with series expansion predictions.