Invariant representation learning for sequential recommendation ; Sequential recommendation involves automatically recommending the next item to users based on their historical item sequence. While most prior research employs RNN or transformer methods to glean information from the item sequence, generating probabilities for each user-item pair and recommending the top items, these approaches often overlook the challenge posed by spurious relationships. This paper specifically addresses these spurious relations. We introduce a novel sequential recommendation framework named Irl4Rec. This framework harnesses invariant learning and employs a new objective that factors in the relationship between spurious variables and adjustment variables during model training. This approach aids in identifying spurious relations. Comparative analyses reveal that our framework outperforms three typical methods, underscoring the effectiveness of our model. Moreover, an ablation study further demonstrates the critical role our model plays in detecting spurious relations.
Are we survivors of the sudden past singularity? ; In this paper, we investigate the viability of cosmological models featuring a type II singularity that occurs during the past evolution of the Universe. We construct a scenario in which the singularity arises and then constrain the model parameters using observational data from Type Ia Supernovae, Cosmic Chronometers, and Gamma Ray Bursts. We find that the resulting cosmological models based on scenarios with the past type II singularity cannot be excluded by kinematical tests using current observations.
An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading ; We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in a highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates on multiple validation periods, and propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of the out-of-sample performance on granular test periods to demonstrate the robustness of the strategies in evolving market conditions, and retrain the models periodically to address the non-stationarity of financial data. Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy.
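The mixture-distribution ensemble idea can be illustrated with a minimal sketch: several independently trained policies each map a market state to a distribution over discrete portfolio actions, and the ensemble samples from their (here equally weighted) mixture. All class and function names below are hypothetical illustrations, not code from the paper.

```python
import numpy as np

class MixturePolicy:
    """Sketch of a mixture-distribution ensemble over selected base policies."""

    def __init__(self, base_policies, weights=None):
        self.base_policies = base_policies
        n = len(base_policies)
        self.weights = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)

    def action_distribution(self, state):
        # Weighted mixture of the base policies' action distributions.
        dists = np.stack([policy(state) for policy in self.base_policies])
        return self.weights @ dists

    def act(self, state, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        probs = self.action_distribution(state)
        return rng.choice(len(probs), p=probs)

# Toy usage: three "policies" that ignore the state and return fixed distributions
# over four portfolio actions.
fixed = lambda probs: (lambda state: np.asarray(probs, dtype=float))
ensemble = MixturePolicy([fixed([0.7, 0.1, 0.1, 0.1]),
                          fixed([0.1, 0.7, 0.1, 0.1]),
                          fixed([0.25, 0.25, 0.25, 0.25])])
print(ensemble.act(state=None))
```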
Evaluating Transformer's Ability to Learn Mildly Context-Sensitive Languages ; Although Transformers perform well in NLP tasks, recent studies suggest that self-attention is theoretically limited in learning even some regular and context-free languages. These findings motivated us to think about their implications for modeling natural language, which is hypothesized to be mildly context-sensitive. We test Transformer's ability to learn a variety of mildly context-sensitive languages of varying complexities, and find that they generalize well to unseen in-distribution data, but their ability to extrapolate to longer strings is worse than that of LSTMs. Our analyses show that the learned self-attention patterns and representations modeled dependency relations and demonstrated counting behavior, which may have helped the models solve the languages.
Logistic modelling of economic dynamics ; We demonstrate the effectiveness of the logistic function to model the evolution of two economic systems. The first is the GDP and trade growth of the USA, and the second is the revenue and human resource growth of IBM. Our modelling is based on the World Bank data in the case of the USA, and on the company data in the case of IBM. The coupled dynamics of the two relevant variables in both systems (GDP and trade for the USA, and revenue and human resource for IBM) follows a power-law behaviour.
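As an illustration of the fitting step behind such logistic modelling, the sketch below fits $x(t) = K / (1 + e^{-r(t - t_0)})$ to a synthetic annual series with SciPy; the data and parameter values are made up for demonstration and are not the World Bank or IBM figures used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve: saturation level K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "GDP-like" series, for illustration only.
years = np.arange(1960, 2020, dtype=float)
truth = logistic(years, K=20.0, r=0.08, t0=1995.0)
noise = 1.0 + 0.02 * np.random.default_rng(0).standard_normal(years.size)
series = truth * noise

params, _ = curve_fit(logistic, years, series, p0=(series.max(), 0.05, years.mean()))
K_hat, r_hat, t0_hat = params
print(f"K = {K_hat:.2f}, r = {r_hat:.3f}, t0 = {t0_hat:.1f}")
```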
Introducing the sigma-Cell: Unifying GARCH, Stochastic Fluctuations and Evolving Mechanisms in RNN-based Volatility Forecasting ; This paper introduces the sigma-Cell, a novel Recurrent Neural Network (RNN) architecture for financial volatility modeling. Bridging traditional econometric approaches like GARCH with deep learning, the sigma-Cell incorporates stochastic layers and time-varying parameters to capture dynamic volatility patterns. Our model serves as a generative network, approximating the conditional distribution of latent variables. We employ a log-likelihood-based loss function and a specialized activation function to enhance performance. Experimental results demonstrate superior forecasting accuracy compared to traditional GARCH and Stochastic Volatility models, taking a further step in integrating domain knowledge with neural networks.
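For orientation, the econometric baseline that such a cell is meant to bridge with deep learning is the GARCH(1,1) recursion (standard textbook form, not a formula quoted from the paper):

\[
\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 ,
\]

where $\epsilon_{t-1}$ is the previous return innovation; an RNN cell with stochastic layers effectively replaces the fixed coefficients $(\omega, \alpha, \beta)$ with learned, time-varying mappings.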
Simplicial Lattice Study of the 2d Ising CFT ; I derive a formulation of the 2-dimensional critical Ising model on non-uniform simplicial lattices. Surprisingly, the derivation leads to a set of geometric constraints that a lattice must satisfy in order for the model to have a well-defined continuum limit. I perform Monte Carlo simulations of the critical Ising model on discretizations of several non-trivial manifolds, including a twisted torus and a 2-sphere, and I show that the simulations are in agreement with the 2d Ising CFT in the continuum limit. I discuss the inherent benefits of using non-uniform simplicial lattices to study quantum field theory and how the methods developed here can potentially be generalized for use with other theories.
Dependent censoring with simultaneous death times based on the Generalized Marshall-Olkin model ; In this paper, we considered the problem of dependent censoring models with a positive probability that the times of failure are equal. In this context, we proposed to consider the Marshall-Olkin type model and studied some properties of the associated survival copula in its application to censored data. We also introduced estimators for the marginal distributions and the joint survival probabilities under different schemes and showed their asymptotic normality under appropriate conditions. Finally, we evaluated the finite-sample performance of our approach relying on a small simulation study on synthetic data, and an application to real data.
Classical-Quantum Hybrid Models ; Hybrid classical-quantum models are computational schemes that investigate the time evolution of systems where some degrees of freedom are treated classically, while others are described quantum-mechanically. First, we present the motivation for such models, outline the requirements they must satisfy, and provide explanations for their development. Then we review various popular non-relativistic schemes and their associated limitations, with a particular emphasis on reversible dynamics.
A Survey of Hallucination in Large Foundation Models ; Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of hallucination, with a particular focus on "Large" Foundation Models (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of hallucination. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs.
Trigonometric Plot of Ising Model ; A novel cellular automaton, with update rules reversed with respect to the environment depending on the cell, is frustrated through its von Neumann and Moore neighborhoods and evolved anisotropically. Addition of fine tuning and coupling plots the susceptibility of an Ising model that has five phase transitions, both first-order and second-order, and four magnetic phases. This susceptibility model generates a trigonometric plot as an output of the cell evolution, without the use of math libraries or primitives.
Context-aware Adversarial Attack on Named Entity Recognition ; In recent years, large pre-trained language models (PLMs) have achieved remarkable performance on many natural language processing benchmarks. Despite their success, prior studies have shown that PLMs are vulnerable to attacks from adversarial examples. In this work, we focus on the named entity recognition task and study context-aware adversarial attack methods to examine the model's robustness. Specifically, we propose perturbing the most informative words for recognizing entities to create adversarial examples and investigate different candidate replacement methods to generate natural and plausible adversarial examples. Experiments and analyses show that our methods are more effective in deceiving the model into making wrong predictions than strong baselines.
Towards construction of an analog solver of the Schroedinger and Ginzburg-Landau equations based on a Long Line ; Analog electronic computers are a type of circuitry used to solve specific problems using the physical relationships between voltages and currents following classical laws of physics. One specific class of these circuits is computers based on the interactions between passive circuit elements. The models presented by G. Kron in 1945 are an example of using such passive elements to construct a solver for the problem of free quantum particles confined by a rectangular potential. Numerical validation of Kron's second model is conducted for different shapes of the particle-confining potential. The model introduced by Kron is generalized by the introduction of nonlinear resistive elements, which deforms the Schrodinger equation solution into the Ginzburg-Landau form.
On Roitman's principles $\mathsf{MH}$ and $\Delta$ ; The Model Hypothesis (abbreviated $\mathsf{MH}$) and $\Delta$ are set-theoretic axioms introduced by J. Roitman in her work on the box product problem. Answering some questions of Roitman and Williams on these two principles, we show (1) $\mathsf{MH}$ implies the existence of P-points in $\omega^*$ and is therefore not a theorem of $\mathsf{ZFC}$; (2) $\mathsf{MH}$ also fails in the side-by-side Sacks models; (3) as $\Delta$ holds in these models, this implies $\Delta$ is strictly stronger than $\mathsf{MH}$; (4) furthermore, $\Delta$ holds in a large class of forcing extensions in which it was not previously known to hold.
The $H\rightarrow b\bar{s}$ decay and its implication for the vector-like singlet fermion model ; The decay width of $H\rightarrow b\bar{s}$ is first evaluated at leading order in perturbation theory in the standard model. The result suggests that it is difficult to observe this mode because of the small width compared with other decays of the Higgs boson. Then, based on the vector-like singlet model, assuming that the top partner only mixes with the third-generation quark, we consider the contribution from the coupling of the top quark to its vector-like singlet partner. Further results show that the width of $H\rightarrow b\bar{s}$ may rise to an extent to which the LHC experiments can access.
Knowledge Sanitization of Large Language Models ; We explore a knowledge sanitization approach to mitigate the privacy concerns associated with large language models (LLMs). LLMs trained on a large corpus of Web data can memorize and potentially reveal sensitive or confidential information, raising critical security concerns. Our technique fine-tunes these models, prompting them to generate harmless responses such as "I don't know" when queried about specific information. Experimental results in a closed-book question-answering task show that our straightforward method not only minimizes particular knowledge leakage but also preserves the overall performance of the LLM. These two advantages strengthen the defense against extraction attacks and reduce the emission of harmful content such as hallucinations.
A New cryptanalysis model based on random and quantum walks ; Randomness plays a key role in the design of attacks on cryptographic systems and cyber security algorithms in general. Random walks and quantum walks are powerful tools for mastering random phenomena. In this article, I propose a probabilistic attack model of a cryptographic system. This model is based on a random or quantum walk with the space of states being a space containing the secret to be revealed. This space can be a subspace of keys, plain texts or cipher texts.
Simple Power-Law Model for generating correlated particles ; A search for the critical point of strongly interacting matter by studying power-law fluctuations within the framework of intermittency is ongoing. In particular, experimental data on proton and pion production in heavy-ion collisions are analyzed in transverse momentum space. In this regard, a simple model with power-law multi-particle correlations is introduced. The model can be used to study the sensitivity to detect power-law correlated particles in the presence of various detector effects.
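Intermittency analyses of this kind conventionally quantify power-law fluctuations via scaled factorial moments; assuming the usual definition (not quoted from the paper), with the transverse-momentum space divided into $M$ bins and $n_m$ particles in bin $m$,

\[
F_q(M) = \frac{\frac{1}{M}\sum_{m=1}^{M}\langle n_m(n_m-1)\cdots(n_m-q+1)\rangle}{\left(\frac{1}{M}\sum_{m=1}^{M}\langle n_m\rangle\right)^{q}},
\qquad
F_q(M) \propto M^{\varphi_q},
\]

and a power-law growth of $F_q$ with $M$ is the signal that power-law correlated particles would produce.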
Forecasting large collections of time series: feature-based methods ; In economics and many other forecasting domains, real-world problems are too complex for a single model that assumes a specific data generation process. The forecasting performance of different methods changes depending on the nature of the time series. When forecasting large collections of time series, two lines of approaches have been developed using time series features, namely feature-based model selection and feature-based model combination. This chapter discusses the state-of-the-art feature-based methods, with reference to open-source software implementations.
The Possibility of Tachyons as Soliton Solutions in (3+1) Dimensions via k-field Models ; Tachyons, or hypothetical faster-than-light (FTL) particles, would violate the principle of causality. Such particles may only be imagined when they have no energy and momentum and, thus, no observable interaction. In this paper, we show that classical relativistic field theory with non-standard Lagrangian densities (k-field models) does not rule out the existence of FTL particle-like solutions (solitons) with zero energy and momentum in (3+1) dimensions. However, contrary to our expectation, we show that it is possible to have k-field models that lead to energetic FTL soliton solutions in (3+1) dimensions.
Models for irreducible representations of the symplectic algebra using Dirac-type operators ; In this paper we will study both the finite- and infinite-dimensional representations of the symplectic Lie algebra $\mathfrak{sp}(2n)$ and develop a polynomial model for these representations. This means that we will associate with every irreducible representation of $\mathfrak{sp}(2n)$ a certain space of homogeneous polynomials in a matrix variable, intersected with the kernel of $\mathfrak{sp}(2n)$-invariant differential operators related to the symplectic Dirac operator. We will show that the systems of symplectic Dirac operators can be seen as generators of parafermion algebras. As an application of these new models, we construct a symplectic analogue of the Rarita-Schwinger operator using the theory of transvector algebras.
Geometric phase for nonlinear oscillators from perturbative renormalization group ; We formulate a renormalization group approach to a general nonlinear oscillator problem. The approach is based on the exact group law obeyed by solutions of the corresponding ordinary differential equation. We consider both autonomous models with time-independent parameters, as well as non-autonomous models with slowly varying parameters. We show that the renormalization group equations for the non-autonomous case can be used to determine the geometric phase acquired by the oscillator during the change of its parameters. We illustrate the obtained results by applying them to the Van der Pol and Van der Pol-Duffing models.
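For concreteness, the autonomous Van der Pol oscillator referred to above is, in its standard form with damping parameter $\mu$,

\[
\ddot{x} - \mu\,(1 - x^2)\,\dot{x} + x = 0,
\]

and the Van der Pol-Duffing variant adds a cubic restoring term $\beta x^3$; in the non-autonomous setting considered in the paper, such parameters become slowly varying functions of time.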
Equivalence of 1-loop RG flows in 4d Chern-Simons and integrable 2d sigma-models ; We argue for the matching of 1-loop divergences between 4d Chern-Simons theory with disorder defects and the corresponding integrable 2d sigma-models of non-ultralocal type. Starting from the 4d path integral, we show under general assumptions that the 1-loop divergences localise to the 2d defects. They match the 'universal' formulae developed in arXiv:2209.05502 for the 1-loop RG flows of integrable 2d sigma-models. Our argument uses an alternative path integral representation for the 1-loop effective action and the known classical equivalence between the 4d and 2d theories.
Comparative Analysis of Named Entity Recognition in the Dungeons and Dragons Domain ; Many NLP tasks, although well-resolved for general English, face challenges in specific domains like fantasy literature. This is evident in Named Entity Recognition (NER), which detects and categorizes entities in text. We analyzed 10 NER models on 7 Dungeons and Dragons (D&D) adventure books to assess domain-specific performance. Using open-source Large Language Models, we annotated named entities in these books and evaluated each model's precision. Our findings indicate that, without modifications, Flair, Trankit, and Spacy outperform others in identifying named entities in the D&D context.
Discrete Choice Multi-Armed Bandits ; This paper establishes a connection between a category of discrete choice models and the realms of online learning and multi-armed bandit algorithms. Our contributions can be summarized in two key aspects. Firstly, we furnish sublinear regret bounds for a comprehensive family of algorithms, encompassing the Exp3 algorithm as a particular case. Secondly, we introduce a novel family of adversarial multi-armed bandit algorithms, drawing inspiration from the generalized nested logit models initially introduced by \citet{wen2001}. These algorithms offer users the flexibility to fine-tune the model extensively, as they can be implemented efficiently due to their closed-form sampling distribution probabilities. To demonstrate the practical implementation of our algorithms, we present numerical experiments, focusing on the stochastic bandit case.
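Exp3, which the paper recovers as a special case, can be sketched in a few lines; the Bernoulli environment, horizon, and learning rate below are illustrative only.

```python
import numpy as np

def exp3(reward_fn, n_arms, horizon, gamma=0.1, seed=0):
    """Minimal Exp3: exponential weights with uniform exploration.

    reward_fn(arm) must return a reward in [0, 1].
    """
    rng = np.random.default_rng(seed)
    weights = np.ones(n_arms)
    total_reward = 0.0
    for _ in range(horizon):
        probs = (1.0 - gamma) * weights / weights.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=probs)
        reward = reward_fn(arm)
        total_reward += reward
        # Importance-weighted update for the played arm only.
        weights[arm] *= np.exp(gamma * reward / (probs[arm] * n_arms))
        weights /= weights.max()  # rescale to avoid overflow; probabilities unchanged
    return total_reward

# Toy Bernoulli bandit in which arm 2 has the highest mean reward.
means = [0.3, 0.5, 0.7]
arm_rng = np.random.default_rng(1)
print(exp3(lambda a: float(arm_rng.random() < means[a]), n_arms=3, horizon=5000))
```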
Standard Model Building from Intersecting D-Branes ; We provide a general overview of the current state of the art in four-dimensional three-generation model building proposals using intersecting D-brane toroidal compactifications without fluxes of IIA, IIB string theories which have only the SM at low energy. In this context, we focus on those model building directions where non-supersymmetric constructions based on the existence of the gauge group structure $SU(3)_c \times SU(2)_L \times U(1)_Y$, Pati-Salam $SU(4)_C \times SU(2)_L \times SU(2)_R$, $SU(5)$ and flipped $SU(5)$ GUTs appear at the string scale $M_s$. These model building attempts are based on four-dimensional compactifications that use orientifolds of either IIA theory with D6-branes wrapping on $T^6$, $T^6/Z_3$ and recently on $T^6/(Z_3 \times Z_3)$, or of IIB theory with D5-branes wrapping on $T^4 \times C/Z_N$. Models with D5-branes are compatible with the large extra dimension scenario and a low string scale that could be at the TeV; thus there is no gauge hierarchy problem in the Higgs sector. In the case of flipped $SU(5)$ GUTs coming from $T^6/Z_3$, the special build-up structure of the models naturally accommodates a seesaw mechanism and a new solution to the doublet-triplet splitting problem. Baryon number is a gauged symmetry and thus the proton is naturally stable only in models with D5-branes or in models with D6-branes wrapping toroidal orientifolds of type IIA. Finally, we present new RR tadpole solutions for the five- and six-stack toroidal orientifold models of type IIA which have only the Standard Model with right-handed neutrinos at low energy.
Analysis of a Class of Likelihood Based Continuous Time Stochastic Volatility Models including Ornstein-Uhlenbeck Models in Financial Economics ; In a series of recent papers Barndorff-Nielsen and Shephard introduce an attractive class of continuous-time stochastic volatility models for financial assets, where the volatility processes are functions of positive Ornstein-Uhlenbeck (OU) processes. These models are known to be substantially more flexible than Gaussian-based models. One current problem of this approach is the unavailability of a tractable exact analysis of likelihood-based stochastic volatility models for the returns of log prices of stocks. With this point in mind, the likelihood models of Barndorff-Nielsen and Shephard are viewed as members of a much larger class of models, that is, likelihoods based on n conditionally independent Normal random variables whose mean and variance are representable as linear functionals of a common unobserved Poisson random measure. The analysis of these models is facilitated by applying the methods in James (2005, 2002), in particular an Esscher-type transform of Poisson random measures, in conjunction with a special case of the Weber-Sonine formula. It is shown that the marginal likelihood may be expressed in terms of a multidimensional Fourier-cosine transform. This yields tractable forms of the likelihood and also allows a full Bayesian posterior analysis of the integrated volatility process. A general formula for the posterior density of the log price given the observed data is derived, which could potentially have applications to option pricing. We extend the models to include leverage effects in Section 5. It is shown that inference does not necessarily require simulation of random measures; rather, classical numerical integration can be used in the most general cases.
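In the Barndorff-Nielsen and Shephard construction referred to above, the squared volatility follows an OU-type equation driven by a Lévy subordinator $z$ (standard form of that model, given here only for orientation):

\[
d\sigma^2(t) = -\lambda\,\sigma^2(t)\,dt + dz(\lambda t), \qquad \lambda > 0,
\]

so that log-returns over a holding period are conditionally Gaussian given the integrated volatility $\int \sigma^2(s)\,ds$ over that period, which is what makes the Poisson-random-measure representation of the likelihood natural.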
Applying MDL to Learning Best Model Granularity ; The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling of accidental noise and too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is predicted for the learned parameter (length of sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of nodes in the hidden layer considered most likely by MDL, which again is found to coincide with the best value found experimentally.
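The two-part code referred to above selects the hypothesis (here, the model granularity) that minimizes the total description length,

\[
H_{\mathrm{MDL}} = \arg\min_{H}\;\bigl[\,L(H) + L(D \mid H)\,\bigr],
\]

where $L(H)$ is the code length of the model itself and $L(D \mid H)$ is the code length of the data when encoded with the help of the model.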
Species abundance distributions in neutral models with immigration or mutation and general lifetimes ; We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate $\lambda$. We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate $\mu$ (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate $\theta$. We are interested in the species abundance distribution, i.e., in the numbers, denoted $I_n(k)$ in the immigration model and $A_n(k)$ in the mutation model, of species represented by $k$ individuals, $k = 1, 2, \ldots, n$, when there are $n$ individuals in the total population. In the immigration model, we prove that the numbers $(I_t(k); k \ge 1)$ of species represented by $k$ individuals at time $t$ are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal $n$, this results in species abundance distributions given by Ewens' sampling formula. In particular, $I_n(k)$ converges as $n \to \infty$ to a Poisson r.v. with mean $\gamma/k$, where $\gamma = \mu/\lambda$. In the mutation model, as $n \to \infty$, we obtain the almost sure convergence of $n^{-1}A_n(k)$ to a non-random explicit constant. In the case of a critical, linear birth-death process, this constant is given by Fisher's log-series; namely, $n^{-1}A_n(k)$ converges to $\alpha^k/k$, where $\alpha = \lambda/(\lambda+\theta)$. In both models, the abundances of the most abundant species are briefly discussed.
Online Verification of Control Parameter Calculations in Communication Based Train Control System ; The Communication Based Train Control (CBTC) system is the state-of-the-art train control system. In a CBTC system, to guarantee the safety of train operation, trains communicate with each other intensively and adjust their control modes autonomously by computing critical control parameters, e.g., velocity range, according to the information they get. As the correctness of the generated control parameters is critical to the safety of the system, a method to verify these parameters is strongly desired in the area of train control systems. In this paper, we present our ideas of how to model and verify the control parameter calculations in a CBTC system efficiently. As the behavior of the system is highly nondeterministic, it is difficult to build and verify the complete behavior space model of the system online in advance. Thus, we propose to model the system according to the ongoing behavior model induced by the control parameters. As the parameters are generated online and updated very quickly, the verification result will be meaningless if it is given beyond the time bound, since by that time the model will already have changed. Thus, we propose a method to quickly verify online the existence of certain dangerous scenarios in the model. To demonstrate the feasibility of these proposed approaches, we present composed linear hybrid automata with readable shared variables as a modeling language to model the control parameter calculations and give a path-oriented reachability analysis technique for the scenario-based verification of this model. We demonstrate the model built for the CBTC system and show the performance of our technique in fast online verification. Last but not least, as the CBTC system is a typical cyber-physical system (CPS), we also give a short discussion of potential directions for CPS verification in this paper.
Automated derivation of the adjoint of high-level transient finite element programs ; In this paper we demonstrate a new technique for deriving discrete adjoint and tangent linear models of finite element models. The technique is significantly more efficient and automatic than standard algorithmic differentiation techniques. The approach relies on a high-level symbolic representation of the forward problem. In contrast to developing a model directly in Fortran or C, high-level systems allow the developer to express the variational problems to be solved in near-mathematical notation. As such, these systems have a key advantage: since the mathematical structure of the problem is preserved, they are more amenable to automated analysis and manipulation. The framework introduced here is implemented in a freely available software package named dolfin-adjoint, based on the FEniCS Project. Our approach to automated adjoint derivation relies on run-time annotation of the temporal structure of the model, and employs the FEniCS finite element form compiler to automatically generate the low-level code for the derived models. The approach requires only trivial changes to a large class of forward models, including complicated time-dependent nonlinear models. The adjoint model automatically employs optimal checkpointing schemes to mitigate storage requirements for nonlinear models, without any user management or intervention. Furthermore, both the tangent linear and adjoint models naturally work in parallel, without any need to differentiate through calls to MPI or to parse OpenMP directives. The generality, applicability and efficiency of the approach are demonstrated with examples from a wide range of scientific applications.
Comparison of Strong Gravitational Lens Model Software I. Time delay and mass calculations are sensitive to changes in redshift and are model dependent ; Analysis of strong gravitational lensing depends on software analysis of observational data. The purpose of this study was to evaluate the behavior of strong gravitational lens modeling software with changes in redshift. Four different strong gravitational lens software modeling codes were directly compared: Lenstool and glafic, two light-traces-mass codes, and GRALE and PixeLens, two non-light-traces-mass codes, in the analysis of model data as well as analysis of the giant gravitational quasar SDSS J1004+4112. A generalized model for time delay calculation shows that the calculated time delay is proportional to $D_d D_s/D_{ds}$. The percent change in time delays calculated for each system at each redshift tested was compared with the percent change in the value of $D_d D_s/D_{ds}$. A simple point mass model was tested with each code. Five models were used with a constant $z_{lens}$ and a varying $z_{source}$, and five models with a constant $z_{source}$ and a varying $z_{lens}$. The effects of changing geometry were similarly investigated for SDSS J1004+4112. In general, the changes in time delay were of a similar magnitude and direction, although some calculated time delays varied by as much as 30 percent from changes in $D_d D_s/D_{ds}$. Changes in calculated mass for the point mass model with a constant $z_{source}$ were almost identical to changes in $D_d D_s/D_{ds}$ for three of the four codes tested. These data demonstrate the effect of changes in redshift on parameters calculated by each of the codes as compared to changes in $D_d D_s/D_{ds}$. The paucity of existing direct comparison studies of strong gravitational lensing supports the need for more studies of this kind. These results show that even small changes in redshift affect the calculation of time delay and mass, and that the effect on the calculations is dependent on the particular software used.
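The proportionality noted above follows from the standard gravitational-lensing time-delay relation for a lens at redshift $z_d$ (a general result, not specific to any of the four codes):

\[
\Delta t = \frac{1+z_d}{c}\,\frac{D_d D_s}{D_{ds}}\,\Delta\phi,
\]

where $\Delta\phi$ is the difference in Fermat potential between images, so changes in redshift enter the computed delay through the distance ratio $D_d D_s / D_{ds}$.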
Post-Planck Dark Energy Constraints ; We constrain plausible dark energy models, parametrized by multiple candidate equations of state, using the recently published Cosmic Microwave Background (CMB) temperature anisotropy data from Planck together with the WMAP9 low-$\ell$ polarization data and data from low-redshift surveys. To circumvent the limitations of any particular equation of state in describing all existing dark energy models, we work with three different equations of state covering a broader class of dark energy models and, hence, provide more robust and generic constraints on the dark energy properties. We show that a clear tension exists between dark energy constraints from CMB and non-CMB observations when one allows for dark energy models having both phantom and non-phantom behavior; while CMB is more favorable to phantom models, the low-$z$ data prefer models with behavior close to a Cosmological Constant. Further, we reconstruct the equation of state of dark energy as a function of redshift using the results from combined CMB and non-CMB data and find that the Cosmological Constant lies outside the $1\sigma$ band for multiple dark energy models allowing phantom behavior. Considerable fine tuning is needed to keep models with a strict non-phantom history inside the $2\sigma$ allowed range. This result might motivate one to construct phantom models of dark energy, which is achievable in the presence of higher derivative operators as in string theory. However, disallowing phantom behavior, based only on a strong theoretical prior, leads both CMB and non-CMB datasets to agree on the nature of dark energy, with the mean equation of state being very close to the Cosmological Constant. Finally, to illustrate the impact of additional dark energy parameters on other cosmological parameters, we provide the cosmological parameter constraints for different dark energy models.
Uniqueness of gradient Gibbs measures with disorder ; We consider, in the uniformly strictly convex potential regime, two versions of random gradient models with disorder. In model A the interface feels a bulk term of random fields, while in model B the disorder enters through the potential acting on the gradients. We assume a general distribution on the disorder with uniformly bounded finite second moments. It is well known that for gradient models without disorder there are no Gibbs measures in infinite volume in dimension $d = 2$, while there are shift-invariant gradient Gibbs measures describing an infinite-volume distribution for the gradients of the field, as was shown by Funaki and Spohn. Van Enter and Kuelske proved in 2008 that adding a disorder term as in model A prohibits the existence of such gradient Gibbs measures for general interaction potentials in $d = 2$. In Cotar and Kuelske (2012) we proved the existence of shift-covariant random gradient Gibbs measures for model A when $d \geq 3$, the disorder is i.i.d. and has mean zero, and for model B when $d \geq 1$ and the disorder has stationary distribution. In the present paper, we prove existence and uniqueness of shift-covariant random gradient Gibbs measures with a given expected tilt $u \in \mathbb{R}^d$ and with the corresponding annealed measure being ergodic, for model A when $d \geq 3$ and the disordered random fields are i.i.d. and symmetrically distributed, and for model B when $d \geq 1$ and for any stationary disorder dependence structure. We also compute, for both models and for any gradient Gibbs measure constructed as in Cotar and Kuelske (2012), when the disorder is i.i.d. and its distribution satisfies a Poincaré inequality assumption, the optimal decay of covariances with respect to the averaged-over-the-disorder gradient Gibbs measure.
Space group symmetry fractionalization in a family of exactly solvable models with $\mathbb{Z}_2$ topological order ; We study square lattice space group symmetry fractionalization in a family of exactly solvable models with $\mathbb{Z}_2$ topological order in two dimensions. In particular, we have obtained a complete understanding of which distinct types of symmetry fractionalization (symmetry classes) can be realized within this class of models, which are generalizations of Kitaev's $\mathbb{Z}_2$ toric code to arbitrary lattices. This question is motivated by earlier work of A. M. Essin and one of us (M. H.), where the idea of symmetry classification was laid out, and which, for square lattice symmetry, produces 2080 symmetry classes consistent with the fusion rules of $\mathbb{Z}_2$ topological order. This approach does not produce a physical model for each symmetry class, and indeed there are reasons to believe that some symmetry classes may not be realizable in strictly two-dimensional systems, thus raising the question of which classes are in fact possible. While our understanding is limited to a restricted class of models, it is complete in the sense that for each of the 2080 possible symmetry classes, we either prove rigorously that the class cannot be realized in our family of models, or we give an explicit model realizing the class. We thus find that exactly 487 symmetry classes are realized in the family of models considered. With a more restrictive type of symmetry action, where space group operations act trivially in the internal Hilbert space of each spin degree of freedom, we find that exactly 82 symmetry classes are realized. In addition, we present a single model that realizes all $2^6 = 64$ types of symmetry fractionalization allowed for a single anyon species ($\mathbb{Z}_2$ charge excitation), as the parameters in the Hamiltonian are varied. The paper concludes with a summary and a discussion of two results pertaining to more general bosonic models.
The Atmospheric Circulation of the Hot Jupiter WASP-43b: Comparing Three-Dimensional Models to Spectrophotometric Data ; The hot Jupiter WASP-43b has now joined the ranks of transiting hot Jupiters HD 189733b and HD 209458b as an exoplanet with a large array of observational constraints on its atmospheric properties. Because WASP-43b receives a similar stellar flux as HD 209458b but has a rotation rate 4 times faster and a much higher gravity, studying WASP-43b serves as a test of the effect of rotation rate and gravity on the circulation when stellar irradiation is held approximately constant. Here we present 3D atmospheric circulation models of WASP-43b using the SPARC/MITgcm, a coupled radiation and circulation model, exploring the effects of composition, metallicity, and frictional drag. We find that the circulation regime of WASP-43b is not unlike other hot Jupiters, with equatorial superrotation that yields an eastward-shifted hotspot and large day-night temperature variations (600 K at photospheric pressures). We then compare our model results to observations from Stevenson et al., which utilize HST/WFC3 to collect spectrophotometric phase curve measurements of WASP-43b from 1.12-1.65 microns. Our results show the 5x solar model lightcurve provides a good match to the data, with a phase offset of peak flux and planet-star flux ratio that is similar to observations; however, the model nightside appears to be brighter. Nevertheless, our 5x solar model provides an excellent match to the WFC3 dayside emission spectrum. This is a major success, as the result is a natural outcome of the 3D dynamics with no model tuning, and differs significantly from 1D models that can generally only match observations when appropriately tuned. In sum, these results demonstrate that 3D circulation models can provide important insights in interpreting exoplanet atmospheric observations, even at high spectral resolution, and highlight the potential for future observations with HST, JWST and other next-generation telescopes.
Seasonal cycle of Precipitation over Major River Basins in South and Southeast Asia: A Review of the CMIP5 climate models data for present climate and future climate projections ; We review the skill of thirty coupled climate models participating in the Coupled Model Intercomparison Project 5 in terms of reproducing properties of the seasonal cycle of precipitation over the major river basins of South and Southeast Asia (Indus, Ganges, Brahmaputra and Mekong) for the historical period 1961-2000. We also present projected changes by these models by the end of the century (2061-2100) under the extreme scenario RCP8.5. First, we assess their ability to reproduce the observed timings of the monsoon onset and the rate of rapid fractional accumulation (RFA) slope, a measure of seasonality within the active monsoon period. Secondly, we apply a threshold-independent seasonality index (SI), a multiplicative measure of precipitation and the extent of its concentration relative to the uniform distribution (relative entropy, RE). We apply the SI distinctly for the monsoonal precipitation regime (MPR), the westerly precipitation regime (WPR) and the annual precipitation regime. For the present climate, neither any single model nor the multi-model mean performs best in all chosen metrics. Models show overall a modest skill in suggesting the right timing of the monsoon onset, while the RFA slope is generally underestimated. One third of the models fail to capture the monsoon signal over the Indus basin. Mostly, SI estimates for WPR are simulated higher than observed for all basins, while for MPR, it is simulated higher (lower) for the Ganges and Brahmaputra (Indus and Mekong) basins, following the pattern of overestimation (underestimation) of precipitation. However, models are biased positive (negative) for RE estimates over the Indus and Ganges (Brahmaputra and Mekong) basins, implying an extent of precipitation concentration for MPR and a number of dry days within WPR higher (lower) than observed for these basins. Under the RCP8.5 scenario, most of the models project a slightly delayed monsoon onset and a general increase in the RFA slope...
Predicate Pairing for Program Verification ; It is well-known that the verification of partial correctness properties of imperative programs can be reduced to the satisfiability problem for constrained Horn clauses (CHCs). However, state-of-the-art solvers for CHCs (CHC solvers) based on predicate abstraction are sometimes unable to verify satisfiability because they look for models that are definable in a given class A of constraints, called A-definable models. We introduce a transformation technique, called Predicate Pairing (PP), which is able, in many interesting cases, to transform a set of clauses into an equisatisfiable set whose satisfiability can be proved by finding an A-definable model, and hence can be effectively verified by CHC solvers. We prove that, under very general conditions on A, the unfold/fold transformation rules preserve the existence of an A-definable model, i.e., if the original clauses have an A-definable model, then the transformed clauses have an A-definable model. The converse does not hold in general, and we provide suitable conditions under which the transformed clauses have an A-definable model iff the original ones have an A-definable model. Then, we present the PP strategy which guides the application of the transformation rules with the objective of deriving a set of clauses whose satisfiability can be proved by looking for A-definable models. PP introduces a new predicate defined by the conjunction of two predicates together with some constraints. We show through some examples that an A-definable model may exist for the new predicate even if it does not exist for its defining atomic conjuncts. We also present some case studies showing that PP plays a crucial role in the verification of relational properties of programs (e.g., program equivalence and non-interference). Finally, we perform an experimental evaluation to assess the effectiveness of PP in increasing the power of CHC solving.
Multiwavelength torus-jet model for Sgr A* ; Context. The properties of the accretion flow surrounding the supermassive central black hole of the Galaxy, Sgr A*, will be scrutinized by the new-generation instrument GRAVITY and the Event Horizon Telescope (EHT). Developing fast, robust, and simple models of such flows is thus important and very timely. Aims. We want to model the quiescent emission of Sgr A* from radio to mid-infrared by considering a magnetized compact torus and an extended jet. Results. We find a perfect spectral fit for both face-on and edge-on views. These best fits give parameter values very close to those found by the most recent numerical simulations, which are much more complex than our model. The intrinsic radio size of Sgr A* is found to be in reasonable agreement with the centimetric observed constraints. Our best-fit infrared spectral index is in perfect agreement with the latest constraints. Our emission region at 1.3 mm, although larger than the Doeleman et al. (2008) Gaussian best-fit, does contain bright features at the 40 microarcsec scale. EHT-reconstructed images show that torus-jet-specific features persist after the reconstruction procedure, and that these features are sensitive to inclination. Conclusions. The main interest of our model is to give a simple and fast model of the quiescent state of Sgr A*, which gives extremely similar results as compared to state-of-the-art numerical simulations. Our model is easy to use and we publish all the material necessary to reproduce our spectra and images, so that anyone interested can use our results rather straightforwardly. We hope that such a public tool can be useful in the context of the recent and near-future GRAVITY and EHT results. Our model can in particular be easily used to test alternative compact object models, or alternative gravity theories. The main limitation of our model is that we do not yet treat the X-ray emission.
Recovering stellar population parameters via different population models and stellar libraries ; Three basic ingredients are required to generate a simple stellar population (SSP) library, i.e., an initial mass function (IMF), a stellar evolution model (isochrones), and an empirical/theoretical stellar spectral library. However, there are still some uncertainties in the determination and understanding of these ingredients. We perform spectral fitting to test the relative parameter offsets between these uncertainties using two different stellar population models, two different empirical stellar libraries, two different isochrones, and the Salpeter and Chabrier IMFs. Based on these setups, we select five SSP libraries generated with the Galaxev/STELIB and Vazdekis/MILES models, and apply them to the pPXF full-spectrum fitting of both MaNGA and mock spectra. We find that: (1) Compared to the Galaxev/STELIB model, spectral fitting qualities with the Vazdekis/MILES model show significant improvements for metal-rich (especially over-solar) spectra, which lead to better reduced $\chi^2$ distributions and more precisely fitted absorption lines. This might be due to the lack of metal-rich stars in the empirical STELIB library, or the code improvement of the Vazdekis model. (2) When applying the Vazdekis/MILES model for spectral fitting, the IMF variation will lead to not only a systematic offset in M/Lr, but also offsets in age and metallicity, and these offsets increase with increasing stellar population ages. However, the IMF-variation-caused metallicity offsets disappear in the case of Galaxev/STELIB-based libraries. (3) The Padova2000 model provides a better match to the MaNGA galaxy spectra at [M/H]L 1.0, while the BaSTI model matches the local galaxy spectra better at [M/H]L 1.0. Current tests suggest that spectral fitting with the Vazdekis/MILES/BaSTI combination would be a better choice for local galaxies.
Modeling non-thermal emission from the jet-launching region of M 87 with adaptive mesh refinement ; The galaxy M 87 harbors a kiloparsec-scale relativistic jet, whose origin coincides with a supermassive black hole. Observational mm-VLBI campaigns are capable of resolving the jet-launching region at the scale of the event horizon. In order to provide a context for interpreting these observations, realistic general-relativistic magnetohydrodynamical (GRMHD) models of the accretion flow are constructed. The characteristics of the observed spectral-energy distribution (SED) depend on the shape of the electrons' energy-distribution function (eDF). The dependency on the eDF is omitted in the modeling of the first Event Horizon Telescope results. In this work, we aim to model the M 87 SED from radio up to NIR-optical frequencies using a thermal-relativistic Maxwell-Juttner distribution, as well as a relativistic kappa-distribution function. The electrons are injected based on sub-grid, particle-in-cell parametrizations for sub-relativistic reconnection. A GRMHD simulation in Cartesian Kerr-Schild coordinates, using eight levels of adaptive mesh refinement (AMR), forms the basis of our model. To obtain spectra and images, the GRMHD data is post-processed with the ray-tracing code RAPTOR, which is capable of ray tracing through AMR GRMHD simulation data. We obtain radio spectra in both the thermal-jet and kappa-jet models consistent with radio observations. Additionally, the kappa-jet models also recover the NIR-optical emission. The models recover the observed source sizes and core shifts and obtain a jet power of approximately $10^{43}$ erg/s. In the kappa-jet models, both the accretion rates and jet powers are approximately two times lower than in the thermal-jet model. The frequency cutoff observed at $\nu \approx 10^{15}$ Hz is recovered when the accelerator size is $10^{6} - 10^{8}$ cm; this could potentially point to an upper limit for plasmoid sizes in the jet of M 87.
Multifidelity regression using artificial neural networks: efficient approximation of parameter-dependent output quantities ; Highly accurate numerical or physical experiments are often time-consuming or expensive to obtain. When time or budget restrictions prohibit the generation of additional data, the amount of available samples may be too limited to provide satisfactory model results. Multifidelity methods deal with such problems by incorporating information from other sources, which are ideally well-correlated with the high-fidelity data, but can be obtained at a lower cost. By leveraging correlations between different data sets, multifidelity methods often yield superior generalization when compared to models based solely on a small amount of high-fidelity data. In this work, we present the use of artificial neural networks applied to multifidelity regression problems. By elaborating a few existing approaches, we propose new neural network architectures for multifidelity regression. The introduced models are compared against a traditional multifidelity scheme, co-kriging. A collection of artificial benchmarks are presented to measure the performance of the analyzed models. The results show that cross-validation in combination with Bayesian optimization consistently leads to neural network models that outperform the co-kriging scheme. Additionally, we show an application of multifidelity regression to an engineering problem. The propagation of a pressure wave into an acoustic horn with parametrized shape and frequency is considered, and the index of reflection intensity is approximated using the multifidelity models. A finite element model and a reduced basis model are adopted as the high- and low-fidelity models, respectively. It is shown that the multifidelity neural network returns outputs that achieve a comparable accuracy to those from the expensive, full-order model, using only very few full-order evaluations combined with a larger amount of inaccurate but cheap evaluations of a reduced order model.
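One common multi-fidelity architecture related to those discussed above first fits a surrogate to the plentiful low-fidelity data and then learns a correction from (input, low-fidelity prediction) pairs to the scarce high-fidelity outputs. The sketch below implements that two-step idea with scikit-learn MLPs on a Forrester-type synthetic benchmark; it illustrates the general approach, not the specific architectures proposed in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic benchmark: an expensive high-fidelity function and a cheap, biased low-fidelity one.
f_hi = lambda x: (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)
f_lo = lambda x: 0.5 * f_hi(x) + 10.0 * (x - 0.5) - 5.0

x_lo = rng.uniform(0.0, 1.0, 200)[:, None]   # plentiful cheap samples
x_hi = rng.uniform(0.0, 1.0, 12)[:, None]    # scarce expensive samples

# Step 1: fit the low-fidelity surrogate.
nn_lo = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
nn_lo.fit(x_lo, f_lo(x_lo).ravel())

# Step 2: fit a correction network mapping (x, low-fidelity prediction) to high-fidelity values.
feats_hi = np.hstack([x_hi, nn_lo.predict(x_hi)[:, None]])
nn_corr = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
nn_corr.fit(feats_hi, f_hi(x_hi).ravel())

# Multi-fidelity prediction at new inputs.
x_new = np.linspace(0.0, 1.0, 5)[:, None]
print(nn_corr.predict(np.hstack([x_new, nn_lo.predict(x_new)[:, None]])))
```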
Data-driven Modeling of the Mechanical Behavior of Anisotropic Soft Biological Tissue ; Constitutive models that describe the mechanical behavior of soft tissues have advanced greatly over the past few decades. These expert models are generalizable and require the calibration of a number of parameters to fit experimental data. However, inherent pitfalls stemming from the restriction to a specific functional form include poor fits to the data, non-uniqueness of fit, and high sensitivity to parameters. In this study we design and train fully connected neural networks as material models to replace or augment expert models. To guarantee objectivity, the neural network takes isochoric strain invariants as inputs, and outputs the value of the strain energy and its derivatives with respect to the invariants. Convexity of the material model is enforced through the loss function. Direct prediction of the derivative functions, rather than just predicting the energy, serves two purposes: it provides flexibility during training, and it enables the calculation of the elasticity tensor through back-propagation. We showcase the ability of the neural network to learn the mechanical behavior of porcine and murine skin from biaxial test data. Crucially, we show that a multi-fidelity scheme which combines high-fidelity experimental data with low-fidelity analytical data yields the best performance. The neural network material model can then be interpreted as the best extension of an expert model: it learns the features that an expert has encoded in the analytical model while fitting the experimental data better. Finally, we implemented a general user material subroutine (UMAT) for the finite element software Abaqus and thereby make our advances available to the broader computational community. We expect that the methods and software generated in this work will broaden the use of data-driven constitutive models in biomedical applications.
Confronting 2D delayed-detonation models with light curves and spectra of Type Ia supernovae ; We compare models for Type Ia supernova (SN Ia) light curves and spectra with an extensive set of observations. The models come from a recent survey of 44 two-dimensional delayed-detonation models computed by Kasen, Roepke & Woosley (2009), each viewed from multiple directions. The data include optical light curves of 251 SNe Ia and 2231 low-dispersion spectra from the Center for Astrophysics, plus data from the literature. The analysis uses standard techniques employed by observers, including MLCS2k2, SALT2, and SNooPy for light-curve analysis, and the Supernova Identification (SNID) code of Blondin & Tonry for spectroscopic comparisons, to assess how well the models match the data. We show that the models that match observed spectra best lie systematically on the observed width-luminosity relation. Conversely, we reject six models with highly asymmetric ignition conditions and a large amount (1 Msun) of synthesized 56Ni that yield poor matches to observed SN Ia spectra. More subtle features of the comparison include the general difficulty of the models to match the U-band flux at early times, caused by hot ionized ejecta that affect the subsequent redistribution of flux at longer wavelengths. We examine ways in which the asymptotic kinetic energy of the explosion affects both the predicted velocity and velocity gradient in the Si II and Ca II lines. Models with an asymmetric distribution of 56Ni are found to result in a larger variation of photometric and spectroscopic properties with viewing angle, regardless of the initial ignition setup. We discuss more generally whether highly anisotropic ignition conditions are ruled out by observations, and how detailed comparisons between models and observations involving both light curves and spectra can lead to a better understanding of SN Ia explosion mechanisms.
Nonlinear spherical perturbations in Quintessence Models of Dark Energy ; Observations have confirmed the accelerated expansion of the universe. The accelerated expansion can be modelled by invoking a cosmological constant or a dynamical model of dark energy. A key difference between these models is that the equation of state parameter w for dark energy differs from -1 in dynamical dark energy (DDE) models. Further, the equation of state parameter is not constant for a general DDE model. Such differences can be probed using the variation of the scale factor with time by measuring distances. Another significant difference between the cosmological constant and DDE models is that the latter must cluster. Linear perturbation analysis indicates that perturbations in quintessence models of dark energy do not grow to have a significant amplitude at small length scales. In this paper we study the response of quintessence dark energy to nonlinear perturbations in dark matter. We use a fully relativistic model for spherically symmetric perturbations. In this study we focus on thawing models. We find that in response to nonlinear perturbations in dark matter, dark energy perturbations grow at a faster rate than expected in linear perturbation theory. We find that the dark energy perturbation remains localised and does not diffuse out to larger scales. The dominant drivers of the evolution of dark energy perturbations are the local Hubble flow and a suppression of gradients of the scalar field. We also find that the equation of state parameter w changes in response to perturbations in dark matter such that it also becomes a function of position. The variation of w in space is correlated with the density contrast for matter. Variation of w and perturbations in dark energy are more pronounced in response to large-scale perturbations in matter, while the dependence on the amplitude of matter perturbations is much weaker.
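For a canonical quintessence field $\phi$ with potential $V(\phi)$, the equation of state parameter that becomes position dependent in this analysis is given by the standard expression

\[
w = \frac{\tfrac{1}{2}\dot{\phi}^{2} - V(\phi)}{\tfrac{1}{2}\dot{\phi}^{2} + V(\phi)},
\]

which stays close to $-1$ while the field rolls slowly and moves away from $-1$ as the kinetic term grows, the behaviour characteristic of thawing models.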
Isotropic non-Gaussian $g_{\mathrm{NL}}$-like toy models that reproduce the cosmic microwave background anomalies ; Based on recent observations of the cosmic microwave background (CMB), claims of statistical anomalies in the properties of the CMB fluctuations have been made. Although the statistical significance of the anomalies remains only at the $\sim 2$-$3\sigma$ significance level, the fact that there are many different anomalies, several of which support a possible deviation from statistical isotropy, has motivated a search for models that provide a common mechanism to generate them. The goal of this paper is to investigate whether these anomalies could originate from non-Gaussian cosmological models, and to determine what properties these models should have. We present a simple isotropic, non-Gaussian class of toy models that can reproduce six of the most extensively studied anomalies. We compare the presence of anomalies found in simulated maps generated from the toy models and from a standard model with Gaussian fluctuations. We show that the following anomalies, as found in the Planck data, commonly occur in the toy model maps: (1) large-scale hemispherical asymmetry (large-scale dipolar modulation), (2) small-scale hemispherical asymmetry (alignment of the spatial distribution of CMB power over all scales $\ell = 2$-$1500$), (3) a strongly non-Gaussian hot or cold spot, (4) a low power spectrum amplitude for $\ell < 30$, including specifically (5) a low quadrupole and an unusual alignment between the quadrupole and the octopole, and (6) parity asymmetry of the lowest multipoles. We note that this class of toy model resembles models of primordial non-Gaussianity characterised by strongly scale-dependent $g_{\mathrm{NL}}$-like trispectra.
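As a point of reference, local $g_{\mathrm{NL}}$-type non-Gaussianity, which the toy models are said to resemble, is conventionally parametrized as (the standard local expansion, not the paper's toy model itself)

\[
\Phi(\mathbf{x}) = \varphi(\mathbf{x}) + g_{\mathrm{NL}}\,\varphi^{3}(\mathbf{x}),
\]

with $\varphi$ a Gaussian field; the cubic term generates a trispectrum while leaving the model statistically isotropic.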
Effect of non-equilibrium ionization on derived physical conditions of the high-$z$ intergalactic medium ; Non-equilibrium ionization effects are important in cosmological hydrodynamical simulations but are computationally expensive. We study the effect of non-equilibrium ionization evolution and UV ionizing background (UVB) generated with different quasar spectral energy distributions (SEDs) on the derived physical conditions of the intergalactic medium (IGM) at $2 \leq z \leq 6$ using our post-processing tool 'Code for Ionization and Temperature Evolution' (CITE). CITE produces results that match self-consistent simulations well, while being more efficient. The He II reionization progresses more rapidly in the non-equilibrium model as compared to equilibrium models. The redshift of He II reionization strongly depends on the quasar SED and occurs earlier for UVB models with flatter quasar SEDs. During this epoch the normalization of the temperature-density relation, $T_0(z)$, has a maximum while the slope, $\gamma(z)$, has a minimum, but these occur at different redshifts. $T_0$ is higher in non-equilibrium models using UVBs obtained with flatter quasar SEDs. While our models produce the observed median He II effective optical depth evolution and its scatter for equilibrium and non-equilibrium considerations, to explain the observed cumulative distributions we may need to consider a fluctuating UVB. For a given UVB model, the redshift dependence of the H I photoionization rate derived from the observed H I effective optical depth $\tau_{\rm eff,HI}$ for the equilibrium model is different from that for the non-equilibrium model. This may lead to different requirements on the evolution of ionizing emissivities of sources. We show that, in the absence of strong differential pressure smoothing effects, it is possible to recover the $T_0$ and $\gamma$ realised in the non-equilibrium model from equilibrium models generated by rescaling photoheating rates while producing the same $\tau_{\rm eff,HI}$.
Toward a robust inference method for the galaxy bispectrum: likelihood function and model selection ; The forthcoming generation of galaxy redshift surveys will sample the large-scale structure of the Universe over unprecedented volumes with high-density tracers. This advancement will make robust measurements of three-point clustering statistics possible. In preparation for this improvement, we investigate how several methodological choices can influence inferences based on the bispectrum about galaxy bias and shot noise. We first measure the real-space bispectrum of dark-matter haloes extracted from 298 N-body simulations covering a volume of approximately $1000\,h^{-3}\,\mathrm{Gpc}^3$. We then fit a series of theoretical models based on tree-level perturbation theory to the numerical data. To achieve this, we estimate the covariance matrix of the measurement errors by using 10,000 mock catalogues generated with the Pinocchio code. We study how the model constraints are influenced by the binning strategy for the bispectrum configurations and by the form of the likelihood function. We also use Bayesian model-selection techniques to single out the optimal theoretical description of our data. We find that a three-parameter bias model combined with Poissonian shot noise is necessary to model the halo bispectrum up to scales of $k_{\mathrm{max}} \lesssim 0.08\,h\,\mathrm{Mpc}^{-1}$, although fitting formulae that relate the bias parameters can be helpful to reduce the freedom of the model without compromising accuracy. Our data clearly disfavour local Eulerian and local Lagrangian bias models and do not require corrections to Poissonian shot noise. We anticipate that model-selection diagnostics will be particularly useful to extend the analysis to smaller scales as, in this case, the number of model parameters will grow significantly.
Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models ; We study identity testing for restricted Boltzmann machines (RBMs), and more generally for undirected graphical models. Given sample access to the Gibbs distribution corresponding to an unknown or hidden model $M^*$, and given an explicit model $M$, can we distinguish whether $M = M^*$ or the two models are statistically far apart? Daskalakis et al. (2018) presented a polynomial-time algorithm for identity testing for the ferromagnetic (attractive) Ising model. In contrast, for the antiferromagnetic (repulsive) Ising model, Bezáková et al. (2019) proved that unless $\mathsf{RP} = \mathsf{NP}$ there is no identity testing algorithm when $\beta d = \omega(\log n)$, where $d$ is the maximum degree of the visible graph and $\beta$ is the largest edge weight (in absolute value). We prove analogous hardness results for RBMs (i.e., mixed Ising models on bipartite graphs), even when there are no latent variables or an external field. Specifically, we show that if $\mathsf{RP} \neq \mathsf{NP}$, then when $\beta d = \omega(\log n)$ there is no polynomial-time algorithm for identity testing for RBMs; when $\beta d = O(\log n)$ there is an efficient identity testing algorithm that utilizes the structure learning algorithm of Klivans and Meka (2017). In addition, we prove similar lower bounds for purely ferromagnetic RBMs with inconsistent external fields, and for the ferromagnetic Potts model. Previous hardness results for identity testing of Bezáková et al. (2019) utilized the hardness of finding maximum cuts, which corresponds to the ground states of the antiferromagnetic Ising model. Since RBMs are defined on bipartite graphs, such an approach is not feasible. We instead introduce a general methodology to reduce from the corresponding approximate counting problem and utilize the phase transition that is exhibited by RBMs and the mean-field Potts model.
Real-World Textured Things: a Repository of Textured Models Generated with Modern Photo-Reconstruction Tools ; We are witnessing a proliferation of textured 3D models captured from the real world with automatic photo-reconstruction tools. Digital 3D models of this class come with a unique set of characteristics and defects (especially concerning their parametrization) setting them starkly apart from 3D models originating from other, more traditional, sources. We study this class of 3D models by collecting a significant number of representatives and quantitatively evaluating their quality according to several metrics. These include a new invariant metric we design to assess the fragmentation of the UV map, one of the main weaknesses hindering the usability of these models. Our results back the widely shared notion that such models are not fit for direct use in downstream applications such as videogames, and require challenging processing steps. Regrettably, existing automatic geometry processing tools are not always up to the task: for example, we verify that available tools for UV optimization often fail due to mesh inconsistencies, geometric and topological noise, excessive resolution, or other factors; moreover, even when an output is produced, it is rarely a significant improvement over the input according to the aforementioned measures. Therefore, we argue that further advancements are required, specifically targeted at this class of models. Towards this goal, we share the models we collected in the form of a new public repository, Real-World Textured Things (RWTT), a benchmark to systematically field-test and compare algorithms. RWTT consists of 568 carefully selected textured 3D models representative of all the main modern off-the-shelf photo-reconstruction tools. The repository is available at http://texturedmesh.isti.cnr.it and is browsable by metadata collected during experiments, and comes with a tool, TexMetro, providing the same set of measures for generic UV-mapped datasets.
Learning Various Length Dependence by Dual Recurrent Neural Networks ; Recurrent neural networks (RNNs) are widely used as a memory model for sequence-related problems. Many variants of RNN have been proposed to solve the gradient problems of training RNNs and to process long sequences. Although some classical models have been proposed, capturing long-term dependence while responding to short-term changes remains a challenge. To address this problem, we propose a new model named Dual Recurrent Neural Networks (DuRNN). The DuRNN consists of two parts that learn the short-term dependence and progressively learn the long-term dependence. The first part is a recurrent neural network with constrained full recurrent connections to deal with the short-term dependence in the sequence and to generate short-term memory. The other part is a recurrent neural network with independent recurrent connections which helps to learn the long-term dependence and to generate long-term memory. A selection mechanism is added between the two parts to help the needed long-term information transfer to the independent neurons. Multiple modules can be stacked to form a multilayer model for better performance. Our contributions are: (1) a new recurrent model developed based on the divide-and-conquer strategy to learn long-term and short-term dependence separately, and (2) a selection mechanism to enhance the separating and learning of different temporal scales of dependence. Both theoretical analysis and extensive experiments are conducted to validate the performance of our model, and we also conduct simple visualization experiments and ablation analyses for model interpretability. Experimental results indicate that the proposed DuRNN model can handle not only very long sequences (over 5000 time steps), but also short sequences very well. Compared with many state-of-the-art RNN models, our model has demonstrated efficient and better performance.
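A minimal sketch of the dual-recurrence idea described above, assuming a PyTorch setting: a conventional RNN handles short-term dependence, an independently recurrent part (element-wise recurrent weights) accumulates long-term memory, and a sigmoid gate plays the role of the selection mechanism. All class and variable names are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DuRNNBlock(nn.Module):
    """Illustrative dual-recurrence block: a conventional RNN models short-term
    dependence, an independently recurrent part (element-wise recurrent weights)
    accumulates long-term memory, and a sigmoid gate selects what is passed on."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.short_term = nn.RNN(input_size, hidden_size, batch_first=True)
        self.select = nn.Linear(hidden_size, hidden_size)            # selection mechanism
        self.w_long = nn.Parameter(torch.randn(hidden_size) * 0.1)   # independent recurrent weights
        self.u_long = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_size)
        s, _ = self.short_term(x)                     # short-term memory per step
        gated = torch.sigmoid(self.select(s)) * s     # keep only what the long-term part needs
        h = torch.zeros(x.size(0), self.w_long.numel(), device=x.device)
        outputs = []
        for t in range(x.size(1)):
            # each long-term neuron only sees its own previous state (IndRNN-style)
            h = torch.relu(self.u_long(gated[:, t]) + self.w_long * h)
            outputs.append(h)
        return torch.stack(outputs, dim=1)            # long-term memory sequence

model = DuRNNBlock(input_size=8, hidden_size=32)
y = model(torch.randn(4, 100, 8))                     # (4, 100, 32)
```

Blocks of this kind could be stacked to form the multilayer variant mentioned in the abstract.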
DeBERTa: Decoding-enhanced BERT with Disentangled Attention ; Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and on RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8).
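The disentangled attention term can be illustrated with a single-head sketch that sums content-to-content, content-to-position and position-to-content scores computed from separate content and relative-position projections, roughly following the description above. The clamping window k, the dimensions and the gather-based indexing are assumptions of this sketch, not the released DeBERTa code.

```python
import torch

def disentangled_scores(Hc, Wq_c, Wk_c, P, Wq_r, Wk_r, k=4):
    """Single-head sketch of disentangled attention scores.
    Hc: (L, d) content states; P: (2k, d) relative-position embeddings.
    Returns (L, L) unnormalised scores = content-to-content +
    content-to-position + position-to-content."""
    L, d = Hc.shape
    Qc, Kc = Hc @ Wq_c, Hc @ Wk_c                 # content projections
    Qr, Kr = P @ Wq_r, P @ Wk_r                   # relative-position projections
    idx = torch.arange(L)
    delta = torch.clamp(idx[None, :] - idx[:, None] + k, 0, 2 * k - 1)  # bucketed i->j distance
    c2c = Qc @ Kc.T                               # content-to-content
    c2p = torch.gather(Qc @ Kr.T, 1, delta)       # content-to-position
    p2c = torch.gather(Kc @ Qr.T, 1, delta).T     # position-to-content
    return (c2c + c2p + p2c) / (3 * d) ** 0.5

d, L, k = 16, 10, 4
Hc = torch.randn(L, d)
P = torch.randn(2 * k, d)
W = [torch.randn(d, d) / d ** 0.5 for _ in range(4)]
scores = disentangled_scores(Hc, W[0], W[1], P, W[2], W[3], k=k)
attn = torch.softmax(scores, dim=-1)              # (L, L) attention weights
```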
The New Generation Planetary Population Synthesis (NGPPS). I. Bern global model of planet formation and evolution, model tests, and emerging planetary systems ; Aims. Comparing theoretical models with observations allows one to make a key step forward towards an understanding of planetary systems. It however requires a model able to (i) predict all the necessary observable quantities (not only masses and orbits, but also radii, luminosities, magnitudes, or evaporation rates) and (ii) address the large range in relevant planetary masses (from Mars mass to super-Jupiters) and distances (from stellar-grazing to wide orbits). Methods. We have developed a combined global end-to-end planetary formation and evolution model, the Generation III Bern model, based on the core accretion paradigm. This model solves as directly as possible the underlying differential equations for the structure and evolution of the gas disc, the dynamical state of the planetesimals, the internal structure of the planets yielding their planetesimal and gas accretion rates, disc-driven orbital migration, and the gravitational interaction of concurrently forming planets via a full N-body calculation. Importantly, the model also follows the long-term evolution of the planets on Gigayear timescales after formation, including the effects of cooling and contraction, atmospheric escape, bloating, and stellar tides. Results. To test the model, we compared it with classical scenarios of Solar System formation. For the terrestrial planets, we find that we obtain a giant impact phase provided enough embryos (about 100) are initially emplaced in the disc. For the giant planets, we find that Jupiter-mass planets must accrete their core shortly before the dispersal of the gas disc to prevent strong inward migration that would bring them to the inner edge of the disc. Conclusions. The model can form planetary systems with a wide range of properties. We find that systems with only terrestrial planets are often well-ordered, while giant-planet bearing systems show no such similarity.
Anonymizing Machine Learning Models ; There is a known tension between the need to analyze personal data to drive business and privacy concerns. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Protection Act (CCPA), set out strict restrictions and obligations on the collection and processing of personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from the obligations set out in these regulations. It is therefore desirable to be able to create models that are anonymized, thus also exempting them from those obligations, in addition to providing better protection against attacks. Learning on anonymized data typically results in a significant degradation in accuracy. In this work, we propose a method that is able to achieve better model accuracy by using the knowledge encoded within the trained model, and guiding our anonymization process to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on generic information loss measures, our method outperforms state-of-the-art k-anonymity methods in terms of the achieved utility, in particular with high values of k and large numbers of quasi-identifiers. We also demonstrate that our approach has a similar, and sometimes even better, ability to prevent membership inference attacks as approaches based on differential privacy, while averting some of their drawbacks such as complexity, performance overhead and model-specific implementations. This makes model-guided anonymization a legitimate substitute for such methods and a practical approach to creating privacy-preserving models.
Exactly solvable models for (2+1)D topological phases derived from crossed modules of semisimple Hopf algebras ; We define an exactly solvable model for (2+1)D topological phases of matter on a triangulated surface derived from a crossed module of semisimple finite-dimensional Hopf algebras, the 'Hopf-algebraic higher Kitaev model'. This model generalizes both the Kitaev quantum double model for a semisimple Hopf algebra and the full higher Kitaev model derived from a 2-group, and can hence be interpreted as a Hopf-algebraic discrete higher gauge theory. We construct a family of crossed modules of semisimple Hopf algebras, $\big(\mathscr{F}(\mathbb{C}X) \otimes \mathbb{C}E \xrightarrow{\partial} \mathscr{F}(\mathbb{C}Y) \rtimes \mathbb{C}G, \triangleright\big)$, that depends on four finite groups, $E$, $G$, $X$ and $Y$. We calculate the ground-state spaces of the resulting model on a triangulated surface when $G = E = 1$ and when $Y = 1$, prove that those ground-state spaces are canonically independent of the triangulations, and so depend only on the underlying surface; and moreover we find a (2+1)D TQFT whose state spaces on surfaces give the ground-state spaces. These TQFTs are particular cases of Quinn's finite total homotopy TQFT, and hence the state spaces assigned to surfaces are free vector spaces on sets of homotopy classes of maps from a surface to homotopy finite spaces, in this case obtained as classifying spaces of finite groupoids and finite crossed modules of groupoids. We leave it as an open problem whether the ground-state space of the Hopf-algebraic higher Kitaev model on a triangulated surface is independent of the triangulation for general crossed modules of semisimple Hopf algebras, whether a TQFT always exists whose state space on a surface gives the ground-state space of the model, and whether the ground-state space of the model obtained from $(E, G, X, Y)$ can always be given a homotopical explanation.
RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things ; Though deep neural network models exhibit outstanding performance for various applications, their large model size and extensive floating-point operations render deployment on mobile computing platforms a major challenge, and, in particular, on Internet of Things (IoT) devices. One appealing solution is model quantization, which reduces the model size and uses integer operations commonly supported by microcontrollers. To this end, a 1-bit quantized DNN model, or deep binary neural network (BNN), maximizes the memory efficiency, where each parameter in a BNN model has only 1 bit. In this paper, we propose a reconfigurable BNN (RBNN) to further amplify the memory efficiency for resource-constrained IoT devices. Generally, the RBNN can be reconfigured on demand to achieve any one of $M$ ($M > 1$) distinct tasks with the same parameter set, thus only a single task determines the memory requirements. In other words, the memory utilization is improved by a factor of $M$. Our extensive experiments corroborate that up to seven commonly used tasks can coexist (the value of $M$ can be larger). These tasks with a varying number of classes have no or negligible accuracy drop-off on three binarized popular DNN architectures including VGG, ResNet, and ReActNet. The tasks span across different domains, e.g., the computer vision and audio domains validated herein, with the prerequisite that the model architecture can serve those cross-domain tasks. To protect the intellectual property of an RBNN model, the reconfiguration can be controlled by both a user key and a device-unique root key generated by the intrinsic hardware fingerprint. By doing so, an RBNN model can only be used per paid user per authorized device, thus benefiting both the user and the model provider.
A review of matrix SIR Arino epidemic models ; Many of the models used nowadays in mathematical epidemiology, in particular in COVID-19 research, belong to a certain subclass of compartmental models whose classes may be divided into three groups, $(x, y, z)$, which we will call respectively susceptible/entrance, diseased, and output (in the classic SIR case, there is only one class of each type). Roughly, the ODE dynamics of these models contain only linear terms, with the exception of products between $x$ and $y$ terms. It has long been noticed that the basic reproduction number $R$ has a very simple formula (3.3) in terms of the matrices which define the model, and an explicit first integral formula (3.8) is also available. These results can be traced back at least to (ABvdD07) and (Fen07), respectively, and may be viewed as the basic laws of SIR-type epidemics; however many papers continue to re-prove them in particular instances by the next-generation matrix method or by direct computations, which are unnecessary. This motivated us to redraw attention to these basic laws and provide a self-contained reference of related formulas for $(x, y, z)$ models. We propose to rebaptize the class to which they apply as matrix SIR epidemic models, abbreviated as SYR, to emphasize the similarity to the classic SIR case. For the case of one susceptible class, we propose to use the name SIR-PH, due to a simple probabilistic interpretation as SIR models where the exponential infection time has been replaced by a PH-type (phase-type) distribution. We note that to each SIR-PH model, one may associate a scalar quantity $Y(t)$ which satisfies classic SIR relations, see (3.8). In the case of several susceptible classes, this generalizes to (5.10); in a future paper, we will show that (3.8) and (5.10) may be used to obtain approximate control policies which compare well with the optimal control of the original model.
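The matrix formulas (3.3) and (3.8) referred to above are not reproduced in the abstract; as a reminder of the classic scalar case they generalize, the standard SIR system gives the familiar reproduction number and first integral below (illustrative only, not the paper's matrix expressions).

```latex
% Classic scalar SIR case that formulas (3.3) and (3.8) generalize:
\begin{gather}
  S' = -\beta S I, \qquad I' = \beta S I - \gamma I, \qquad R' = \gamma I, \\
  \mathcal{R}_0 = \frac{\beta}{\gamma}\, S(0), \qquad
  S(t) + I(t) - \frac{\gamma}{\beta}\,\ln S(t) = \mathrm{const}.
\end{gather}
```

The first integral follows by dividing the $I$ equation by the $S$ equation, $dI/dS = -1 + \gamma/(\beta S)$, and integrating.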
Primordial Black Hole Merger Rate in Self-Interacting Dark Matter Halo Models ; We study the merger rate of primordial black holes (PBHs) in self-interacting dark matter (SIDM) halo models. To explore a numerical description for the density profile of SIDM halo models, we use the result of a previously performed simulation for SIDM halo models with $\sigma/m = 10\,\mathrm{cm^2\,g^{-1}}$. We also propose a concentration-mass-time relation that can explain the evolution of the halo density profile related to SIDM models. Furthermore, we investigate the encounter condition of PBHs that may have been randomly distributed in the medium of dark matter halos. Under these assumptions, we calculate the merger rate of PBHs within each halo considering SIDM halo models and compare the results with that obtained for cold dark matter (CDM) halo models. To do this, we employ the definition of the time after halo virialization as a function of halo mass. We indicate that SIDM halo models with a PBH fraction $f_{\rm PBH}$ at the level of 0.32 can generate sufficient PBH mergers, in such a way that those exceed the ones resulting from CDM halo models. By considering the spherical-collapse halo mass function, we obtain similar results for the cumulative merger rate of PBHs. Moreover, we calculate the redshift evolution of the PBH total merger rate. To determine a constraint on the PBH abundance, we study the merger rate of PBHs in terms of their fraction and masses and compare those with the black hole merger rate estimated by the Advanced LIGO (aLIGO)-Advanced Virgo (aVirgo) detectors during the third observing run. The results demonstrate that within the context of SIDM halo models, the merger rate of $10\,M_\odot$-$10\,M_\odot$ events can potentially fall within the aLIGO-aVirgo window. We also estimate a relation between the fraction of PBHs and their masses, which is well consistent with our findings.
Microbiome compositional analysis with logistic-tree normal models ; Modern microbiome compositional data are often high-dimensional and exhibit complex dependency among the microbial taxa. However, existing statistical models for such data either do not adequately account for the dependency among the microbial taxa or lack computational scalability with respect to the number of taxa. This presents challenges in important applications such as association analysis between microbiome compositions and disease risk, in which valid statistical analysis requires appropriately incorporating the variance components or random effects in the microbiome composition. We introduce a generative model, called the logistic-tree normal (LTN) model, that addresses this need. LTN marries two popular classes of models, namely the log-ratio normal (LN) and Dirichlet-tree (DT) models, and inherits the key benefits of each. LTN incorporates the tree-based binomial decomposition as the DT does, but it jointly models the corresponding binomial probabilities using a multivariate logistic-normal distribution as in LN models. It therefore allows rich covariance structures as LN does, along with computational efficiency realized through a Polya-Gamma augmentation on the binomial models associated with the tree splits. Accordingly, Bayesian inference on LTN can readily proceed by Gibbs sampling. LTN also allows common techniques for effective inference on high-dimensional data to be readily incorporated. We construct a general mixed-effects model using LTN to characterize compositional random effects, which allows flexible taxa covariance. We demonstrate its use in testing association between microbiome composition and disease risk as well as in estimating the covariance among taxa. We carry out an extensive case study using this LTN-enriched compositional mixed-effects model to analyze a longitudinal dataset from the T1D cohort of the DIABIMMUNE project.
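A schematic of the tree-based binomial decomposition with logistic-normal split probabilities, written out as I read the abstract; the notation is illustrative rather than taken from the paper.

```latex
% Schematic of the logistic-tree normal construction (notation illustrative).
% For an interior node A of the taxonomic tree with children (A_l, A_r):
\begin{gather}
  y(A_l) \mid y(A), \theta(A) \sim \mathrm{Binomial}\!\big(y(A), \theta(A)\big), \qquad
  \psi(A) = \operatorname{logit} \theta(A), \\
  \boldsymbol{\psi} = \big(\psi(A)\big)_{A\ \text{interior}}
    \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma}),
\end{gather}
% so covariance across splits enters through Sigma, and Polya-Gamma augmentation
% of each binomial term restores conditional conjugacy for Gibbs sampling.
```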
Cosmographic approach to Running Vacuum dark energy models: new constraints using BAOs and Hubble diagrams at higher redshifts ; In this work we study different types of dark energy (DE) models in the framework of the cosmographic approach, with emphasis on the Running Vacuum models (RVMs). We assess their viability using different information criteria and compare them with the so-called Ghost DE models (GDEs) as well as with the concordance $\Lambda$CDM model. We use the Hubble diagrams for Pantheon SNIa, quasars (QSOs), gamma-ray bursts (GRBs), as well as the data on baryonic acoustic oscillations (BAOs), in four different combinations. Upon minimizing the $\chi^2$ function of the distance modulus in the context of the Markov Chain Monte Carlo (MCMC) method, we put constraints on the current values of the standard cosmographic parameters in a model-independent way. It turns out that, in the absence of BAO data, the various DE models generally exhibit cosmographic tensions with the observations at the highest redshifts, namely with the QSO and GRB data. However, if we include the robust observations from BAOs in our cosmographic sample, the $\Lambda$CDM and RVMs are clearly favored against the GDEs. Finally, judging from the perspective of the deviance information criterion (DIC), which enables us to compare models making use of the Markov chains of the MCMC method, we conclude that the RVMs are the preferred kind of DE models. We find it remarkable that these models, which had been previously shown to be capable of alleviating the $\sigma_8$ and $H_0$ tensions, appear now also as the most successful ones at the level of the cosmographic analysis.
Quantitative Predictions for $f(R)$ Gravity Primordial Gravitational Waves ; In this work we shall develop a quantitative approach for extracting predictions on the primordial gravitational wave energy spectrum for $f(R)$ gravity. We shall consider two distinct models which yield different phenomenology, one pure $f(R)$ gravity model and one Chern-Simons corrected potential-less k-essence $f(R)$ gravity model, in the presence of radiation and non-relativistic perfect matter fluids. The two $f(R)$ gravity models were carefully chosen in order for them to describe in a unified way inflation and the dark energy era, in both cases viable and compatible with the latest Planck data. Also, both models mimic the $\Lambda$ Cold Dark Matter model, specifically the pure $f(R)$ model only at late times, but the Chern-Simons k-essence model during the whole evolution of the model up to the radiation domination era. In addition, they guarantee a smooth transition from the inflationary era to the radiation and matter domination eras and subsequently to the dark energy era. Using a WKB approach introduced in the relevant literature by Nishizawa, we derive formulas depending on the redshift that yield the modified gravity effect, quantified by a multiplicative factor, a "damping", in front of the General Relativistic waveform. In order to calculate the effect of the modified gravity, which is the "damping" factor, we solve numerically the Friedmann equations using appropriate initial conditions and by introducing specific statefinder quantities. As we show, the pure $f(R)$ gravity gravitational wave energy spectrum is slightly enhanced, but it remains well below the sensitivity curves of future gravitational wave experiments. In contrast, the Chern-Simons k-essence $f(R)$ gravity model gravitational wave energy spectrum is significantly enhanced, and two signals are predicted which can be verified by future gravitational wave experiments.
Fast full N-body simulations of generic modified gravity: conformal coupling models ; We present MG-GLAM, a code developed for the very fast production of full N-body cosmological simulations in modified gravity (MG) models. We describe the implementation, numerical tests and first results of a large suite of cosmological simulations for three classes of MG models with conformal coupling terms: the $f(R)$ gravity, symmetron and coupled quintessence models. Derived from the parallel particle-mesh code GLAM, MG-GLAM incorporates an efficient multigrid relaxation technique to solve the characteristic nonlinear partial differential equations of these models. For $f(R)$ gravity, we have included new variants to diversify the model behaviour, and we have tailored the relaxation algorithms to these to maintain high computational efficiency. In a companion paper, we describe versions of this code developed for derivative coupling MG models, including the Vainshtein- and K-mouflage-type models. MG-GLAM can model the prototypes for most MG models of interest, and is broad and versatile. The code is highly optimised, with a tremendous speedup of a factor of more than a hundred compared with earlier N-body codes, while still giving accurate predictions of the matter power spectrum and dark matter halo abundance. MG-GLAM is ideal for the generation of large numbers of MG simulations that can be used in the construction of mock galaxy catalogues and the production of accurate emulators for ongoing and future galaxy surveys.
Physics-informed linear regression is competitive with two Machine Learning methods in residential building MPC ; Because physics-based building models are difficult to obtain as each building is individual, there is an increasing interest in generating models suitable for building MPC directly from measurement data. Machine learning methods have been widely applied to this problem and validated mostly in simulation; there are, however, few studies on a direct comparison of different models or validation in real buildings to be found in the literature. Methods that are indeed validated in application often lead to computationally complex non-convex optimization problems. Here we compare physics-informed Autoregressive Moving Average with Exogenous Inputs (ARMAX) models to Machine Learning models based on Random Forests and Input Convex Neural Networks, and the resulting convex MPC schemes, in experiments on a practical building application with the goal of minimizing energy consumption while maintaining occupant comfort, and in a numerical case study. We demonstrate that Predictive Control in general leads to savings between 26% and 49% of heating and cooling energy, compared to the building's baseline hysteresis controller. Moreover, we show that all model types lead to satisfactory control performance in terms of constraint satisfaction and energy reduction. However, we also see that the physics-informed ARMAX models have a lower computational burden and a superior sample efficiency compared to the Machine Learning based models. Moreover, even if abundant training data is available, the ARMAX models have a significantly lower prediction error than the Machine Learning models, which indicates that the encoded physics-based prior of the former cannot independently be found by the latter.
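For reference, a generic ARMAX structure of the kind described above can be written as follows; the specific regressors (heating/cooling power, ambient temperature, solar radiation) and the sign constraints are assumptions about how a physical prior might be encoded, not the paper's exact model.

```latex
% Generic ARMAX room-temperature model (regressor choice illustrative):
\begin{equation}
  T_{k+1} \;=\; \sum_{i=0}^{n_a-1} a_i\, T_{k-i}
          \;+\; \sum_{i=0}^{n_b-1} b_i^{\top} u_{k-i}
          \;+\; \sum_{i=0}^{n_c-1} c_i\, e_{k-i} \;+\; e_{k+1},
\end{equation}
% with u_k collecting, e.g., heating/cooling power, ambient temperature and
% solar radiation; constraining coefficient signs is one way to encode the
% physical prior, and the model stays affine in u_k, so the MPC problem that
% uses it as the prediction model remains convex.
```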
Application of deep learning to camera trap data for ecologists in planning / engineering: Can captivity imagery train a model which generalises to the wild? ; Understanding the abundance of a species is the first step towards understanding both its long-term sustainability and the impact that we may be having upon it. Ecologists use camera traps to remotely survey for the presence of specific animal species. Previous studies have shown that deep learning models can be trained to automatically detect and classify animals within camera trap imagery with high levels of confidence. However, the ability to train these models is reliant upon having enough high-quality training data. What happens when the animal is rare or the data sets are non-existent? This research proposes an approach of using images of rare animals in captivity (focusing on the Scottish wildcat) to generate the training dataset. We explore the challenges associated with generalising a model trained on captivity data when applied to data collected in the wild. The research is contextualised by the needs of ecologists in planning/engineering. Following precedents from other research, this project establishes an ensemble of object detection, image segmentation and image classification models which are then tested using different image manipulation and class structuring techniques to encourage model generalisation. The research concludes, in the context of the Scottish wildcat, that models trained on captivity imagery cannot be generalised to wild camera trap imagery using existing techniques. However, final model performances based on a two-class model (Wildcat vs Not Wildcat) achieved an overall accuracy score of 81.6% and a Wildcat accuracy score of 54.8% on a test set in which only 1% of images contained a wildcat. This suggests using captivity images is feasible with further research. This is the first research which attempts to generate a training set based on captivity data and the first to explore the development of such models in the context of ecologists in planning/engineering.
ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation ; Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge enhanced models, and a model with 10 billion parameters was trained. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts. To reduce the computation overhead and carbon emission, we propose an online distillation framework for ERNIE 3.0 Titan, where the teacher model teaches students and trains itself simultaneously. ERNIE 3.0 Titan is the largest Chinese dense pre-trained model so far. Empirical results show that ERNIE 3.0 Titan outperforms the state-of-the-art models on 68 NLP datasets.
Observation-based modelling of the energetic storm particle event of 14 July 2012 ; We model the energetic storm particle (ESP) event of 14 July 2012 using the energetic particle acceleration and transport model named PARADISE, together with the solar wind and coronal mass ejection (CME) model named EUHFORIA. The simulation results illustrate both the capabilities and limitations of the utilised models. We show that the models capture some essential structural features of the ESP event; however, for some aspects the simulations and observations diverge. We describe and, to some extent, assess the sources of errors in the modelling chain of EUHFORIA and PARADISE and discuss how they may be mitigated in the future. The PARADISE model evolves energetic particle distributions in a background solar wind generated by the ideal MHD module of EUHFORIA. The CME generating the ESP event is simulated by using the spheromak model of EUHFORIA, which approximates the CME's flux rope as a linear force-free spheroidal magnetic field. In addition, a tool was developed to trace CME-driven shock waves in the EUHFORIA simulation domain. This tool is used in PARADISE to (i) inject 50 keV protons continuously at the CME-driven shock and (ii) include a foreshock and a sheath region, in which the energetic particle parallel mean free path, $\lambda_\parallel$, decreases towards the shock wave. The value of $\lambda_\parallel$ at the shock wave is estimated from in situ observations of the ESP event. For energies below 1 MeV, the simulation results agree well with both the upstream and downstream components of the ESP event observed by the Advanced Composition Explorer (ACE). This suggests that these low-energy protons are mainly the result of interplanetary particle acceleration. In the downstream region, the sharp drop in the energetic particle intensities is reproduced at the entry into the following magnetic cloud, illustrating the importance of a magnetised CME model.
Advantages and Disadvantages of Dedicated Model Transformation Languages: A Qualitative Interview Study ; Model-driven development envisages the use of model transformations to evolve models. Model transformation languages, developed for this task, are touted with many benefits over general purpose programming languages. However, a large number of these claims have not yet been substantiated. They are also made without the context necessary to be able to critically assess their merit or build meaningful empirical studies around them. The objective of our work is to elicit the reasoning, influences and background knowledge that lead people to assume benefits or drawbacks of model transformation languages. We conducted a large-scale interview study involving 56 participants from research and industry. Interviewees were presented with claims about model transformation languages and were asked to provide reasons for their assessment thereof. We qualitatively analysed the responses to find factors that influence the properties of model transformation languages as well as explanations of how exactly they do so. Our interviews show that general-purpose expressiveness of GPLs, domain-specific capabilities of MTLs, as well as tooling, all have strong influences on how people view properties of model transformation languages. Moreover, the Choice of MTL, the Use Case for which a transformation should be developed, as well as the Skills of involved stakeholders, have a moderating effect on the influences, by changing the context to consider. There is a broad body of experience that suggests positive and negative influences for properties of MTLs. Our data suggests that much needs to be done in order to convey the viability of model transformation languages. Efforts to provide more empirical substance need to be undertaken, and lackluster language capabilities and tooling need to be improved upon. We suggest several approaches for this that can be based on the results of the presented study.
AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications ; A critical step in virtual dental treatment planning is to accurately delineate all tooth-bone structures from CBCT with high fidelity and accurate anatomical information. Previous studies have established several methods for CBCT segmentation using deep learning. However, the inherent resolution discrepancy of CBCT and the loss of occlusal and dentition information largely limited its clinical applicability. Here, we present a Deep Dental Multimodal Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model (the most accurate digital dental model), and a fusion model to generate 3D fused crown-root-bone structures with high fidelity and accurate occlusal and dentition information. Our model was trained with a large-scale dataset with 503 CBCT scans and 28,559 IOS meshes manually annotated by experienced human experts. For CBCT segmentation, we use a five-fold cross-validation test, each with 50 CBCT scans, and our model achieves an average Dice coefficient and IoU of 93.99% and 88.68%, respectively, significantly outperforming the baselines. For IOS segmentation, our model achieves an mIoU of 93.07% and 95.70% on the maxillary and mandible on a test set of 200 IOS meshes, which are 1.77% and 3.52% higher than the state-of-the-art method. Our DDMA framework takes about 20 to 25 minutes to generate the fused 3D mesh model following the sequential processing order, compared to over 5 hours by human experts. Notably, our framework has been incorporated into a software by a clear aligner manufacturer, and real-world clinical cases demonstrate that our model can visualize crown-root-bone structures during the entire orthodontic treatment and can predict risks like dehiscence and fenestration. These findings demonstrate the potential of multimodal deep learning to improve the quality of digital dental models and help dentists make better clinical decisions.
Global geomagnetic perturbation forecasting using Deep Learning ; Geomagnetically Induced Currents (GICs) arise from spatio-temporal changes to Earth's magnetic field which arise from the interaction of the solar wind with Earth's magnetosphere, and drive catastrophic destruction to our technologically dependent society. Hence, computational models to forecast GICs globally with large forecast horizon, high spatial resolution and temporal cadence are of increasing importance to perform prompt necessary mitigation. Since GIC data is proprietary, the time variability of the horizontal component of the magnetic field perturbation (dB/dt) is used as a proxy for GICs. In this work, we develop a fast, global dB/dt forecasting model, which forecasts 30 minutes into the future using only solar wind measurements as input. The model summarizes 2 hours of solar wind measurements using a Gated Recurrent Unit, and generates forecasts of coefficients which are folded with a spherical harmonic basis to enable global forecasts. When deployed, our model produces results in under a second, and generates global forecasts for horizontal magnetic perturbation components at 1-minute cadence. We evaluate our model against models in the literature for two specific storms of 5 August 2011 and 17 March 2015, while having a self-consistent benchmark model set. Our model outperforms, or has consistent performance with, state-of-the-practice high-time-cadence local and low-time-cadence global models, while also outperforming or having comparable performance with the benchmark models. Such quick inferences at high temporal cadence and arbitrary spatial resolutions may ultimately enable accurate forewarning of dB/dt for any place on Earth, resulting in precautionary measures to be taken in an informed manner.
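A minimal sketch of the described pipeline, assuming a PyTorch setting: a GRU encodes 120 one-minute solar-wind samples, a linear head emits spherical-harmonic coefficients, and folding them with a precomputed real spherical-harmonic basis gives values at arbitrary locations. The degree cutoff, feature count and placeholder basis are assumptions of this sketch, not the published model.

```python
import torch
import torch.nn as nn

L_MAX = 5                       # assumed spherical-harmonic degree cutoff
N_COEF = (L_MAX + 1) ** 2       # number of real SH basis functions

class GlobalDbdtForecaster(nn.Module):
    """Illustrative model: a GRU summarises 120 one-minute solar-wind samples,
    a linear head emits spherical-harmonic coefficients, and folding them with
    a precomputed basis yields a global dB/dt proxy map."""
    def __init__(self, n_features: int = 6, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_COEF)

    def forward(self, solar_wind: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
        # solar_wind: (batch, 120, n_features); basis: (n_locations, N_COEF),
        # i.e. real spherical harmonics evaluated at the target grid points.
        _, h = self.encoder(solar_wind)
        coeffs = self.head(h[-1])               # (batch, N_COEF)
        return coeffs @ basis.T                 # (batch, n_locations)

# Placeholder basis; in practice evaluate real Y_lm at the desired lat/lon grid.
basis = torch.randn(648, N_COEF)
model = GlobalDbdtForecaster()
forecast = model(torch.randn(2, 120, 6), basis)   # (2, n_locations)
```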
Simple lessons from complex learning: what a neural network model learns about cosmic structure formation ; We train a neural network model to predict the full phase space evolution of cosmological N-body simulations. Its success implies that the neural network model is accurately approximating the Green's function expansion that relates the initial conditions of the simulations to its outcome at later times in the deeply nonlinear regime. We test the accuracy of this approximation by assessing its performance on well understood simple cases that have either known exact solutions or well understood expansions. These scenarios include spherical configurations, isolated plane waves, and two interacting plane waves: initial conditions that are very different from the Gaussian random fields used for training. We find our model generalizes well to these well understood scenarios, demonstrating that the networks have inferred general physical principles and learned the nonlinear mode couplings from the complex, random Gaussian training data. These tests also provide a useful diagnostic for finding the model's strengths and weaknesses, and identifying strategies for model improvement. We also test the model on initial conditions that contain only transverse modes, a family of modes that differ not only in their phases but also in their evolution from the longitudinal growing modes used in the training set. When the network encounters these initial conditions that are orthogonal to the training set, the model fails completely. In addition to these simple configurations, we evaluate the model's predictions for the density, displacement, and momentum power spectra with standard initial conditions for N-body simulations. We compare these summary statistics against N-body results and an approximate, fast simulation method called COLA. Our model achieves percent level accuracy at nonlinear scales of $k \sim 1\,\mathrm{Mpc}^{-1}\,h$, representing a significant improvement over COLA.
Covariances of density probability distribution functions. Lessons from hierarchical models ; Context: Statistical properties of the cosmic density fields are to a large extent encoded in the shape of the one-point density probability distribution functions (PDF). In order to successfully exploit such observables, a detailed functional form of the covariance matrix of the one-point PDF is needed. Aims: The objective is to model the properties of this covariance for general stochastic density fields in a cosmological context. Methods: Leading and subleading contributions to the covariance were identified within a large class of models, the so-called hierarchical models. The validity of the proposed forms for the covariance matrix was assessed with the help of a toy model, the minimum tree model, for which a corpus of exact results could be obtained (forms of the one- and two-point PDF, large-scale density-bias functions, and the full covariance matrix of the one-point PDF). Results: It is first shown that the covariance matrix elements are directly related to the spatial average of the two-point density PDF within the sample. The dominant contribution to this average is explicitly given for hierarchical models, which leads to the construction of specific density-bias functions. However, this contribution alone cannot be used to construct an operational likelihood function. Short-distance effects are found to have an important impact but are more difficult to derive, as they depend more on the details of the model. However, a simple and generic form of these contributions is proposed. Detailed comparisons in the context of the Rayleigh-Levy flight model show that the large-scale effects capture the bulk of the supersample effects and that, by adding the short-distance contributions, a qualitatively correct model of the likelihood function can be obtained.
SETAR-Tree: A Novel and Accurate Tree Algorithm for Global Time Series Forecasting ; Threshold Autoregressive (TAR) models have been widely used by statisticians for non-linear time series forecasting during the past few decades, due to their simplicity and mathematical properties. On the other hand, in the forecasting community, general-purpose tree-based regression algorithms (forests, gradient-boosting) have become popular recently due to their ease of use and accuracy. In this paper, we explore the close connections between TAR models and regression trees. These enable us to use the rich methodology from the literature on TAR models to define a hierarchical TAR model as a regression tree that trains globally across series, which we call SETAR-Tree. In contrast to the general-purpose tree-based models that do not primarily focus on forecasting, and calculate averages at the leaf nodes, we introduce a new forecasting-specific tree algorithm that trains global Pooled Regression (PR) models in the leaves, allowing the models to learn cross-series information, and also uses some time-series-specific splitting and stopping procedures. The depth of the tree is controlled by conducting a statistical linearity test commonly employed in TAR models, as well as by measuring the error reduction percentage at each node split. Thus, the proposed tree model requires minimal external hyperparameter tuning and provides competitive results under its default configuration. We also use this tree algorithm to develop a forest where the forecasts provided by a collection of diverse SETAR-Trees are combined during the forecasting process. In our evaluation on eight publicly available datasets, the proposed tree and forest models are able to achieve significantly higher accuracy than a set of state-of-the-art tree-based algorithms and forecasting benchmarks across four evaluation metrics.
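A rough sketch of a single SETAR-style node split as described above: each lag is tried as the threshold variable, pooled linear autoregressions are fit in the candidate children, and the split with the largest error reduction is kept (the statistical linearity test is only indicated in a comment). Function names, the quantile grid and the minimum leaf size are illustrative choices, not the paper's defaults.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Split:
    lag: int
    threshold: float
    error_reduction: float

def _design(X: np.ndarray) -> np.ndarray:
    return np.hstack([X, np.ones((len(X), 1))])   # lags plus intercept

def node_sse(X: np.ndarray, y: np.ndarray) -> float:
    """Sum of squared errors of a pooled linear AR model fit in this node."""
    A = _design(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

def best_setar_split(X: np.ndarray, y: np.ndarray, min_leaf: int = 30) -> Optional[Split]:
    """X: lagged values (one row per training window, pooled across series);
    y: next value. Tries every lag as threshold variable over a quantile grid."""
    parent = node_sse(X, y)
    best: Optional[Split] = None
    for lag in range(X.shape[1]):
        for thr in np.quantile(X[:, lag], np.linspace(0.1, 0.9, 9)):
            left = X[:, lag] <= thr
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            child = node_sse(X[left], y[left]) + node_sse(X[~left], y[~left])
            reduction = (parent - child) / parent
            if best is None or reduction > best.error_reduction:
                best = Split(lag, float(thr), reduction)
    # SETAR-Tree additionally keeps a split only if a statistical linearity test
    # and/or a minimum error-reduction percentage is passed; omitted here.
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.where(X[:, 0] > 0.0, 0.8, -0.4) * X[:, 0] + rng.normal(scale=0.1, size=500)
print(best_setar_split(X, y))
```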
Constraining $f(R)$ Gravity Models with The Late-Time Cosmological Evolution ; The $f(R)$ Modified Gravity is a modification of Einstein's general theory of relativity, which aims to explain issues beyond The Standard Model of Cosmology such as dark energy and dark matter. As a theory of gravitation that governs the major dynamics on the large scale of the universe, an $f(R)$ model should be able to explain the transition from a matter-dominated universe to a dark-energy-dominated universe. Assuming that the density parameter of the radiation can be neglected during the transition from a matter-dominated universe to a dark-energy-dominated universe, we find some fixed points regarding the dynamical stability of the density parameters of the model. The phase transition can be achieved if the $f(R)$ model can connect the fixed point $P_5$, representing the matter-dominated era, to the fixed point $P_1$, representing the dark-energy-dominated era. The method to evaluate that state transition is called fixed-point analysis. In this study, we analyze the viability of $f(R)$ models proposed by Starobinsky, Hu-Sawicki, and Gogoi-Goswami regarding the phase transition from a matter-dominated universe to a dark-energy-dominated universe. It is shown that those models are viable by choosing some set of appropriate parameters. For example, in the Starobinsky and Hu-Sawicki models, the parameter $\mu$ can be chosen to correspond to the lower bound of $x_d = R_1/R_c$, where $R_1$ represents the de Sitter point. Meanwhile, for the Gogoi-Goswami model, the same results can be achieved by taking $\alpha$ and $\beta$ parameters satisfying the existence and stability conditions for the de Sitter point. From these results, it can be concluded that those $f(R)$ models allow such phase transitions of the universe to realize the late-time accelerated expansion.
Toward Physically Plausible Data-Driven Models: A Novel Neural Network Approach to Symbolic Regression ; Many real-world systems can be described by mathematical models that are human-comprehensible, easy to analyze and help explain the system's behavior. Symbolic regression is a method that can automatically generate such models from data. Historically, symbolic regression has been predominantly realized by genetic programming, a method that evolves populations of candidate solutions that are subsequently modified by genetic operators (crossover and mutation). However, this approach suffers from several deficiencies: it does not scale well with the number of variables and samples in the training data; models tend to grow in size and complexity without an adequate accuracy gain; and it is hard to fine-tune the model coefficients using just genetic operators. Recently, neural networks have been applied to learn the whole analytic model, i.e., its structure and the coefficients, using gradient-based optimization algorithms. This paper proposes a novel neural network-based symbolic regression method that constructs physically plausible models based on even very small training data sets and prior knowledge about the system. The method employs an adaptive weighting scheme to effectively deal with multiple loss function terms and an epoch-wise learning process to reduce the chance of getting stuck in poor local optima. Furthermore, we propose a parameter-free method for choosing the model with the best interpolation and extrapolation performance out of all the models generated throughout the whole learning process. We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system. The results clearly show the potential of the method to find parsimonious models that comply with the prior knowledge provided.
Mean field models of flux transport dynamo and meridional circulation in the Sun and stars ; The most widely accepted model of the solar cycle is the flux transport dynamo model. This model evolved out of the traditional $\alpha\Omega$ dynamo model, which was first developed at a time when the existence of the Sun's meridional circulation was not known. In these models, the toroidal magnetic field which gives rise to sunspots is generated by the stretching of the poloidal field by solar differential rotation. The primary source of the poloidal field in the flux transport models is attributed to the Babcock-Leighton mechanism, in contrast to the mean-field $\alpha$-effect used in earlier models. With the realization that the Sun has a meridional circulation, which is poleward at the surface and is expected to be equatorward at the bottom of the convection zone, its importance for transporting the magnetic fields in the dynamo process was recognized. Much of our understanding about the physics of both the meridional circulation and the flux transport dynamo has come from the mean field theory obtained by averaging the equations of MHD over turbulent fluctuations. The mean field theory of meridional circulation makes clear how it arises out of an interplay between the centrifugal and thermal wind terms. We provide a broad review of mean field theories for solar magnetic fields and flows and the flux transport dynamo modeling paradigm, and highlight some of their applications to solar and stellar magnetic cycles. We also discuss how the dynamo-generated magnetic field acts on the meridional circulation of the Sun and how the fluctuations in the meridional circulation, in turn, affect the solar dynamo. We conclude with some remarks on how the synergy of mean field theories, flux transport dynamo models, and direct numerical simulations can inspire the future of this field.
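For reference, the standard mean-field induction equation that such flux transport dynamo models solve is shown below (textbook form, not quoted from this review).

```latex
% Standard mean-field induction equation underlying flux transport dynamo models:
\begin{equation}
  \frac{\partial \langle \mathbf{B} \rangle}{\partial t}
  = \nabla \times \Big( \langle \mathbf{v} \rangle \times \langle \mathbf{B} \rangle
    + \alpha \langle \mathbf{B} \rangle
    - \eta_t\, \nabla \times \langle \mathbf{B} \rangle \Big),
\end{equation}
% where <v> contains the differential rotation and the meridional circulation,
% the alpha term (or a Babcock-Leighton surface source in flux transport models)
% regenerates the poloidal field, and eta_t is the turbulent magnetic diffusivity.
```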
Domain-knowledge Inspired Pseudo Supervision (DIPS) for Unsupervised Image-to-Image Translation Models to Support Cross-Domain Classification ; The ability to classify images is dependent on having access to large labeled datasets and testing on data from the same domain that the model can train on. Classification becomes more challenging when dealing with new data from a different domain, where gathering and especially labeling a larger image dataset for retraining a classification model requires a labor-intensive human effort. Cross-domain classification frameworks were developed to handle this data domain shift problem by utilizing unsupervised image-to-image translation models to translate an input image from the unlabeled domain to the labeled domain. The problem with these unsupervised models lies in their unsupervised nature. For lack of annotations, it is not possible to use the traditional supervised metrics to evaluate these translation models in order to pick the best saved checkpoint model. This paper introduces a new method called Domain-knowledge Inspired Pseudo Supervision (DIPS), which utilizes domain-informed Gaussian Mixture Models to generate pseudo annotations to enable the use of traditional supervised metrics. This method was designed specifically to support cross-domain classification applications, contrary to other typically used metrics such as the FID, which were designed to evaluate the model in terms of the quality of the generated image from a human-eye perspective. DIPS proves its effectiveness by outperforming various GAN evaluation metrics, including FID, when selecting the optimal saved checkpoint model. It is also evaluated against truly supervised metrics. Furthermore, DIPS showcases its robustness and interpretability by demonstrating a strong correlation with truly supervised metrics, highlighting its superiority over existing state-of-the-art alternatives. The code and data to replicate the results can be found on the official GitHub repository: https://github.com/Hindawi91/DIPS
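A minimal sketch of the checkpoint-selection idea, assuming scikit-learn: a Gaussian mixture is fit on domain-informed features from the labeled domain, its components are mapped to pseudo-classes, and each image-to-image checkpoint is scored by the pseudo-accuracy of its translated images. The feature construction, component-to-class mapping and toy data are assumptions of this sketch, not the DIPS pipeline itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pseudo_labeler(features: np.ndarray, n_classes: int, seed: int = 0) -> GaussianMixture:
    """Fit a GMM on domain-informed features (e.g. simple intensity/texture
    statistics) extracted from labeled-domain images; each component acts as a
    pseudo-class."""
    return GaussianMixture(n_components=n_classes, random_state=seed).fit(features)

def pseudo_accuracy(gmm, translated_feats, source_labels, component_to_class) -> float:
    """Score one translation checkpoint: pseudo-label the translated images with
    the GMM and compare against the known source labels, which a good
    translation should preserve."""
    comp = gmm.predict(translated_feats)
    pseudo = np.array([component_to_class[c] for c in comp])
    return float((pseudo == source_labels).mean())

# Toy example with 2 pseudo-classes and 4-dimensional hand-crafted features.
rng = np.random.default_rng(0)
ref = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(4, 1, (200, 4))])
gmm = fit_pseudo_labeler(ref, n_classes=2)
mapping = {gmm.predict(np.zeros((1, 4)))[0]: 0, gmm.predict(np.full((1, 4), 4.0))[0]: 1}
labels = np.array([0] * 50 + [1] * 50)
for ckpt, shift in enumerate([0.2, 1.5]):          # two hypothetical checkpoints
    fake = np.vstack([rng.normal(0 + shift, 1, (50, 4)), rng.normal(4 - shift, 1, (50, 4))])
    print(f"checkpoint {ckpt}: pseudo-accuracy = {pseudo_accuracy(gmm, fake, labels, mapping):.2f}")
```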
Fine-grained Audible Video Description ; We explore a new task for audio-visual-language modeling called fine-grained audible video description (FAVD). It aims to provide detailed textual descriptions for the given audible videos, including the appearance and spatial locations of each object, the actions of moving objects, and the sounds in videos. Existing visual-language modeling tasks often concentrate on visual cues in videos while undervaluing the language and audio modalities. On the other hand, FAVD requires not only audio-visual-language modeling skills but also paragraph-level language generation abilities. We construct the first fine-grained audible video description benchmark (FAVDBench) to facilitate this research. For each video clip, we first provide a one-sentence summary of the video, i.e., the caption, followed by 4-6 sentences describing the visual details and 1-2 audio-related descriptions at the end. The descriptions are provided in both English and Chinese. We create two new metrics for this task: an EntityScore to gauge the completeness of entities in the visual descriptions, and an AudioScore to assess the audio descriptions. As a preliminary approach to this task, we propose an audio-visual-language transformer that extends an existing video captioning model with an additional audio branch. We combine the masked language modeling and auto-regressive language modeling losses to optimize our model so that it can produce paragraph-level descriptions. We illustrate the efficiency of our model in audio-visual-language modeling by evaluating it against the proposed benchmark using both conventional captioning metrics and our proposed metrics. We further put our benchmark to the test in video generation models, demonstrating that employing fine-grained video descriptions can create more intricate videos than using captions.
TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs ; Artificial Intelligence (AI) has made incredible progress recently. On the one hand, advanced foundation models like ChatGPT can offer powerful conversation, in-context learning and code generation abilities on a broad range of open-domain tasks. They can also generate high-level solution outlines for domain-specific tasks based on the common sense knowledge they have acquired. However, they still face difficulties with some specialized tasks because they lack enough domain-specific data during pre-training, or they often make errors in their neural network computations on tasks that need accurate execution. On the other hand, there are also many existing models and systems (symbolic-based or neural-based) that can do some domain-specific tasks very well. However, due to their different implementations or working mechanisms, they are not easily accessible or compatible with foundation models. Therefore, there is a clear and pressing need for a mechanism that can leverage foundation models to propose task solution outlines and then automatically match some of the sub-tasks in the outlines to the off-the-shelf models and systems with special functionalities to complete them. Inspired by this, we introduce TaskMatrix.AI as a new AI ecosystem that connects foundation models with millions of APIs for task completion. Unlike most previous work that aimed to improve a single AI model, TaskMatrix.AI focuses more on using existing foundation models as a brain-like central system and APIs of other AI models and systems as sub-task solvers to achieve diversified tasks in both digital and physical domains. As a position paper, we present our vision of how to build such an ecosystem, explain each key component, and use case studies to illustrate both the feasibility of this vision and the main challenges we need to address next.
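To make the "outline then dispatch to APIs" idea concrete, here is a toy illustration, not the TaskMatrix.AI implementation: a registry maps sub-task names to callable APIs, and each step of a solution outline is routed to a matching handler. The API names and registry structure are hypothetical.

```python
# Toy dispatcher matching outline steps to registered APIs (illustrative only).
from typing import Callable, Dict, List, Tuple

API_REGISTRY: Dict[str, Callable[[str], str]] = {
    "image_caption": lambda arg: f"caption({arg})",     # stand-in for a real vision API
    "text_to_speech": lambda arg: f"speech({arg})",     # stand-in for a real TTS API
}

def execute_outline(outline: List[Tuple[str, str]]) -> List[str]:
    """Each outline step is (api_name, argument); unmatched steps are reported."""
    results = []
    for api_name, arg in outline:
        handler = API_REGISTRY.get(api_name)
        results.append(handler(arg) if handler else f"no API found for '{api_name}'")
    return results

print(execute_outline([("image_caption", "photo.jpg"), ("translate", "hello")]))
```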
SurgicalGPT: End-to-End Language-Vision GPT for Visual Question Answering in Surgery ; Advances in GPT-based large language models (LLMs) are revolutionizing natural language processing, exponentially increasing their use across various domains. Incorporating uni-directional attention, these autoregressive LLMs can generate long and coherent paragraphs. However, for visual question answering (VQA) tasks that require both vision and language processing, models with bi-directional attention, or models employing fusion techniques, are often employed to capture the context of multiple modalities all at once. As GPT does not natively process vision tokens, to exploit the advancements in GPT models for VQA in robotic surgery, we design an end-to-end trainable Language-Vision GPT (LV-GPT) model that expands the GPT2 model to include vision input (image). The proposed LV-GPT incorporates a feature extractor (vision tokenizer) and vision token embedding (token type and pose). Given the limitations of uni-directional attention in GPT models and their ability to generate coherent long paragraphs, we carefully sequence the word tokens before the vision tokens, mimicking the human thought process of understanding the question before inferring an answer from an image. Quantitatively, we prove that the LV-GPT model outperforms other state-of-the-art VQA models on two publicly available surgical VQA datasets (based on the endoscopic vision challenge robotic scene segmentation 2018 and CholecTriplet2021) and on our newly annotated dataset (based on the holistic surgical scene dataset). We further annotate all three datasets to include question-type annotations to allow sub-type analysis. Furthermore, we extensively study and present the effects of token sequencing, and of token type and pose embedding for vision tokens, in the LV-GPT model.
Evaluating Open-Domain Question Answering in the Era of Large Language Models ; Lexical matching remains the de facto evaluation method for open-domain question answering (QA). Unfortunately, lexical matching fails completely when a plausible candidate answer does not appear in the list of gold answers, which is increasingly the case as we shift from extractive to generative models. The recent success of large language models (LLMs) for QA aggravates lexical matching failures since candidate answers become longer, thereby making matching with the gold answers even more challenging. Without accurate evaluation, the true progress in open-domain QA remains unknown. In this paper, we conduct a thorough analysis of various open-domain QA models, including LLMs, by manually evaluating their answers on a subset of NQ-open, a popular benchmark. Our assessments reveal that, while the true performance of all models is significantly underestimated, the performance of the InstructGPT (zero-shot) LLM increases by nearly 60%, making it on par with existing top models, and the InstructGPT (few-shot) model actually achieves a new state-of-the-art on NQ-open. We also find that more than 50% of lexical matching failures are attributed to semantically equivalent answers. We further demonstrate that regex matching ranks QA models consistently with human judgments, although it still suffers from unnecessary strictness. Finally, we demonstrate that automated evaluation models are a reasonable surrogate for lexical matching in some circumstances, but not for long-form answers generated by LLMs. The automated models struggle in detecting hallucinations in LLM answers and are thus unable to evaluate LLMs. At this time, there appears to be no substitute for human evaluation.
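The two automatic protocols contrasted above can be sketched as follows; the normalization shown is a common convention for QA evaluation, not necessarily the paper's exact recipe.

```python
# Exact lexical match vs. a more permissive regex-based match over gold answers.
import re
import string
from typing import List

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def lexical_match(candidate: str, gold_answers: List[str]) -> bool:
    """Strict: the whole candidate must equal one of the gold answers."""
    return normalize(candidate) in {normalize(g) for g in gold_answers}

def regex_match(candidate: str, gold_answers: List[str]) -> bool:
    """Permissive: a gold answer appearing inside a long generated answer counts."""
    cand = normalize(candidate)
    return any(re.search(r"\b" + re.escape(normalize(g)) + r"\b", cand) for g in gold_answers)

print(lexical_match("It is Paris.", ["Paris"]))  # False: long generative answers fail exact match
print(regex_match("It is Paris.", ["Paris"]))    # True
```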
Successive Affine Learning for Deep Neural Networks ; This paper introduces a successive affine learning (SAL) model for constructing deep neural networks (DNNs). Traditionally, a DNN is built by solving a non-convex optimization problem. It is often challenging to solve such a problem numerically due to its non-convexity and its large number of layers. To address this challenge, inspired by the human education system, the multi-grade deep learning (MGDL) model was recently initiated by the author of this paper. The MGDL model learns a DNN in several grades, in each of which one constructs a shallow DNN consisting of a relatively small number of layers. The MGDL model still requires solving several non-convex optimization problems. The proposed SAL model mutates from the MGDL model. Noting that each layer of a DNN consists of an affine map followed by an activation function, we propose to learn the affine map by solving a quadratic/convex optimization problem, which involves the activation function only after the weight matrix and the bias vector for the current layer have been trained. In the context of function approximation, for a given function the SAL model generates an expansion of the function with adaptive basis functions in the form of DNNs. We establish the Pythagorean identity and the Parseval identity for the system generated by the SAL model. Moreover, we provide a convergence theorem for the SAL process, in the sense that either it terminates after a finite number of grades or the norms of its optimal error functions strictly decrease to a limit as the grade number increases to infinity. Furthermore, we present numerical examples as a proof of concept, which demonstrate that the proposed SAL model significantly outperforms the traditional deep learning model.
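A toy sketch of the layer-by-layer flavor of this idea, under simplifying assumptions: each grade fits one affine map by a convex least-squares problem on the current residual, applies the activation only afterwards, and the outputs accumulate into an additive expansion of the target. This is an illustrative surrogate for the paper's objective, not its actual algorithm.

```python
# Greedy layer-wise affine fitting: each grade solves a convex subproblem.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def successive_affine_fit(X, y, n_grades=3):
    features, preds = X, np.zeros_like(y)
    residual, layers = y.copy(), []
    for _ in range(n_grades):
        A = np.hstack([features, np.ones((len(features), 1))])   # affine design matrix
        W, *_ = np.linalg.lstsq(A, residual, rcond=None)          # convex least-squares fit
        layers.append(W)
        out = A @ W
        preds = preds + out                                       # additive expansion of the target
        residual = y - preds
        features = relu(out)                                      # activation applied after the fit
    return layers, preds

X = np.random.randn(200, 5)
y = np.sin(X).sum(axis=1, keepdims=True)
layers, preds = successive_affine_fit(X, y)
print(float(np.mean((preds - y) ** 2)))   # training error after three grades
```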
Valley: Video Assistant with Large Language model Enhanced abilitY ; Recently, several multi-modal models have been developed for joint image and language understanding, which have demonstrated impressive chat abilities by utilizing advanced large language models (LLMs). The process of developing such models is straightforward yet effective. It involves pre-training an adaptation module to align the semantics of the vision encoder and the language model, followed by fine-tuning on instruction-following data. However, despite the success of this pipeline in image and language understanding, its effectiveness in joint video and language understanding has not been widely explored. In this paper, we aim to develop a novel multi-modal foundation model capable of perceiving video, image, and language within a general framework. To achieve this goal, we introduce Valley (Video Assistant with Large Language model Enhanced abilitY). Specifically, our proposed Valley model is designed with a simple projection module that bridges the video, image, and language modalities, and is further unified with a multilingual LLM. We also collect multi-source vision-text pairs and adopt a spatio-temporal pooling strategy to obtain a unified vision encoding of video and image inputs for pre-training. Furthermore, we generate multi-task instruction-following video data, including multi-shot captions, long video descriptions, action recognition, causal relationship inference, etc. To obtain the instruction-following data, we design diverse rounds of task-oriented conversations between humans and videos, facilitated by ChatGPT. Qualitative examples demonstrate that our proposed model has the potential to function as a highly effective multilingual video assistant that can make complex video understanding scenarios easy. Code, data, and models will be available at https://github.com/RupertLuo/Valley.
On Hate Scaling Laws For Data-Swamps ; 'Scale the model, scale the data, scale the GPU farms' is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts remain underexplored. This is especially of critical importance in the context of visio-linguistic datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump. This large-scale data dump, which is known to have numerous drawbacks, is repeatedly mined and serves as the data motherlode for large generative models. In this paper, we (1) investigate the effect of scaling datasets on hateful content through a comparative audit of LAION-400M and LAION-2B-en, containing 400 million and 2 billion samples respectively, and (2) evaluate the downstream impact of scale on visio-linguistic models trained on these dataset variants, by measuring the racial bias of the models trained on them using the Chicago Face Dataset (CFD) as a probe. Our results show that (1) the presence of hateful content in datasets, when measured with a Hate Content Rate (HCR) metric on the inferences of the Pysentimiento hate-detection Natural Language Processing (NLP) model, increased by nearly 12%, and (2) societal biases and negative stereotypes were also exacerbated with scale on the models we evaluated. As scale increased, the tendency of the model to associate images of human faces with the 'human being' class over 7 other offensive classes reduced by half. Furthermore, for the Black female category, the tendency of the model to associate their faces with the 'criminal' class doubled, while quintupling for Black male faces. We present a qualitative and historical analysis of the model audit results, reflect on our findings and their implications for dataset curation practice, and close with a summary of our findings and potential future work to be done in this area.
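A Hate Content Rate style audit can be sketched as below: run a hate-detection classifier over sampled alt-texts/captions and report the flagged fraction. The `hate_detector` callable stands in for the NLP model used in the paper (Pysentimiento); its exact API is not reproduced here, and the keyword-based detector is a trivial placeholder.

```python
# Sketch of an HCR-style audit over dataset captions (illustrative only).
from typing import Callable, Iterable

def hate_content_rate(texts: Iterable[str],
                      hate_detector: Callable[[str], bool]) -> float:
    """Fraction of texts flagged as hateful by the supplied detector."""
    texts = list(texts)
    flagged = sum(1 for t in texts if hate_detector(t))
    return flagged / max(len(texts), 1)

# Trivial stand-in detector; a real audit would plug in a trained classifier.
toy_detector = lambda t: "hate" in t.lower()
print(hate_content_rate(["a cute dog", "I hate you"], toy_detector))  # 0.5
```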
Foundational Models Defining a New Era in Vision: A Survey and Outlook ; Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world. The complex relations between objects and their locations, ambiguities, and variations in the real-world environment can be better described in human language, naturally governed by grammatical rules, and in other modalities such as audio and depth. Models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time. These models are referred to as foundational models. The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating the robot's behavior through language instructions. In this survey, we provide a comprehensive review of such emerging foundational models, including typical architecture designs to combine different modalities (vision, text, audio, etc.), training objectives (contrastive, generative), pre-training datasets, fine-tuning mechanisms, and the common prompting patterns: textual, visual, and heterogeneous. We discuss the open challenges and research directions for foundational models in computer vision, including difficulties in their evaluation and benchmarking, gaps in their real-world understanding, limitations of their contextual understanding, biases, vulnerability to adversarial attacks, and interpretability issues. We review recent developments in this field, covering a wide range of applications of foundation models systematically and comprehensively. A comprehensive list of foundational models studied in this work is available at https://github.com/awaisrauf/AwesomeCVFoundationalModels.
Effects of Particle Sizes, Non-Isometry and Interactions in Compressible Polymer Mixtures ; We consider in this review the statistical mechanical description of a very general microscopic lattice model of a compressible and interacting multi-component mixture of linear polymers of fixed lengths. The model contains several microscopic, i.e. bare, parameters determining the thermodynamic state of the system. General arguments are given to show that these parameters must be independent not only of the lattice properties but also of the thermodynamic state, and that the voids representing free volume must be carefully treated if thermodynamics is to be properly obeyed. These facts have not always been appreciated in the literature. We focus on mixing functions, some of which have not been properly calculated in the literature. In general, mixing is non-isometric (nonzero volume of mixing) and the entropy of mixing is non-ideal. We have recently developed a lattice theory for the general model, which goes beyond the random mixing approximation (RMA) limit and is thermodynamically consistent in the entire parameter space. The theory contains terms that do not have a continuum analog except in the RMA limit or for point-like particles. Both the free volume and the total volume determine the thermodynamics of the system. The RMA limit of our theory gives rise to a new theory, which can be taken as the extension of the conventional incompressible Flory-Huggins theory and is similar in simplicity. Using our complete theory, we calculate the effects of size disparity and interactions on the thermodynamics of the model. Cohesive energies are not constant in general. Non-isometry can make the energy of mixing negative, even when all exchange interactions are repulsive. Consequently, the Scatchard-Hildebrand theory cannot be substantiated in general. Various unusual features are noted and discussed.
Higher gauge theory and a non-Abelian generalization of 2-form electrodynamics ; In conventional gauge theory, a charged point particle is described by a representation of the gauge group. If we propagate the particle along some path, the parallel transport of the gauge connection acts on this representation. The Lagrangian density of the gauge field depends on the curvature of the connection, which can be calculated from the holonomy around infinitesimal loops. For Abelian symmetry groups, say $G=U(1)$, there exists a generalization, known as p-form electrodynamics, in which (p-1)-dimensional charged objects can be propagated along p-surfaces and in which the Lagrangian depends on a generalized curvature associated with infinitesimal closed p-surfaces. In this article, we use Lie 2-groups and ideas from higher category theory in order to formulate a discrete gauge theory which generalizes these models at the level p=2 to possibly non-Abelian symmetry groups. An important feature of our model is that it involves both parallel transports along paths and generalized transports along surfaces, with a non-trivial interplay of these two types of variables. Our main result is the geometric picture, namely the assignment of non-Abelian quantities to geometrical objects in a coordinate-free way. We construct the precise assignment of variables to the curves and surfaces, the generalized local symmetries and gauge invariant actions, and we clarify which structures can be non-Abelian and which others are always Abelian. A discrete version of connections on non-Abelian gerbes is a special case of our construction. Even though the motivation sketched so far suggests applications mainly in string theory, the model presented here is also related to spin foam models of quantum gravity and may in addition provide some insight into the role of centre monopoles and vortices in lattice QCD.
Exact Master Equation and Quantum Decoherence of Two Coupled Harmonic Oscillators in a General Environment ; In this paper we derive an exact master equation for two coupled quantum harmonic oscillators interacting via bilinear coupling with a common environment at arbitrary temperature, made up of many harmonic oscillators with a general spectral density function. We first show a simple derivation based on the observation that the two-harmonic-oscillator model can be effectively mapped into that of a single harmonic oscillator in a general environment plus a free harmonic oscillator. Since the exact one-harmonic-oscillator master equation is available [Hu, Paz and Zhang, Phys. Rev. D 45, 2843 (1992)], the exact master equation with all its coefficients for this two-harmonic-oscillator model can be easily deduced from the known results of the single-harmonic-oscillator case. In the second part we give an influence functional treatment of this model and provide explicit expressions for the evolutionary operator of the reduced density matrix, which are useful for the study of decoherence and disentanglement issues. We show three applications of this master equation: on the decoherence and disentanglement of two harmonic oscillators due to their interaction with a common environment under the Markovian approximation, and on a derivation of the uncertainty principle at finite temperature for a composite object, modeled by two interacting harmonic oscillators. The exact master equation for two, and its generalization to N, harmonic oscillators interacting with a general environment is expected to be useful for the analysis of quantum coherence, entanglement, fluctuations and dissipation of mesoscopic objects, towards the construction of a theoretical framework for macroscopic quantum phenomena.
Steepest Entropy Ascent Model for Far-Nonequilibrium Thermodynamics. Unified Implementation of the Maximum Entropy Production Principle ; By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation, which extends to such frameworks the concept of Steepest Entropy Ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. The present formulation also constitutes a generalization for the quantum thermodynamics framework. In the SEA modeling principle a key role is played by the geometrical metric with respect to which the length of a trajectory in state space is measured. In the near-equilibrium limit, the metric tensor is related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits a spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. Non-negativity of the entropy production is a readily proved general feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called Maximum Entropy Production Principle. It is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far-nonequilibrium states.
Attractive Hubbard model with disorder and the generalized Anderson theorem ; Using the generalized DMFT+Sigma approach, we study the influence of disorder on single-particle properties of the normal phase and the superconducting transition temperature in the attractive Hubbard model. A wide range of attractive potentials U is studied, from the weak-coupling region, where both the instability of the normal phase and superconductivity are well described by the BCS model, towards the strong-coupling region, where the superconducting transition is due to Bose-Einstein condensation (BEC) of compact Cooper pairs, formed at temperatures much higher than the temperature of the superconducting transition. We study two typical models of the conduction band, with semi-elliptic and flat densities of states, appropriate for three-dimensional and two-dimensional systems, respectively. For the semi-elliptic density of states, the disorder influence on all single-particle properties (e.g., the density of states) is universal for arbitrary strength of electronic correlations and disorder, and is due only to the general disorder widening of the conduction band. In the case of the flat density of states, universality is absent in the general case, but the disorder influence is still mainly due to band widening, and universal behavior is restored for large enough disorder. Using the combination of the DMFT+Sigma and Nozieres-Schmitt-Rink approximations, we study the influence of disorder on the superconducting transition temperature Tc for a range of characteristic values of U and disorder, including the BCS-BEC crossover region and the limit of strong coupling. Disorder can either suppress Tc (in the weak-coupling region) or significantly increase Tc (in the strong-coupling region). However, in all cases the generalized Anderson theorem is valid, and all changes of the superconducting critical temperature are essentially due only to the general disorder widening of the conduction band.
Person Re-Identification by Camera Correlation Aware Feature Augmentation ; The challenge of person re-identification (re-id) is to match individual images of the same person captured by different non-overlapping camera views against significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learned are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, becomes an alternative resort for this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take view-specific characteristics into account. Our CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation which is designed particularly to be view-invariant, in order to facilitate cross-view adaptation by CRAFT.
When Unseen Domain Generalization is Unnecessary? Rethinking Data Augmentation ; Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, in clinically realistic environments, such methods have marginal performance due to differences in image domains, including different imaging protocols, device vendors and patient populations. Here we consider the problem of domain generalization, where a model is trained once and its performance generalizes to unseen domains. Intuitively, within a specific medical imaging modality the domain differences are smaller relative to natural-image domain variability. We rethink data augmentation for medical 3D images and propose a deep stacked transformations (DST) approach for domain generalization. Specifically, a series of n stacked transformations are applied to each image in each mini-batch during network training to account for the contribution of domain-specific shifts in medical images. We comprehensively evaluate our method on three tasks: segmentation of the whole prostate from 3D MRI, the left atrium from 3D MRI, and the left ventricle from 3D ultrasound. We demonstrate that when trained on a small source dataset, (i) on average, DST models degrade by only 11% (Dice score change) on unseen datasets, compared to the conventional augmentation degrading by 39% and the CycleGAN-based domain adaptation method degrading by 25%; (ii) when evaluated on the same domain, DST is also better, albeit only marginally; (iii) when trained on large-sized data, DST on unseen domains reaches the performance of state-of-the-art fully supervised models. These findings establish a strong benchmark for the study of domain generalization in medical imaging, and can be generalized to the design of robust deep segmentation models for clinical deployment.
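A minimal sketch of the stacked-transformations idea: apply a random chain of intensity and spatial perturbations to each 3D volume in a mini-batch. The specific transforms, probabilities and magnitudes below are illustrative assumptions, not the paper's exact augmentation list.

```python
# Randomly chained perturbations applied to a 3D image (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def stacked_transform(volume: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    v = volume.astype(np.float32)
    if rng.random() < 0.5:                       # intensity scaling / shift
        v = v * rng.uniform(0.9, 1.1) + rng.uniform(-0.1, 0.1)
    if rng.random() < 0.5:                       # additive Gaussian noise
        v = v + rng.normal(0.0, 0.05, size=v.shape)
    if rng.random() < 0.5:                       # blur, simulating resolution/protocol shift
        v = gaussian_filter(v, sigma=rng.uniform(0.5, 1.5))
    if rng.random() < 0.5:                       # isotropic rescale
        v = zoom(v, rng.uniform(0.9, 1.1), order=1)
        v = np.resize(v, volume.shape)           # crude restore of the original shape
    return v

vol = np.random.rand(32, 32, 32)
print(stacked_transform(vol).shape)  # (32, 32, 32)
```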
Resolution enhancement and realistic speckle recovery with generative adversarial modeling of micro-optical coherence tomography ; A resolution enhancement technique for optical coherence tomography (OCT), based on Generative Adversarial Networks (GANs), was developed and investigated. GANs have previously been used for resolution enhancement of photography and optical microscopy images. We have adapted and improved this technique for OCT image generation. Conditional GANs (cGANs) were trained on a novel set of ultrahigh-resolution spectral domain OCT volumes, termed micro-OCT, as the high-resolution ground truth (1 μm isotropic resolution). The ground truth was paired with a low-resolution image obtained by synthetically degrading resolution 4x in either one (1D) or both (2D) of the axial and lateral axes. Cross-sectional image (B-scan) volumes obtained from in vivo imaging of human labial (lip) tissue and mouse skin were used in separate feasibility experiments. Accuracy of resolution enhancement compared to ground truth was quantified with human perceptual accuracy tests performed by an OCT expert. The GAN loss in the optimization objective, noise injection in both the generator and discriminator models, and multi-scale discrimination were found to be important for achieving realistic speckle appearance in the generated OCT images. The utility of high-resolution speckle recovery was illustrated by an example of micro-OCT imaging of blood vessels in lip tissue. Qualitative examples applying the models to image data from outside of the training data distribution, namely human retina and mouse bladder, were also demonstrated, suggesting potential for cross-domain transferability. This preliminary study suggests that deep learning generative models trained on OCT images from high-performance prototype systems may have potential in enhancing lower-resolution data from mainstream/commercial systems, thereby bringing cutting-edge technology to the masses at low cost.
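The synthetic degradation step used to build the low/high resolution training pairs can be sketched as follows: downsample a ground-truth B-scan by 4x along one or both axes and upsample back to the original grid. Interpolation order and the helper's interface are illustrative assumptions.

```python
# Sketch of 4x synthetic degradation to create paired data for cGAN training.
import numpy as np
from scipy.ndimage import zoom

def degrade(bscan: np.ndarray, factor: int = 4, axes: str = "2D") -> np.ndarray:
    fy, fx = (1.0 / factor, 1.0 / factor) if axes == "2D" else (1.0 / factor, 1.0)
    low = zoom(bscan, (fy, fx), order=1)                       # discard high-frequency detail
    return zoom(low, (bscan.shape[0] / low.shape[0],
                      bscan.shape[1] / low.shape[1]), order=1)  # back to the original grid

hr = np.random.rand(256, 256)
lr = degrade(hr, factor=4, axes="1D")   # degrade the axial direction only
print(lr.shape)                          # (256, 256), paired with `hr` during training
```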
General-relativistic spin system ; The models of spin systems defined on Euclidean space provide powerful machinery for studying a broad range of condensed matter phenomena. While the non-relativistic effective description is sufficient for most of the applications, it is interesting to consider special and general relativistic extensions of such models. Here, we introduce a framework that allows us to construct theories of continuous spin variables on a curved spacetime. Our approach takes advantage of the results of the non-linear field space theory, which shows how to construct compact phase space models, in particular for the spherical phase space of spin. Following the methodology corresponding to a bosonization of spin systems into spin wave representations, we postulate a representation having the form of a Klein-Gordon field. This representation is equivalent to the semi-classical version of the well-known Holstein-Primakoff transformation. The general-relativistic extension of the spin wave representation is then performed, leading to general-relativistically motivated modifications of the Ising model coupled to a transversal magnetic field. The advantage of our approach is its off-shell construction, while the popular methods of coupling fermions to general relativity usually depend on the form of the Einstein field equations with matter. Furthermore, we show equivalence between the considered spin system and a Dirac-Born-Infeld type scalar field theory with a specific potential, which is also an example of a k-essence theory. Based on this, the cosmological consequences of the introduced spin-field matter content are preliminarily investigated.
SinGAN-Seg: Synthetic training data generation for medical image segmentation ; Analyzing medical data to find abnormalities is a time-consuming and costly task, particularly for rare abnormalities, requiring tremendous effort from medical experts. Artificial intelligence has become a popular tool for the automatic processing of medical data, acting as a supportive tool for doctors. However, the machine learning models used to build these tools are highly dependent on the data used to train them. Large amounts of data can be difficult to obtain in medicine due to privacy concerns, expensive and time-consuming annotations, and a general lack of data samples for infrequent lesions. Here, we present a novel synthetic data generation pipeline, called SinGAN-Seg, to produce synthetic medical images with corresponding masks using a single training image. Our method differs from traditional GANs because our model needs only a single image and the corresponding ground truth to train. Our method produces alternative artificial segmentation datasets with ground truth masks when real datasets cannot be shared. The pipeline is evaluated using qualitative and quantitative comparisons between real and synthetic data to show that the style transfer technique used in our pipeline significantly improves the quality of the generated data, and that our method is better than other state-of-the-art GANs at preparing synthetic images when the size of the training dataset is limited. By training U-Net using both real data and the synthetic data generated from the SinGAN-Seg pipeline, we show that models trained on synthetic data have performance very close to those trained on real data when the datasets have a considerable amount of data. In contrast, synthetic data generated from the SinGAN-Seg pipeline can improve the performance of segmentation models when training datasets do not have a considerable amount of data. The code is available on GitHub.
Chi-squared Amplification: Identifying Hidden Hubs ; We consider the following general hidden hubs model: an $n \times n$ random matrix $A$ with a subset $S$ of $k$ special rows (hubs); entries in rows outside $S$ are generated from the probability distribution $p_0 \sim N(0,\sigma_0^2)$; for each row in $S$, some $k$ of its entries are generated from $p_1 \sim N(0,\sigma_1^2)$, $\sigma_1 > \sigma_0$, and the rest of the entries from $p_0$. The problem is to identify the high-degree hubs efficiently. This model includes and significantly generalizes the planted Gaussian Submatrix Model, where the special entries are all in a $k \times k$ submatrix. There are two well-known barriers: if $k \geq c\sqrt{n\ln n}$, just the row sums are sufficient to find $S$ in the general model. For the submatrix problem, this can be improved by a $\sqrt{\ln n}$ factor to $k \ge c\sqrt{n}$ by spectral methods or combinatorial methods. In the variant with $p_0 = \pm 1$ (with probability $1/2$ each) and $p_1 \equiv 1$, neither barrier has been broken. We give a polynomial-time algorithm to identify all the hidden hubs with high probability for $k \ge n^{0.5-\delta}$ for some $\delta > 0$, when $\sigma_1^2 > 2\sigma_0^2$. The algorithm extends to the setting where planted entries might have different variances, each at least as large as $\sigma_1^2$. We also show a nearly matching lower bound: for $\sigma_1^2 \le 2\sigma_0^2$, there is no polynomial-time Statistical Query algorithm for distinguishing between a matrix whose entries are all from $N(0,\sigma_0^2)$ and a matrix with $k = n^{0.5-\delta}$ hidden hubs, for any $\delta > 0$. The lower bound as well as the algorithm are related to whether the chi-squared distance of the two distributions diverges. At the critical value $\sigma_1^2 = 2\sigma_0^2$, we show that the general hidden hubs problem can be solved for $k \geq c\sqrt{n}(\ln n)^{1/4}$, improving on the naive row-sum-based method.
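The two baselines contrasted above can be illustrated numerically on a synthetic instance of the model: plain row sums (blind to the zero-mean planted entries) versus a chi-squared style row statistic that aggregates squared entries and is sensitive to the variance amplification. The thresholding below is illustrative, not the paper's full algorithm.

```python
# Synthetic hidden hubs instance: row sums vs. a chi-squared row statistic.
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma0, sigma1 = 2000, 60, 1.0, 2.0       # note sigma1**2 > 2 * sigma0**2

A = rng.normal(0.0, sigma0, size=(n, n))
hubs = rng.choice(n, size=k, replace=False)
for i in hubs:                                   # plant k large-variance entries in each hub row
    cols = rng.choice(n, size=k, replace=False)
    A[i, cols] = rng.normal(0.0, sigma1, size=k)

row_sums = A.sum(axis=1)                         # zero-mean planting: row sums carry no signal here
chi_stat = (A**2 - sigma0**2).sum(axis=1)        # excess squared mass concentrates on hub rows

top_by_chi = set(np.argsort(chi_stat)[-k:])
print("hubs recovered by chi-squared statistic:", len(top_by_chi & set(hubs)), "of", k)
```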
Synthetic turbulent inflow generator using machine learning ; We propose a methodology for generating time-dependent turbulent inflow data with the aid of machine learning (ML), which has the possibility to replace conventional driver simulations or synthetic turbulent inflow generators. As the ML model, we use an autoencoder-type convolutional neural network (CNN) with a multi-layer perceptron (MLP). For the test case, we study a fully developed turbulent channel flow at a friction Reynolds number of $Re_\tau = 180$ for ease of assessment. The ML models are trained using a time series of instantaneous velocity fields in a single cross-section obtained by direct numerical simulation (DNS), so as to output the cross-sectional velocity field at a specified future time instant. From the a priori test, in which the outputs from the trained ML model are recycled to the input, the spatio-temporal evolution of the cross-sectional structure is found to be reasonably well reproduced by the proposed method. The turbulence statistics obtained in the a priori test are also, in general, in reasonable agreement with the DNS data, although some deviation in the flow rate was found. It is also found that the present machine-learned inflow generator is free from the spurious periodicity, unlike the conventional driver DNS in a periodic domain. As an a posteriori test, we perform DNS of inflow-outflow turbulent channel flow with the trained ML model used as a machine-learned turbulent inflow generator (MLTG) at the inlet. It is shown that the present MLTG can maintain the turbulent channel flow for a time period long enough to accumulate turbulence statistics, at a much lower computational cost than the corresponding driver simulation. It is also demonstrated that we can obtain accurate turbulence statistics by properly correcting the deviation in the flow rate.
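The autoregressive "recycling" used in the a priori test can be sketched as follows: the trained model maps the cross-sectional velocity field at one instant to the next, and each output is fed back as the next input. The `identity_model` placeholder stands in for the trained CNN/MLP, which is not reproduced here.

```python
# Sketch of recycling a single-time-step predictor into an inflow time series.
import numpy as np

def identity_model(u: np.ndarray) -> np.ndarray:   # placeholder for the trained network
    return u + 0.01 * np.random.randn(*u.shape)

def generate_inflow(model, u0: np.ndarray, n_steps: int) -> np.ndarray:
    """Return a time series of cross-sectional fields produced by recycling."""
    fields, u = [u0], u0
    for _ in range(n_steps):
        u = model(u)          # prediction at the next time instant
        fields.append(u)      # used as the inflow plane of the main simulation
    return np.stack(fields)

series = generate_inflow(identity_model, np.random.randn(3, 64, 64), n_steps=10)
print(series.shape)  # (11, 3, 64, 64): time, velocity components, y, z
```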
Variational Nested Dropout ; Nested dropout is a variant of the dropout operation that is able to order network parameters or features based on a predefined importance during training. It has been explored for: I. Constructing nested nets: nested nets are neural networks whose architectures can be adjusted instantly during testing time, e.g., based on computational constraints. Nested dropout implicitly ranks the network parameters, generating a set of sub-networks such that any smaller sub-network forms the basis of a larger one. II. Learning ordered representations: nested dropout applied to the latent representation of a generative model (e.g., an autoencoder) ranks the features, enforcing an explicit order of the dense representation over dimensions. However, the dropout rate is fixed as a hyperparameter during the whole training process. For nested nets, when network parameters are removed, the performance decays in a human-specified trajectory rather than in a trajectory learned from data. For generative models, the importance of features is specified as a constant vector, restraining the flexibility of representation learning. To address the problem, we focus on the probabilistic counterpart of nested dropout. We propose a variational nested dropout (VND) operation that draws samples of multi-dimensional ordered masks at a low cost, providing useful gradients to the parameters of nested dropout. Based on this approach, we design a Bayesian nested neural network that learns the order knowledge of the parameter distributions. We further exploit the VND under different generative models for learning ordered latent distributions. In experiments, we show that the proposed approach outperforms the nested network in terms of accuracy, calibration, and out-of-domain detection in classification tasks. It also outperforms the related generative models on data generation tasks.
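The ordered-mask mechanism underlying nested dropout can be sketched as below: draw a cutoff index from a categorical distribution and keep only the first units, so every smaller mask is a prefix of a larger one. In the variational version that distribution is learned; here the probabilities are fixed for illustration.

```python
# Sketch of sampling an ordered (nested) dropout mask.
import numpy as np

def sample_nested_mask(probs: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    """probs[i] = probability that the cutoff falls at unit i+1."""
    d = len(probs)
    b = rng.choice(np.arange(1, d + 1), p=probs / probs.sum())
    mask = np.zeros(d)
    mask[:b] = 1.0            # prefix structure enforces the ordering over units
    return mask

probs = np.geomspace(1.0, 0.1, num=8)     # earlier units are kept more often
print(sample_nested_mask(probs))          # e.g. [1. 1. 1. 0. 0. 0. 0. 0.]
```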
ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression ; Pretrained language models (PLMs) such as BERT adopt a training paradigm which first pretrains the model on general data and then finetunes the model on task-specific data, and have recently achieved great success. However, PLMs are notorious for their enormous parameters and are hard to deploy in real-life applications. Knowledge distillation has been prevailing to address this problem by transferring knowledge from a large teacher to a much smaller student over a set of data. We argue that the selection of three key components, namely the teacher, the training data, and the learning objective, is crucial to the effectiveness of distillation. We therefore propose a four-stage progressive distillation framework, ERNIE-Tiny, to compress PLMs, which varies the three components gradually from the general level to the task-specific level. Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data and the latent distillation loss. Then, General-Enhanced Distillation changes the teacher model from the pretrained teacher to a finetuned teacher. After that, Task-Adaptive Distillation shifts the training data from general data to task-specific data. In the end, Task-Specific Distillation adds two additional losses, namely the Soft-Label and Hard-Label losses, in the last stage. Empirical results demonstrate the effectiveness of our framework and the generalization gain brought by ERNIE-Tiny. In particular, experiments show that a 4-layer ERNIE-Tiny maintains over 98.0% of the performance of its 12-layer teacher BERT-base on the GLUE benchmark, surpassing the state-of-the-art (SOTA) by 1.0% GLUE score with the same amount of parameters. Moreover, ERNIE-Tiny achieves a new compression SOTA on five Chinese NLP tasks, outperforming BERT-base by 0.4% accuracy with 7.5x fewer parameters and 9.4x faster inference speed.
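The loss pieces named in the last stage can be sketched with a standard distillation objective: a soft-label term (KL divergence between temperature-scaled teacher and student distributions) plus a hard-label cross-entropy term. The staging schedule of ERNIE-Tiny and its latent distillation loss are not reproduced; the temperature and weighting are illustrative.

```python
# Sketch of combined soft-label + hard-label distillation losses.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                              # soft-label (teacher) term
    hard = F.cross_entropy(student_logits, labels)    # hard-label (ground-truth) term
    return alpha * soft + (1.0 - alpha) * hard

s, t = torch.randn(4, 3), torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 0])
print(distillation_loss(s, t, y).item())
```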
AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data ; Scoring systems are highly interpretable and widely used to evaluate time-to-event outcomes in healthcare research. However, existing time-to-event scores are predominantly created ad hoc using a few manually selected variables based on clinicians' knowledge, suggesting an unmet need for a robust and efficient generic score-generating method. AutoScore was previously developed as an interpretable machine learning score generator, integrating both machine learning and point-based scores to combine strong discriminability with accessibility. We have further extended it to time-to-event data and developed AutoScore-Survival for automatically generating time-to-event scores with right-censored survival data. A random survival forest provides an efficient solution for selecting variables, and Cox regression is used for score weighting. We illustrate our method in a real-life study of 90-day mortality of patients in intensive care units and compare its performance with survival models (i.e., Cox and the random survival forest). The AutoScore-Survival-derived scoring model was more parsimonious than survival models built using traditional variable selection methods (e.g., the penalized likelihood approach and stepwise variable selection), and its performance was comparable to survival models using the same set of variables. Although AutoScore-Survival achieved a comparable integrated area under the curve of 0.782 (95% CI: 0.767-0.794), the integer-valued time-to-event scores generated are favorable in clinical applications because they are easier to compute and interpret. Our proposed AutoScore-Survival provides an automated, robust and easy-to-use machine learning-based clinical score generator for studies of time-to-event outcomes. It provides a systematic guideline to facilitate the future development of time-to-event scores for clinical applications.
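A simplified sketch of the score-building recipe described above: fit a Cox model on a small set of pre-selected variables and convert the coefficients into integer points. Variable selection by random survival forest, variable categorization, and the method's exact scaling are omitted; the column names and synthetic data are illustrative.

```python
# Sketch: Cox weighting of pre-selected variables, rounded to integer points.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age":      np.random.randint(40, 90, 300),
    "lactate":  np.random.gamma(2.0, 1.5, 300),
    "duration": np.random.exponential(30, 300),    # follow-up time (days)
    "event":    np.random.binomial(1, 0.4, 300),   # 1 = death observed (right-censored otherwise)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")

# Turn coefficients into integer points on a convenient scale; a real scoring
# system would first categorize continuous variables before assigning points.
coefs = cph.params_
points = (coefs / coefs.abs().max() * 10).round().astype(int)
print(points)
```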
An Experimental Evaluation on Deepfake Detection using Deep Face Recognition ; Significant advances in deep learning have obtained hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, also known as deepfakes, posing a threat to privacy, democracy, and national security. Most current deepfake detection methods treat the task as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods are based on detecting visual artifacts, or temporal or color inconsistencies, produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. The reported results suggest a maximum Area Under the Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is lower by 16.6% compared to the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further, on the FaceForensics dataset, an AUC of 0.99 and an EER of 2.04% were obtained. The use of biometric facial recognition technology has the advantage of bypassing the need for a large amount of fake data for model training and obtaining better generalizability to evolving deepfake creation techniques.
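The Equal Error Rate reported above can be computed as sketched below: given similarity scores for genuine (real-vs-real) and impostor (real-vs-deepfake) face-embedding pairs, find the operating point where the false accept and false reject rates cross. The score distributions here are synthetic stand-ins.

```python
# Sketch of EER computation from genuine/impostor similarity scores.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)     # stand-in similarity scores for matched real pairs
impostor = rng.normal(0.4, 0.1, 500)    # stand-in scores for real-vs-deepfake pairs

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # threshold where FAR ~ FRR
print(f"EER ~ {eer:.3f}")
```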