Phase Transitions and a Model Order Selection Criterion for Spectral Graph Clustering ; One of the longstanding open problems in spectral graph clustering (SGC) is the so-called model order selection problem: automated selection of the correct number of clusters. This is equivalent to the problem of finding the number of connected components or communities in an undirected graph. We propose automated model order selection (AMOS), a solution to the SGC model selection problem under a random interconnection model (RIM) using a novel selection criterion that is based on an asymptotic phase transition analysis. AMOS can more generally be applied to discovering hidden block-diagonal structure in symmetric non-negative matrices. Numerical experiments on simulated graphs validate the phase transition analysis, and real-world network data are used to validate the performance of the proposed model selection procedure.
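The model-order question above amounts to counting the near-zero eigenvalues of the graph Laplacian. As a minimal illustration of that connection (a simple eigengap heuristic, not the AMOS phase-transition criterion itself), assuming an unweighted symmetric adjacency matrix:

```python
import numpy as np

def eigengap_order(adjacency, k_max=10):
    """Estimate the number of clusters from the graph Laplacian spectrum:
    the largest gap in the smallest eigenvalues suggests the cluster count."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigvals = np.linalg.eigvalsh(laplacian)[:k_max]  # ascending order
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1  # position of the largest gap

# Two disconnected triangles: Laplacian eigenvalues [0, 0, 3, 3, 3, 3],
# so the largest gap sits after the second eigenvalue.
block = np.ones((3, 3)) - np.eye(3)
A = np.block([[block, np.zeros((3, 3))], [np.zeros((3, 3)), block]])
n_clusters = eigengap_order(A)
print(n_clusters)  # 2
```

For a graph with k connected components the Laplacian has exactly k zero eigenvalues, which is why the gap location recovers the component count exactly in this idealized case.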
Discontinuous phase transition in an annealed multi-state majority-vote model ; In this paper, we generalize the original majority-vote (MV) model with noise from two states to arbitrary q states, where q is an integer no less than two. The main emphasis is placed on the comparison of the nature of phase transitions between the two-state MV (MV2) model and the three-state MV (MV3) model. By extensive Monte Carlo simulation and mean-field analysis, we find that the MV3 model undergoes a discontinuous order-disorder phase transition, in contrast to the continuous phase transition in the MV2 model. A central feature of such a discontinuous transition is strong hysteresis as the noise intensity goes forward and backward. Within the hysteresis region, the disordered phase and the ordered phase coexist.
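The MV update rule described above is simple to simulate. A minimal sketch of the two-state model on a complete graph (an illustrative mean-field-like setting; the paper's lattices, q=3 dynamics, and parameter values are not reproduced here): each chosen spin aligns with the instantaneous majority with probability 1 - noise, and against it otherwise.

```python
import numpy as np

def mv2_magnetization(n=200, noise=0.05, sweeps=200, seed=0):
    """Monte Carlo for the two-state majority-vote model on a complete
    graph; returns the final absolute magnetization per spin."""
    rng = np.random.default_rng(seed)
    spins = np.ones(n)  # start fully ordered
    for _ in range(sweeps * n):
        i = rng.integers(n)
        majority = np.sign(spins.sum() - spins[i])  # majority of the others
        if majority == 0:
            majority = 1.0  # break ties arbitrarily
        spins[i] = majority if rng.random() > noise else -majority
    return abs(spins.mean())

m = mv2_magnetization()
print(m)  # well above zero: low noise keeps the system ordered
```

At low noise the stationary magnetization stays near 1 - 2*noise; raising the noise toward the critical value drives it continuously to zero in the MV2 case.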
Multistage Robust Unit Commitment with Dynamic Uncertainty Sets and Energy Storage ; The deep penetration of wind and solar power is a critical component of the future power grid. However, the intermittency and stochasticity of these renewable resources bring significant challenges to the reliable and economic operation of power systems. Motivated by these challenges, we present a multistage adaptive robust optimization model for the unit commitment (UC) problem, which models the sequential nature of the dispatch process and utilizes a new type of dynamic uncertainty set to capture the temporal and spatial correlations of wind and solar power. The model also considers the operation of energy storage devices. We propose a simplified and effective affine policy for dispatch decisions, and develop an efficient algorithmic framework using a combination of constraint generation and duality-based reformulation with various improvements. Extensive computational experiments show that the proposed method can efficiently solve multistage robust UC problems on the Polish 2736-bus system under high-dimensional uncertainty from 60 wind farms and 30 solar farms. The computational results also suggest that the proposed model leads to significant benefits in both cost and reliability over robust models with traditional uncertainty sets as well as deterministic models with reserve rules.
Hybrid copula mixed models for combining case-control and cohort studies in meta-analysis of diagnostic tests ; Copula mixed models for trivariate or bivariate meta-analysis of diagnostic test accuracy studies, accounting or not for disease prevalence, have been proposed in the biostatistics literature to synthesize information. However, many systematic reviews often include both case-control and cohort studies, so one can either focus on the bivariate meta-analysis of the case-control studies or the trivariate meta-analysis of the cohort studies, as only the latter contains information on disease prevalence. In order to remedy this situation of wasting data, we propose a hybrid copula mixed model via a combination of the bivariate and trivariate copula mixed models for the data from the case-control studies and cohort studies, respectively. Hence, this hybrid model can account for study design and, due to its generality, can also deal with dependence in the joint tails. We apply the proposed hybrid copula mixed model to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma.
Approximate Bayesian Computation and Model Validation for Repulsive Spatial Point Processes ; In many applications involving spatial point patterns, we find evidence of inhibition or repulsion. The most commonly used class of models for such settings are the Gibbs point processes. A recent alternative, at least to the statistical community, is the determinantal point process. Here, we examine model fitting and inference for both of these classes of processes in a Bayesian framework. While standard MCMC model fitting is available, the algorithms are complex and not always well behaved. We propose using approximate Bayesian computation (ABC) for such fitting. This approach becomes attractive because, though likelihoods are very challenging to work with for these processes, generation of realizations given parameter values is relatively straightforward. As a result, the ABC fitting approach is well-suited for these models. In addition, such simulation makes them well-suited for posterior predictive inference as well as for model assessment. We provide details for all of the above along with some simulation investigation and an illustrative analysis of a point pattern of tree data exhibiting repulsion. R code and datasets are included in the supplementary material.
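The key point above, that simulation is easy even when the likelihood is intractable, is what makes rejection ABC work. A toy sketch in numpy (a simple sequential-inhibition process with an assumed inhibition radius, a minimum nearest-neighbour summary statistic, and an assumed uniform prior; not the paper's Gibbs or determinantal models):

```python
import numpy as np

def simulate_ssi(r, n=20, rng=None):
    """Simple sequential inhibition: place up to n points in the unit
    square, rejecting proposals closer than r to an existing point."""
    pts = np.empty((0, 2))
    for _ in range(2000):
        if len(pts) == n:
            break
        p = rng.random(2)
        if len(pts) == 0 or np.hypot(*(pts - p).T).min() >= r:
            pts = np.vstack([pts, p])
    return pts

def min_nn_dist(pts):
    """Summary statistic: minimum nearest-neighbour distance."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

rng = np.random.default_rng(1)
s_obs = min_nn_dist(simulate_ssi(0.08, rng=rng))  # "observed" pattern, true r = 0.08

# ABC rejection: draw r from the prior, simulate, keep draws whose
# simulated summary lands close to the observed one.
accepted = [r for r in rng.uniform(0.0, 0.12, 200)
            if abs(min_nn_dist(simulate_ssi(r, rng=rng)) - s_obs) < 0.02]
posterior_mean = float(np.mean(accepted))
print(posterior_mean)  # concentrates near the true inhibition radius
```

The accepted draws form an approximate posterior sample; shrinking the tolerance and adding summaries (e.g. Ripley's K at several radii) tightens the approximation at higher computational cost.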
Hierarchical Question-Image Co-Attention for Visual Question Answering ; A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling where to look, or visual attention, it is equally important to model what words to listen to, or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolutional neural network (CNN). Our model improves the state of the art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.
Generalized Permutohedra from Probabilistic Graphical Models ; A graphical model encodes conditional independence relations via the Markov properties. For an undirected graph these conditional independence relations can be represented by a simple polytope known as the graph associahedron, which can be constructed as a Minkowski sum of standard simplices. There is an analogous polytope for conditional independence relations coming from a regular Gaussian model, and it can be defined using multi-information or relative entropy. For directed acyclic graphical models and also for mixed graphical models containing undirected, directed and bidirected edges, we give a construction of this polytope, up to equivalence of normal fans, as a Minkowski sum of matroid polytopes. Finally, we apply this geometric insight to construct a new ordering-based search algorithm for causal inference via directed acyclic graphical models.
Performance Analysis of Sparse Recovery Models for Bad Data Detection and State Estimation in Electric Power Networks ; This paper investigates sparse recovery models for bad data detection and state estimation in power networks. Two sparse models, the sparse L1-relaxation model (L1R) and the multi-stage convex relaxation model (Capped-L1), are compared with the weighted least absolute value (WLAV) estimator in terms of bad data processing capacity and computational efficiency. Numerical tests are conducted on power systems with linear and nonlinear measurements. Based on these tests, the paper evaluates the performance of these robust state estimation models. Furthermore, suggestions on how to select the parameters of the sparse recovery models when they are used in electric power networks are also given.
Stochastic stability analysis of a reduced galactic dynamo model with perturbed alpha-effect ; We investigate the asymptotic behaviour of a reduced alpha-Omega dynamo model of magnetic field generation in spiral galaxies, where fluctuation in the alpha-effect results in a system with state-dependent stochastic perturbations. By computing the upper Lyapunov exponent of the linearised model, we can identify regions of instability and stability in probability for the equilibrium of the nonlinear model; in this case the equilibrium solution corresponds to a magnetic field that has undergone catastrophic quenching. These regions are compared to regions of exponential mean-square stability and regions of sub- and supercriticality in the unperturbed linearised model. Prior analysis in the literature, which focuses on these latter regions, does not adequately address the corresponding transition in the nonlinear stochastic model. Finally, we provide a visual representation of the influence of drift non-normality and perturbation intensity on these regions.
MetaboTools: a comprehensive toolbox for analysis of genome-scale metabolic models ; Metabolomic data sets provide a direct readout of cellular phenotypes and are increasingly generated to study biological questions. Our previous work revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. Through this work, which consists of a protocol, a toolbox, and tutorials of two use cases, we make our methods available to the broader scientific community. The protocol describes, in a stepwise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, this protocol constitutes a comprehensive guide to the intra-model analysis of extracellular metabolomic data and a resource offering a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Nontrivial UV behavior of rank-4 tensor field models for quantum gravity ; We investigate the universality classes of rank-4 colored bipartite U(1) tensor field models near the Gaussian fixed point with the functional renormalization group. In a truncation that contains all power-counting relevant and marginal operators, we find a one-dimensional UV attractor that is connected with the Gaussian fixed point. Hence this is the first evidence that the model could be asymptotically safe due to a mechanism similar to the one found in the Grosse-Wulkenhaar model, whose UV behavior near the Gaussian fixed point is also described by a one-dimensional attractor that contains the Gaussian fixed point. However, the cancellation mechanism responsible for the simultaneous vanishing of the beta functions is new to tensor models, i.e. it does not occur in vector or matrix models.
A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems ; User simulation is essential for generating enough data to train a statistical spoken dialogue system. Previous models for user simulation suffer from several drawbacks, such as the inability to take dialogue history into account, the need for a rigid structure to ensure coherent user behaviour, heavy dependence on a specific domain, the inability to output several user intentions during one dialogue turn, or the requirement of a summarized action space for tractability. This paper introduces a data-driven user simulator based on an encoder-decoder recurrent neural network. The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions. The dialogue contexts include information about the machine acts and the status of the user goal. We show on the Dialogue State Tracking Challenge 2 (DSTC2) dataset that the sequence-to-sequence model outperforms an agenda-based simulator and an n-gram simulator, according to F-score. Furthermore, we show how this model can be used on the original action space and thereby model user behaviour with finer granularity.
Temporal Topic Analysis with Endogenous and Exogenous Processes ; We consider the problem of modeling temporal textual data taking endogenous and exogenous processes into account. Such text documents arise in real-world applications, including job advertisements and economic news articles, which are influenced by the fluctuations of the general economy. We propose a hierarchical Bayesian topic model that imposes a group-correlated hierarchical structure on the evolution of topics over time, incorporating both processes, and show that this model can be estimated via Markov chain Monte Carlo sampling methods. We further demonstrate that this model captures the intrinsic relationships between the topic distribution and the time-dependent factors, and compare its performance with latent Dirichlet allocation (LDA) and two other related models. The model is applied to two collections of documents to illustrate its empirical performance: online job advertisements from the DirectEmployers Association and journalists' postings on BusinessInsider.com.
Fundamental partial compositeness ; We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Under certain assumptions on the dynamics of the scalars, successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)_R-like structure, yielding a custodially protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.
Scalar field dark energy with a minimal coupling in a spherically symmetric background ; Dark energy models and modified gravity theories have been actively studied, and the behaviors in the solar system have been carefully investigated for a subset of these models. However, the isotropic solutions of the field equations in the simple models of dark energy, e.g. the quintessence model without matter coupling, have not been well investigated. One of the reasons is the nonlinearity of the field equations. In this paper, a method to evaluate the solutions of the field equations is constructed, and it is shown that there is a model that can easily pass the solar system tests, whereas there is also a model that is constrained by the solar system tests.
Generalizing the Mean Spherical Approximation as a Multiscale, Nonlinear Boundary Condition at the Solute-Solvent Interface ; In this paper we extend the familiar continuum electrostatic model with a perturbation to the usual macroscopic boundary condition. The perturbation is based on the mean spherical approximation (MSA), which we use to derive a multiscale hydration-shell boundary condition (HSBC). We show that the HSBC/MSA model reproduces MSA predictions for Born ions in a variety of polar solvents, including both protic and aprotic solvents. Importantly, the HSBC/MSA model predicts not only solvation free energies accurately but also solvation entropies, which standard continuum electrostatic models fail to predict. The HSBC/MSA model depends only on the normal electric field at the dielectric boundary, similar to our recent development of an HSBC model for charge-sign hydration asymmetry, and the reformulation of the MSA as a boundary condition enables its straightforward application to complex molecules such as proteins.
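The Born-ion baseline that the HSBC/MSA model perturbs is itself a one-line formula: the solvation free energy of a charge in a spherical cavity of radius a in a dielectric of permittivity eps. A minimal sketch (the ionic radius here is an illustrative value, not a fitted parameter from the paper):

```python
from math import pi

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro constant, 1/mol

def born_energy(q, radius_angstrom, eps_solvent):
    """Born solvation free energy (kJ/mol) of a charge q (in units of e)
    in a spherical cavity of the given radius, in a dielectric solvent:
    dG = -(q e)^2 / (8 pi eps0 a) * (1 - 1/eps)."""
    a = radius_angstrom * 1e-10
    dg = -(q * E_CHARGE) ** 2 / (8 * pi * EPS0 * a) * (1 - 1 / eps_solvent)
    return dg * N_A / 1000.0

# A unit charge in a 2 Angstrom cavity in water (eps ~ 78.5):
print(round(born_energy(1, 2.0, 78.5)))  # -343 (kJ/mol)
```

Because the classical Born expression depends on temperature only through eps, it gets solvation entropies badly wrong, which is precisely the deficiency the hydration-shell correction targets.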
Neural Discourse Modeling of Conversations ; Deep neural networks have shown recent promise in many language-related tasks such as the modeling of conversations. We extend RNN-based sequence-to-sequence models to capture the long-range discourse across many turns of conversation. We perform a sensitivity analysis on how much additional context affects performance, and provide quantitative and qualitative evidence that these models are able to capture discourse relationships across multiple utterances. Our results quantify how adding an additional RNN layer for modeling discourse improves the quality of output utterances, and how providing more of the previous conversation as input also improves performance. By searching the generated outputs for specific discourse markers, we show how neural discourse models can exhibit increased coherence and cohesion in conversations.
Qubit Transport Model for Unitary Black Hole Evaporation without Firewalls ; We give an explicit toy qubit transport model for transferring information from the gravitational field of a black hole to the Hawking radiation by a continuous unitary transformation of the outgoing radiation and the black hole gravitational field. The model has no firewalls or other drama at the event horizon, and it avoids a counterargument that has been raised for subsystem transfer models as resolutions of the firewall paradox. Furthermore, it fits the set of six physical constraints that Giddings has proposed for models of black hole evaporation. It does utilize nonlocal qubits for the gravitational field but assumes that the radiation interacts locally with these nonlocal qubits, so in some sense the nonlocality is confined to the gravitational sector. Although the qubit model is too crude to be quantitatively correct for the detailed spectrum of Hawking radiation, it fits qualitatively with what is expected.
Set-Consensus Collections are Decidable ; A natural way to measure the power of a distributed-computing model is to characterize the set of tasks that can be solved in it. In general, however, the question of whether a given task can be solved in a given model is undecidable, even if we only consider the wait-free shared-memory model. In this paper, we address this question for restricted classes of models and tasks. We show that the question of whether a collection C of (l,j)-set consensus objects, for various l (the number of processes that can invoke the object) and j (the number of distinct outputs the object returns), can be used by n processes to solve wait-free k-set consensus is decidable. Moreover, we provide a simple O(n^2) decision algorithm, based on a dynamic programming solution to the Knapsack optimization problem. We then present an adaptive wait-free set-consensus algorithm that, for each set of participating processes, achieves the best level of agreement that is possible using C. Overall, this gives us a complete characterization of a read-write model defined by a collection of set-consensus objects through its set-consensus power.
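The decision procedure above reduces to a Knapsack-style dynamic program. As a generic illustration of that algorithmic style (the classic 0/1 knapsack DP, not the paper's exact recurrence over set-consensus objects):

```python
def knapsack(capacity, items):
    """Classic 0/1 knapsack by dynamic programming: items are
    (weight, value) pairs; returns the best achievable total value."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities in reverse so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Best choice is the (4,5) and (6,8) items: weight 10, value 13.
print(knapsack(10, [(5, 6), (4, 5), (6, 8)]))  # 13
```

The table `best` is the hallmark of such decision procedures: the answer for a budget of c processes (or weight c) is assembled from optimal answers for smaller budgets, which is what yields the O(n^2) bound when the table and item list are both linear in n.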
Spherical collapse model and cluster number counts in power-law f(T) gravity ; We study the spherical collapse model (SCM) in the framework of the spatially flat power-law f(T) ∝ T^b gravity model. We find that the linear and nonlinear growth of spherical overdensities in this particular f(T) model are affected by the power-law parameter b. Finally, we compute the predicted number counts of virialized haloes in order to distinguish the current f(T) model from the expectations of the concordance Lambda cosmology. Specifically, the present analysis suggests that the f(T) gravity model with positive (negative) b predicts more (fewer) virialized objects with respect to LambdaCDM.
Deep Markov Random Field for Image Modeling ; Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations and propose a novel MRF model that uses fully connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and from it derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows the model to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.
Sampling BSSRDFs with non-perpendicular incidence ; Subsurface scattering is key to our perception of translucent materials. Models based on diffusion theory are used to render such materials in a realistic manner by evaluating an approximation of the material BSSRDF at any two points of the surface. Under the assumption of perpendicular incidence, this BSSRDF approximation can be tabulated over 2 dimensions to provide fast evaluation and importance sampling. However, accounting for non-perpendicular incidence with the same approach would require tabulating over 4 dimensions, making the model too large for practical applications. In this report, we present a method to efficiently evaluate and importance sample the multi-scattering component of diffusion-based BSSRDFs for non-perpendicular incidence. Our approach is based on tabulating a compressed angular model of Photon Beam Diffusion. We explain how to generate, evaluate and sample our model. We show that 1 MiB is enough to store a model of the multi-scattering BSSRDF that is within 0.5% relative error of Photon Beam Diffusion. Finally, we present a method to use our model in a Monte Carlo particle tracer and show results of our implementation in PBRT.
A model with interaction of dark components and recent observational data ; In the proposed model with interaction between dark energy and dark matter, we consider cosmological scenarios with different equations of state w_d for dark energy. For both constant and variable equations of state, we analyze solutions for dark energy and dark matter in seven variants of the model. We investigate exact analytic solutions for a constant equation of state w_d, and several variants of the model for variable w_d. These scenarios are tested against current astronomical data from Type Ia Supernovae, baryon acoustic oscillations, the Hubble parameter H(z), and the cosmic microwave background radiation. Finally, we make a statistical comparison of our interacting model with LambdaCDM as well as with some other well-known non-interacting cosmological models.
RANS solver for microscale pollution dispersion problems in areas with vegetation: development and validation ; We present a description and validation of a finite volume solver aimed at solving the problems of microscale urban flows where vegetation is present. The solver is based on the five-equation system of Reynolds-averaged Navier-Stokes (RANS) equations for atmospheric boundary layer flows, complemented by the k-epsilon turbulence model. The vegetation is modelled as a porous zone, and the effects of the vegetation are included in the momentum and turbulence equations. A detailed dry deposition model is incorporated in the pollutant transport equation, allowing the investigation of the filtering properties of urban vegetation. The solver is validated on four test cases to assess the components of the model: the flow and pollutant dispersion around a 2D hill, the temporal evolution of a rising thermal bubble, the flow through and around a forest canopy, and a hedgerow filtering a particle-laden flow. Generally good agreement with measured values or previously computed numerical solutions is observed, although some deficiencies of the model are identified; these are related to the chosen turbulence model and to uncertainties in the vegetation properties.
Character-Level Language Modeling with Hierarchical Recurrent Neural Networks ; Recurrent neural network (RNN) based character-level language models (CLMs) are extremely useful for modeling out-of-vocabulary words by nature. However, their performance is generally much worse than that of word-level language models (WLMs), since CLMs need to consider a longer history of tokens to properly predict the next one. We address this problem by proposing hierarchical RNN architectures, which consist of multiple modules with different timescales. Despite the multi-timescale structure, the input and output layers operate with the character-level clock, which allows existing RNN CLM training approaches to be applied directly without modification. Our CLM models show better perplexity than Kneser-Ney (KN) 5-gram WLMs on the One Billion Word Benchmark with only 2% of the parameters. Also, we present real-time character-level end-to-end speech recognition examples on the Wall Street Journal (WSJ) corpus, where replacing traditional mono-clock RNN CLMs with the proposed models results in better recognition accuracies even though the number of parameters is reduced to 30%.
Understanding Convolutional Neural Networks with A Mathematical Model ; This work attempts to address two fundamental questions about the structure of convolutional neural networks (CNNs): 1) why is a nonlinear activation function essential at the filter output of every convolutional layer? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the REctified-COrrelations on a Sphere (RECOS) model is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent frequently occurring patterns, or spectral components. The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. LeNet-5 and the MNIST dataset are used to illustrate the discussion points. Finally, the RECOS model is generalized to a multilayer system, with AlexNet as an example. Keywords: Convolutional Neural Network (CNN), Nonlinear Activation, RECOS Model, Rectified Linear Unit (ReLU), MNIST Dataset.
Multivariate GARCH with dynamic beta ; We investigate a solution to the problems related to applying multivariate GARCH models to markets with a large number of stocks by restricting the form of the conditional covariance matrix. The model is a factor model that uses only six free GARCH parameters. One factor can be interpreted as the market component; the remaining factors are equal. This allows the analytical calculation of the inverse covariance matrix. The time-dependence of the factors enables the determination of dynamic beta coefficients. We compare the results from our model with those of other GARCH models for daily returns from the S&P 500 market and find that they are competitive. As an application, we use the daily values of the beta coefficients to confirm a transition of the market in 2006. Furthermore, we discuss the relationship of our model to the leverage effect.
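The building block behind such factor models is the univariate GARCH(1,1) variance recursion. A minimal sketch with illustrative parameter values (not the paper's six-parameter multivariate specification):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # a common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01          # synthetic daily returns
s2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.9)
print(s2.mean())  # hovers around omega/(1 - beta) plus the alpha-driven term
```

In the factor setting, one such recursion drives the market-factor variance; a time-varying beta for stock i then follows from the conditional covariance with the market factor divided by the market-factor variance.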
Spatial localization and pattern formation in discrete optomechanical cavities and arrays ; We investigate theoretically the generation of nonlinear dissipative structures in optomechanical (OM) systems containing discrete arrays of mechanical resonators. We consider both hybrid models, in which the optical system is a continuous multimode field, as would happen in an OM cavity containing an array of micromirrors, and also fully discrete models, in which each mechanical resonator interacts with a single optical mode, making contact with Ludwig & Marquardt, Phys. Rev. Lett. 101, 073603 (2013). Also, we study the connections between both types of models and continuous OM models. While all three types of models merge naturally in the limit of a large number of densely distributed mechanical resonators, we show that the spatial localization and pattern formation found in continuous OM models can still be observed for a small number of mechanical elements, even in the presence of finite-size effects, which we discuss. This opens new avenues for experimental approaches to the subject.
A generalized two-component model of solar wind turbulence and ab initio diffusion mean free paths and drift lengthscales of cosmic rays ; We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic (MHD) equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations by, for example, velocity shear and pickup ions is included. Numerical solutions to the new model are obtained using the Cronos framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab initio diffusion mean free paths and drift lengthscales for the transport of cosmic rays in the turbulent solar wind.
Optimal discrimination designs for semiparametric models ; Much of the work in the literature on optimal discrimination designs assumes that the models of interest are fully specified, apart from unknown parameters in some models. Recent work allows errors in the models to be non-normally distributed but still requires the specification of the mean structures. This research is motivated by the interesting work of Otsu (2008) on discriminating among semiparametric models by generalizing the KL-optimality criterion proposed by López-Fidalgo et al. (2007) and Tommasi and López-Fidalgo (2010). In our work we provide further important insights into this optimality criterion. In particular, we propose a practical strategy for finding optimal discrimination designs among semiparametric models that can also be verified using an equivalence theorem. In addition, we study properties of such optimal designs and identify important cases where the proposed semiparametric optimal discrimination designs coincide with the celebrated T-optimal designs.
Birds of a feather or opposites attract: effects in network modelling ; We study properties of some standard network models when the population is split into two types and the connection pattern between the types is varied. The studied models are generalizations of the Erdős-Rényi graph, the configuration model and a preferential attachment graph. For the Erdős-Rényi graph and the configuration model, the focus is on the component structure. We derive expressions for the critical parameter, indicating when there is a giant component in the graph, and study the size of the largest component with the aid of simulations. When the expected degrees in the graph are fixed and the connections are shifted so that more edges connect vertices of different types, we find that the critical parameter decreases. The size of the largest component in the supercritical regime can be both increasing and decreasing as the connections change, depending on the combination of types. For the preferential attachment model, we analyze the degree distributions of the two types and derive explicit expressions for the degree exponents. The exponents are confirmed by simulations that also illustrate other properties of the degree structure.
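The giant-component behaviour described above is easy to probe by simulation. A small sketch of the two-type Erdős-Rényi variant (parameter values are illustrative, not taken from the paper), using union-find to extract the largest component:

```python
import numpy as np
from collections import Counter

def largest_component_fraction(n, p_within, p_between, seed=0):
    """Two-type Erdos-Renyi graph: n vertices of each type, edges within a
    type with prob p_within and between types with prob p_between.
    Returns the size of the largest component as a fraction of 2n."""
    rng = np.random.default_rng(seed)
    parent = list(range(2 * n))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i in range(2 * n):
        for j in range(i + 1, 2 * n):
            p = p_within if (i < n) == (j < n) else p_between
            if rng.random() < p:
                parent[find(i)] = find(j)  # union the two components
    sizes = Counter(find(v) for v in range(2 * n))
    return max(sizes.values()) / (2 * n)

# Mean degree ~3: supercritical, a giant component appears.
giant = largest_component_fraction(300, 3 / 600, 3 / 600)
# Mean degree ~0.5: subcritical, only small components.
small = largest_component_fraction(300, 0.5 / 600, 0.5 / 600)
print(giant, small)
```

Sweeping the split between p_within and p_between while holding the expected degree fixed is exactly the experiment the abstract describes for locating the critical parameter.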
Fractional constitutive equation FACE for nonNewtonian fluid flow Theoretical description ; NonNewtonian fluid flow might be driven by spatially nonlocal velocity, the dynamics of which can be described by promising fractional derivative models. This short communication proposes a space FrActionalorder Constitutive Equation FACE that links viscous shear stress with the velocity gradient, and then interprets physical properties of nonNewtonian fluids for steady pipe flow. Results show that the generalized FACE model contains previous nonNewtonian fluid flow models as endmembers by simply adjusting the order of the fractional index, and a preliminary test shows that the FACE model conveniently captures the observed growth of shear stress for various velocity gradients. Further analysis of the velocity profile, frictional head loss, and Reynolds number using the FACE model also leads to analytical tools and criteria that can significantly extend standard models in quantifying the complex dynamics of nonNewtonian fluid flow with a wide range of spatially nonlocal velocities.
Bayesian analysis of CCDM Models ; Creation of Cold Dark Matter CCDM, in the context of Einstein Field Equations, leads to negative creation pressure, which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical tools, in light of SN Ia data Akaike Information Criterion AIC, Bayesian Information Criterion BIC and Bayesian Evidence BE. These approaches allow comparing models in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJOLambdaCDM model; however, neither of these, nor the Gamma = 3 alpha H0 model, can be discarded from the current analysis. Three other scenarios are discarded, either for poor fitting or for an excess of free parameters.
A Comprehensive Model of Usability ; Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, a comprehensive basis for designing, analyzing, and improving user interfaces is lacking. This paper proposes a 2dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality metamodel that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents.
Dark Matter and Collider Studies in the LeftRight Symmetric Model with VectorLike Leptons ; In the context of a leftright symmetric model, we introduce one full generation of vectorlike lepton doublets both left and righthanded together with their mirror doublets. We show that the lightest vectorlike neutrino in the model is righthanded, and can serve as the dark matter candidate. We find that the relic density, as well as the direct and indirect DM detection bounds, are satisfied for a large range of the parameter space of the model. Within this parameter space, we then explore the possibility of detecting signals of the model at both the LHC and the ILC, in the pair production of the associated vectorlike charged leptons which decay into final states including dark matter. A comprehensive analysis of signal and backgrounds shows that the signals at the ILC, especially with polarized beams, are likely to be visible for light vectorlike leptons, even with low luminosity, rendering our model highly predictive and experimentally testable.
Quantifying Retail Agglomeration using Diverse Spatial Data ; Newly available data on the spatial distribution of retail activities in cities makes it possible to build models formalized at the level of the single retailer. Current models tackle consumer location choices at an aggregate level, and the opportunity that new data offers for modeling at the retail unit level still lacks a theoretical framework. The model we present here helps to address these issues. It is a particular case of the CrossNested Logit model, based on random utility theory, built with the idea of quantifying the role of floor space and agglomeration in retail location choice. We test this model on the city of London; the results are consistent with a superlinear scaling of a retailer's attractiveness with its floor space, and with an agglomeration effect approximated as the total retail floorspace within a 325m radius from each shop.
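The core mechanism, attractiveness scaling with floor space plus an agglomeration term entering a random-utility choice model, can be illustrated with a plain multinomial logit (a simplification of the cross-nested logit actually used; the function name and parameter values alpha and beta are illustrative, not the paper's estimates):

```python
import numpy as np

def choice_probabilities(floorspace, local_floorspace, alpha=1.2, beta=0.3):
    """Multinomial-logit choice probabilities over retailers.
    Utility combines own floor space (superlinear when alpha > 1)
    and the total retail floorspace in the retailer's neighbourhood
    (agglomeration effect)."""
    v = alpha * np.log(floorspace) + beta * np.log(local_floorspace)
    expv = np.exp(v - v.max())          # numerically stabilized softmax
    return expv / expv.sum()

p = choice_probabilities(np.array([100.0, 200.0, 400.0]),
                         np.array([1000.0, 1000.0, 1000.0]))
```

With equal agglomeration terms, the retailer with the largest floor space attracts the highest choice probability, and alpha > 1 makes that advantage superlinear.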
Robust permanence for ecological equations with internal and external feedbacks ; Species experience both internal feedbacks with endogenous factors such as trait evolution and external feedbacks with exogenous factors such as weather. These feedbacks can play an important role in determining whether populations persist or communities of species coexist. To provide a general mathematical framework for studying these effects, we develop a theorem for coexistence for ecological models accounting for internal and external feedbacks. Specifically, we use average Lyapunov functions and Morse decompositions to develop sufficient and necessary conditions for robust permanence, a form of coexistence robust to large perturbations of the population densities and small structural perturbations of the models. We illustrate how our results can be applied to verify permanence in nonautonomous models, structured population models, including those with frequencydependent feedbacks, and models of ecoevolutionary dynamics. In these applications, we discuss how our results relate to previous results for models with particular types of feedbacks.
An Empirical Study of Language CNN for Image Captioning ; Language models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a Language CNN model which is suitable for statistical language modeling tasks and shows competitive performance in image captioning. In contrast to previous models which predict the next word based on one previous word and the hidden state, our language CNN is fed with all the previous words and can model the longrange dependencies of history words, which are critical for image captioning. The effectiveness of our approach is validated on two datasets, MS COCO and Flickr30K. Our extensive experimental results show that our method outperforms the vanilla recurrent neural network based language models and is competitive with the stateoftheart methods.
Jointly Extracting Relations with Class Ties via Effective Deep Ranking ; Connections between relations in relation extraction, which we call class ties, are common. In the distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models either are not effective at modeling this property or ignore it altogether. In this work, to effectively leverage class ties, we propose to perform joint relation extraction with a unified model that integrates a convolutional neural network CNN with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the severe class imbalance problem caused by NR not relation instances during model training. Experiments on a widely used dataset show that leveraging class ties enhances extraction and demonstrate the effectiveness of our model in learning class ties. Our model outperforms the baselines significantly, achieving stateoftheart performance.
Structured Sequence Modeling with Graph Convolutional Recurrent Networks ; This paper introduces Graph Convolutional Recurrent Network GCRN, a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks RNN to data structured by an arbitrary graph. Such structured sequences can represent series of frames in videos, spatiotemporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks CNN on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN, and apply the models to two practical problems predicting moving MNIST data, and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.
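The combination described, a graph convolution for spatial structure feeding a recurrent update for temporal structure, can be sketched as a single minimal cell (this is a plain one-hop-convolution RNN step for illustration only, not the GCRN architectures studied in the paper; all names and shapes are assumptions):

```python
import numpy as np

def gcrn_step(A_hat, X, h, Wx, Wh, U):
    """One step of a minimal graph-convolutional recurrent cell.
    A_hat: (n, n) row-normalized adjacency matrix,
    X: (n, f) node inputs at this time step,
    h: (n, d) previous hidden state per node,
    Wx, Wh, U: learnable weight matrices.
    A one-hop graph convolution aggregates neighbourhood information
    before a vanilla tanh RNN update at every node."""
    conv_x = A_hat @ X @ Wx          # spatial aggregation of inputs
    conv_h = A_hat @ h @ Wh          # spatial aggregation of hidden state
    return np.tanh(conv_x + conv_h + h @ U)
```

Stacking this step over time yields a structured sequence model: each node's state depends on its own history and on its graph neighbourhood.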
Minimal nonuniversal EW extensions of the Standard Model a chiral multiparameter solution ; We report the most general expression for the chiral charges of a nonuniversal U1' with identical charges for the first two families but different charges for the third one. The model is minimal in the sense that only standard model fermions plus righthanded neutrinos are required. By imposing anomaly cancellation and constraints coming from Yukawa couplings we obtain two different solutions. In one of these solutions, the anomalies cancel between fermions in different families. These solutions depend on four independent parameters which prove very useful for model building. We build different benchmark models in order to show the flexibility of the parameterization. We also report LHC and low energy constraints for these benchmark models.
Modelling mass distribution of the Milky Way galaxy using Gaia billionstar map ; The Milky Way galaxy is a typical spiral galaxy which consists of a black hole in its centre, a barred bulge and a disk which contains spiral arms. The complex structure of the Galaxy makes it extremely difficult and challenging to model its mass distribution, particularly for the Galactic disk which plays the most important role in the dynamics and evolution of the Galaxy. Conventionally an axisymmetric disk model with an exponential brightness distribution and a constant masstolight ratio is assumed for the Galactic disk. In order to generate a flat rotation curve, a dark halo also has to be included. Here, by using the recently released Gaia billionstar map, we propose a Galactic disk mass distribution model which is based on the star density distribution rather than the brightness and masstolight ratio. The model is characterized by two parameters, a bulge radius and a characteristic length. Using the mass distribution model and solving the Poisson equation of the Galaxy, we obtain a flat rotation curve which reproduces the key observed features with no need for a dark halo.
Determining H0 with the latest HII galaxy measurements ; We use the latest HII galaxy measurements to determine the value of H0, adopting a combination of modeldependent and modelindependent methods. By constraining five cosmological models, we find that the obtained values of H0 are more consistent with the recent local measurement by Riess et al. 2016 hereafter R16 at the 1sigma confidence level, and that these five models prefer a higher bestfit value of H0 than R16's result. To check the correctness of the H0 values obtained by the modeldependent method, for the first time, we implement modelindependent Gaussian processes GP using the HII galaxy measurements. We find that the GP reconstructions also prefer a higher value of H0 than R16's result. Therefore, we conclude that the current HII galaxy measurements support a higher cosmic expansion rate.
Hybrid Stars in the Framework of different NJL Models ; We compute models for the equation of state EoS of the matter in the cores of hybrid stars. Hadronic matter is treated in the nonlinear relativistic meanfield approximation, and quark matter is modeled by threeflavor local and nonlocal NambuJonaLasinio NJL models with repulsive vector interactions. The transition from hadronic to quark matter is constructed by considering either a soft phase transition Gibbs construction or a sharp phase transition Maxwell construction. We find that highmass neutron stars with masses up to 2.1-2.4 Modot may contain a mixed phase with hadrons and quarks in their cores, if global charge conservation is imposed via the Gibbs conditions. However, if the Maxwell conditions are considered, the appearance of a pure quark matter core either destabilizes the star immediately commonly for nonlocal NJL models or leads to a very short hybrid star branch in the massradius relation generally for local NJL models.
Structured Attention Networks ; Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning endtoend training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard softselection approach, such as attending to partial segmentations or to subtrees. We experiment with two different classes of structured attention networks a linearchain conditional random field and a graphbased parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.
Understanding lowtemperature bulk transport in samarium hexaboride without relying on ingap bulk states ; We present a new model to explain the difference between the transport and spectroscopy gaps in samarium hexaboride SmB6, which has been a mystery for some time. We propose that SmB6 can be modeled as an intrinsic semiconductor with a depletion length that diverges at cryogenic temperatures. In this model, we find a selfconsistent solution to Poisson's equation in the bulk, with boundary conditions based on Fermi energy pinning due to surface charges. The solution yields band bending in the bulk; this explains the difference between the two gaps because spectroscopic methods measure the gap near the surface, while transport measures the average over the bulk. We also connect the model to transport parameters, including the Hall coefficient and thermopower, using semiclassical transport theory. The divergence of the depletion length additionally explains the 10-12 K feature in data for these parameters, demonstrating a crossover from bulkdominated transport above this temperature to surfacedominated transport below this temperature. We find good agreement between our model and a collection of transport data from 4-40 K. This model can also be generalized to materials with similar band structure.
The Game Imitation Deep Supervised Convolutional Networks for Quick Video Game AI ; We present a visiononly model for gaming AI which uses a late integration deep convolutional network architecture trained in a purely supervised imitation learning context. Although stateoftheart deep learning models for video game tasks generally rely on more complex methods such as deep Q-learning, we show that a supervised model which requires substantially fewer resources and training time can already perform well at human reaction speeds on the N64 classic game Super Smash Bros. We frame our learning task as a 30class classification problem, and our CNN model achieves 80% top1 and 95% top3 validation accuracy. With slight testtime finetuning, our model is also competitive during live simulation with the highestlevel AI built into the game. We further show evidence through network visualizations that the network is successfully leveraging temporal information during inference to aid in decision making. Our work demonstrates that supervised CNN models can provide good performance in challenging policy prediction tasks while being significantly simpler and more lightweight than alternatives.
Quartet correlations in NZ nuclei induced by realistic twobody interactions ; Two variational quartet models previously employed in a treatment of pairing forces are extended to the case of a general twobody interaction. One model approximates the nuclear states as a condensate of identical quartets with angular momentum J0 and isospin T0, while the other lets these quartets all differ from each other. With these models we investigate the role of alphalike quartet correlations both in the ground state and in the lowest J0, T0 excited states of eveneven NZ nuclei in the sdshell. We show that the ground state correlations of these nuclei can be described to a good extent in terms of a condensate of alphalike quartets. This turns out to be especially the case for the nucleus 32S, for which the overlap between this condensate and the shell model wave function is found close to one. In the same nucleus, a similar overlap is found also in the case of the first excited 0+ state. By contrast, no clear correspondence is observed between the second excited states of the quartet models and the shell model eigenstates in any of the cases examined.
The bosonized version of the Schwinger model in four dimensions a blueprint for confinement ; For a (3+1)-dimensional generalization of the Schwinger model, we compute the interaction energy between two test charges. The result shows that the static potential profile contains a linear term leading to the confinement of probe charges, exactly as in the original model in two dimensions. We further show that the same 4dimensional model also appears as one version of the B wedge F models in (3+1) dimensions under dualization of Stueckelberglike massive gauge theories. Interestingly, this particular model is characterized by the mixing between a U1 potential and an Abelian 3form field of the type that appears in the topological sector of QCD.
Semiparametric Network Structure Discovery Models ; We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process GP prior on a networkindependent component, and random sparsity and weight matrices as the networkdependent parameters. This approach provides flexible modeling of networkindependent trends in the observations as well as uncertainty quantification around the discovered network structure. We establish a connection between our model and multitask GPs and develop an efficient stochastic variational inference algorithm for it. Furthermore, we formally show that our approach is numerically stable and in fact numerically easy to carry out almost everywhere on the support of the random variables involved. Finally, we evaluate our model on three applications, showing that it outperforms previous approaches. We provide a qualitative and quantitative analysis of the structures discovered for domains such as the study of the full genome regulation of the yeast Saccharomyces cerevisiae.
Deep Forest ; Current deep learning models are mostly built upon neural networks, i.e., multiple layers of parameterized differentiable nonlinear modules that can be trained by backpropagation. In this paper, we explore the possibility of building deep models based on nondifferentiable modules. We conjecture that the mystery behind the success of deep neural networks owes much to three characteristics, i.e., layerbylayer processing, inmodel feature transformation and sufficient model complexity. We propose the gcForest approach, which generates a textitdeep forest holding these characteristics. This is a decision tree ensemble approach, with far fewer hyperparameters than deep neural networks, and its model complexity can be automatically determined in a datadependent way. Experiments show that its performance is quite robust to hyperparameter settings, such that in most cases, even across different data from different domains, it is able to get excellent performance by using the same default setting. This study opens the door to deep learning based on nondifferentiable modules, and exhibits the possibility of constructing deep models without using backpropagation.
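The layer-by-layer idea, where each level of tree ensembles produces class-probability features that augment the input of the next level, can be sketched with scikit-learn (a simplified illustration of the cascade principle, not the gcForest implementation; it omits multi-grained scanning, and the layer count and estimator settings are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def cascade_forest(X, y, n_layers=2):
    """Cascade of forest layers: each layer appends out-of-fold
    class-probability features from two ensembles to the raw input."""
    features = X
    for _ in range(n_layers):
        aug = []
        for Est in (RandomForestClassifier, ExtraTreesClassifier):
            est = Est(n_estimators=50, random_state=0)
            # out-of-fold probabilities act as learned feature transforms
            aug.append(cross_val_predict(est, features, y, cv=3,
                                         method="predict_proba"))
        features = np.hstack([X] + aug)
    final = RandomForestClassifier(n_estimators=100, random_state=0)
    final.fit(features, y)
    return final, features

X, y = make_classification(n_samples=300, random_state=0)
model, feats = cascade_forest(X, y)
```

Because the probability features come from out-of-fold predictions, each layer sees transformed features that are not simply memorized labels, which is what lets the cascade deepen without immediate overfitting.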
Numberconserving interacting fermion models with exact topological superconducting ground states ; We present a method to construct numberconserving Hamiltonians whose ground states exactly reproduce an arbitrarily chosen BCStype meanfield state. Such parent Hamiltonians can be constructed not only for the usual swave BCS state, but also for more exotic states of this form, including the ground states of Kitaev wires and 2D topological superconductors. This method leads to infinite families of locallyinteracting fermion models with exact topological superconducting ground states. After explaining the general technique, we apply this method to construct two specific classes of models. The first one is a onedimensional double wire lattice model with Majoranalike degenerate ground states. The second one is a twodimensional pxipy superconducting model, where we also obtain analytic expressions for topologically degenerate ground states in the presence of vortices. Our models may provide a deeper conceptual understanding of how Majorana zero modes could emerge in condensed matter systems, as well as inspire novel routes to realize them in experiment.
SCYNet Testing supersymmetric models at the LHC with neural networks ; SCYNet SUSY Calculating Yield Net is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed one network has been trained using the parameters of the 11dimensional phenomenological Minimal Supersymmetric Standard Model pMSSM11 as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM11 fits without time penalty. In the second approach, the neural network has been trained using modelindependent signaturerelated objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. While the calculation of the energies and particle multiplicities takes up computation time, the corresponding neural network is more general and can be used to predict the LHC profile likelihood ratio for a wider class of new physics models.
Probabilistic ReducedOrder Modeling for Stochastic Partial Differential Equations ; We discuss a Bayesian formulation to coarsegraining CG of PDEs where the coefficients e.g. material parameters exhibit random, fine scale variability. The direct solution to such problems requires grids that are small enough to resolve this fine scale variability, which unavoidably requires the repeated solution of very large systems of algebraic equations. We establish a physically inspired, datadriven coarsegrained model which learns a low dimensional set of microstructural features that are predictive of the finegrained model FG response. Once learned, those features provide a sharp distribution over the coarse scale effective coefficients of the PDE that are most suitable for prediction of the fine scale model output. This ultimately allows replacing the computationally expensive FG model with a generative probabilistic model based on evaluating the much cheaper CG model several times. Sparsity enforcing priors further increase predictive efficiency and reveal microstructural features that are important in predicting the FG response. Moreover, the model yields probabilistic rather than singlepoint predictions, which enables the quantification of the unavoidable epistemic uncertainty that is present due to the information loss that occurs during the coarsegraining process.
Preliminary Study on BitString Modelling of Opinion Formation in Complex Networks ; Opinion formation has been gaining increasing research interest recently, and various models have been proposed. These models, however, have their limitations, which notably include i it is generally assumed that adjacent nodes holding similar opinions will further reduce their difference in between, while adjacent nodes holding significantly different opinions would either do nothing, or cut the link in between them; ii opinion mutation, which describes opinion changes not due to neighborhood influences in real life, is typically random. While such models enjoy their simplicity and nevertheless help reveal lots of useful insights, they lack the capability of describing many complex behaviors which we may easily observe in real life. In this paper, we propose a new bitstring modeling approach. A preliminary study on the new model demonstrates its great potential in revealing complex behaviors of social opinion evolution and formation.
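A bit-string opinion dynamic of this flavor, where opinions are binary vectors, agents copy bits only from sufficiently similar neighbours, and bits mutate at random, can be sketched as follows (a generic illustration of the modeling idea, not the paper's specific update rule; the threshold and mutation parameters are hypothetical):

```python
import random

def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def step(opinions, adjacency, threshold=2, mutation=0.01, rng=random):
    """One synchronous update. Each agent may copy one differing bit
    from a neighbour whose opinion is within `threshold` Hamming
    distance; every bit also flips independently with probability
    `mutation` (random opinion mutation)."""
    n_bits = len(opinions[0])
    new = [list(op) for op in opinions]
    for i, neigh in enumerate(adjacency):
        close = [j for j in neigh
                 if hamming(opinions[i], opinions[j]) <= threshold]
        if close:
            j = rng.choice(close)
            diff = [k for k in range(n_bits)
                    if opinions[i][k] != opinions[j][k]]
            if diff:
                k = rng.choice(diff)
                new[i][k] = opinions[j][k]
        for k in range(n_bits):
            if rng.random() < mutation:
                new[i][k] ^= 1
    return [tuple(op) for op in new]
```

Unlike scalar-opinion models, the bit-string representation lets influence act on individual opinion components, so partial agreement and multidimensional opinion structure emerge naturally.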
Kahler moduli stabilization in semirealistic magnetized orbifold models ; We study Kahler moduli stabilizations in semirealistic magnetized Dbrane models based on Z2 x Z2' toroidal orbifolds. In type IIB compactifications, 3form fluxes can stabilize the dilaton and complex structure moduli fields, but there remain some massless closed string moduli fields, the Kahler moduli. The magnetic fluxes generate FayetIliopoulos terms, which can fix ratios of Kahler moduli. On top of that, we consider Dbrane instanton effects to stabilize them in concrete Dbrane models and investigate the brane configurations to confirm that the moduli fields can be stabilized successfully. In this paper, we treat two types of Dbrane models. One is based on D9brane systems respecting the PatiSalam model. The other is realized in a D7brane system breaking the PatiSalam gauge group. We find suitable configurations where the Dbrane instantons can stabilize the moduli fields within both types of Dbrane models, explaining an origin of a small constant term of the superpotential which is a key ingredient for successful moduli stabilizations.
Elucidating fluctuating diffusivity in centerofmass motion of polymer models with timeaveraged meansquaredisplacement tensor ; There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here, a novel method to elucidate the fluctuating diffusivity from trajectory data is developed. The timeaveraged mean square displacement MSD, a common tool in singleparticletracking SPT experiments, is generalized into a second order tensor, with which information on both magnitude fluctuations and orientational fluctuations of the diffusivity can be clearly detected. This new method is utilized to analyze the centerofmass motion of four polymer models, the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer, and it is found that these models exhibit distinctly different types of magnitude and orientational fluctuations of the diffusivity. This method of the timeaveraged MSD tensor can be applied also to the trajectory data obtained in SPT experiments.
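The central quantity, the time-averaged MSD generalized into a second-order tensor, is straightforward to compute from trajectory data (a minimal sketch of the general idea; the function name and exact normalization convention are assumptions, not taken from the paper):

```python
import numpy as np

def tamsd_tensor(traj, lag):
    """Time-averaged mean-square-displacement tensor at a given lag.
    traj: (T, d) array of positions; returns a (d, d) matrix whose
    trace equals the ordinary time-averaged MSD at that lag, while
    the off-diagonal and eigenstructure carry orientational
    information about the diffusivity."""
    traj = np.asarray(traj, dtype=float)
    disp = traj[lag:] - traj[:-lag]          # displacements at this lag
    return np.einsum('ti,tj->ij', disp, disp) / len(disp)
```

For isotropic diffusion the eigenvalues of this tensor coincide; magnitude fluctuations of the diffusivity show up in the trace, and orientational fluctuations show up in the rotation of the eigenvectors across trajectory segments.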
Controlled Flavour Changing Neutral Couplings in Two Higgs Doublet Models ; We propose a class of Two Higgs Doublet Models where there are Flavour Changing Neutral Currents FCNC at tree level, but under control due to the introduction of a discrete symmetry in the full Lagrangian. It is shown that in this class of models, one can have simultaneously FCNC in the up and down sectors, in contrast to the situation encountered in BGL models. The intensity of FCNC is analysed and it is shown that in this class of models one can respect all the strong constraints from experiment without unnatural finetuning. It is pointed out that the additional sources of flavour and CP violation are such that they can enhance significantly the generation of the Baryon Asymmetry of the Universe, with respect to the Standard Model.
Modeling spatial processes with unknown extremal dependence class ; Many environmental processes exhibit weakening spatial dependence as events become more extreme. Wellknown limiting models, such as maxstable or generalized Pareto processes, cannot capture this, which can lead to a preference for models that exhibit a property known as asymptotic independence. However, weakening dependence does not automatically imply asymptotic independence, and whether the process is truly asymptotically independent is usually far from clear. The distinction is key as it can have a large impact upon extrapolation, i.e., the estimated probabilities of events more extreme than those observed. In this work, we present a single spatial model that is able to capture both dependence classes in a parsimonious manner, and with a smooth transition between the two cases. The model covers a wide range of possibilities from asymptotic independence through to complete dependence, and permits weakening dependence of extremes even under asymptotic dependence. Censored likelihoodbased inference for the implied copula is feasible in moderate dimensions due to closedform margins. The model is applied to oceanographic datasets with ambiguous true limiting dependence structure.
Structure Preserving Model Reduction of Parametric Hamiltonian Systems ; While reducedorder models ROMs have been popular for efficiently solving large systems of differential equations, the stability of reduced models over longtime integration remains a challenge. We present a greedy approach for ROM generation of parametric Hamiltonian systems that captures the symplectic structure of Hamiltonian systems to ensure stability of the reduced model. Through the greedy selection of basis vectors, two new vectors are added at each iteration to the linear vector space to increase the accuracy of the reduced basis. We use the error in the Hamiltonian due to model reduction as an error indicator to search the parameter space and identify the next best basis vectors. Under natural assumptions on the set of all solutions of the Hamiltonian system under variation of the parameters, we show that the greedy algorithm converges with exponential rate. Moreover, we demonstrate that combining the greedy basis with the discrete empirical interpolation method also preserves the symplectic structure. This enables the reduction of the computational cost for nonlinear Hamiltonian systems. The efficiency, accuracy, and stability of this model reduction technique is illustrated through simulations of the parametric wave equation and the parametric Schrodinger equation.
Bayesian Repulsive Gaussian Mixture Model ; We develop a general class of Bayesian repulsive Gaussian mixture models that encourage wellseparated clusters, aiming at reducing potentially redundant components produced by independent priors for locations such as the Dirichlet process. The asymptotic results for the posterior distribution of the proposed models are derived, including posterior consistency and posterior contraction rate in the context of nonparametric density estimation. More importantly, we show that compared to the independent prior on the component centers, the repulsive prior introduces additional shrinkage effect on the tail probability of the posterior number of components, which serves as a measurement of the model complexity. In addition, an efficient and easytoimplement blockedcollapsed Gibbs sampler is developed based on the exchangeable partition distribution and the corresponding urn model. We evaluate the performance and demonstrate the advantages of the proposed model through extensive simulation studies and real data analysis. The R code is available at https://drive.google.com/open?id=0BzFse0eqxBHZnF5cEhsUFk0cVE.
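The mechanism of a repulsive prior, an independent base prior on component centers multiplied by a pairwise factor that vanishes as two centers coincide, can be written down directly (a generic unnormalized sketch of this prior family, not the specific density used in the paper; the repulsion kernel and scale g0 are illustrative):

```python
import numpy as np

def log_repulsive_prior(centers, base_logpdf, g0=1.0):
    """Unnormalized log density of a repulsive prior on mixture
    component centers: independent base prior times a pairwise
    repulsion factor 1 - exp(-||mu_i - mu_j||^2 / g0), which tends
    to 0 (log density -> -inf) as two centers merge."""
    centers = np.asarray(centers, dtype=float)
    logp = sum(base_logpdf(c) for c in centers)
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d2 = np.sum((centers[i] - centers[j]) ** 2)
            logp += np.log1p(-np.exp(-d2 / g0))
    return logp
```

Penalizing near-coincident centers is what produces the extra shrinkage on the posterior number of components: configurations with redundant, overlapping components receive vanishing prior mass.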
Community Detection and Stochastic Block Models ; The stochastic block model (SBM) is a random graph model with different groups of vertices connecting differently. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational tradeoffs, and for various recovery requirements such as exact, partial and weak recovery. The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives a principled derivation of the main algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semidefinite programming, linearized belief propagation, classical and nonbacktracking spectral methods, and graph powering. Extensions to other block models, such as geometric block models, and a few open problems are also discussed.
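The two-community SBM described above can be sketched in a few lines. The sampler below is an illustrative sketch (function names and parameter choices are assumptions for this example, not taken from the monograph): it draws a symmetric two-block graph and measures the within- and between-block edge densities, the density gap being the signal that recovery algorithms exploit.

```python
import random

def sample_sbm(n, p_in, p_out, seed=0):
    """Sample a symmetric two-community stochastic block model.

    Vertices 0..n/2-1 form community A, the rest community B; each
    within-community edge appears with probability p_in and each
    between-community edge with probability p_out, independently.
    Returns the adjacency matrix as a list of lists of 0/1.
    """
    rng = random.Random(seed)
    half = n // 2
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < half) == (j < half)
            if rng.random() < (p_in if same else p_out):
                adj[i][j] = adj[j][i] = 1
    return adj

def edge_densities(adj):
    """Empirical within- and between-community edge densities."""
    n = len(adj)
    half = n // 2
    win = sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
              if (i < half) == (j < half))
    btw = sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
              if (i < half) != (j < half))
    n_win = half * (half - 1)   # number of within-block pairs (both blocks)
    n_btw = half * half         # number of between-block pairs
    return win / n_win, btw / n_btw
```

For a graph of this size the empirical densities concentrate near `p_in` and `p_out`, so the block structure is visible directly; the interesting regimes surveyed in the monograph are the sparse ones where the gap is much harder to detect.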
The ALF (Algorithms for Lattice Fermions) project release 1.0. Documentation for the auxiliary field quantum Monte Carlo code ; The Algorithms for Lattice Fermions package provides a general code for the finite temperature auxiliary field quantum Monte Carlo algorithm. The code is engineered to be able to simulate any model that can be written in terms of sums of single-body operators, of squares of single-body operators and single-body operators coupled to an Ising field with given dynamics. We provide predefined types that allow the user to specify the model, the Bravais lattice as well as equal time and time displaced observables. The code supports an MPI implementation. Examples such as the Hubbard model on the honeycomb lattice and the Hubbard model on the square lattice coupled to a transverse Ising field are provided and discussed in the documentation. We furthermore discuss how to use the package to implement the Kondo lattice model and the SU(N) Hubbard-Heisenberg model. One can download the code from our Git instance at https://alf.physik.uni-wuerzburg.de and sign in to file issues.
Entanglement entropy and computational complexity of the Anderson impurity model out of equilibrium I: quench dynamics ; We study the growth of entanglement entropy in density matrix renormalization group calculations of the real-time quench dynamics of the Anderson impurity model. We find that with an appropriate choice of basis, the entropy growth is logarithmic in both the interacting and noninteracting single-impurity models. The logarithmic entropy growth is understood from a noninteracting chain model as a critical behavior separating regimes of linear growth and saturation of entropy, corresponding respectively to overlapping and gapped energy spectra of the set of bath states. We find that with an appropriate choice of basis (energy-ordered bath orbitals), logarithmic entropy growth is the generic behavior of quenched impurity models. A noninteracting calculation of a double-impurity Anderson model supports this conclusion in the multi-impurity case. The logarithmic growth of entanglement entropy enables studies of quench dynamics to very long times.
Froissart bound and self-similarity based models of proton structure functions ; The Froissart bound implies that the total proton-proton cross-section, or equivalently the structure function, cannot rise faster than the logarithmic growth $\log^2 s \sim \log^2 (1/x)$, where $s$ is the square of the center of mass energy and $x$ is the Bjorken variable. Compatibility of such behavior with the notion of self-similarity, in a model of the structure function suggested by us some time back, is now generalized to more recent improved self-similarity based models and compared with recent data as well as with the model of Block, Durand, Ha and McKay. Our analysis suggests that Froissart bound compatible self-similarity based models with a $\log^2 (1/x)$ rise are possible in limited $x$-$Q^2$ ranges of HERA data, but their phenomenological ranges of validity are narrower than those of the corresponding models having a power law rise in $1/x$.
Holographic Dark Energy from Fluid-Gravity Duality Constrained by Cosmological Observations ; In this paper, we obtain a holographic model of dark energy using the fluid-gravity duality. This model is dual to a higher dimensional Schwarzschild black hole, and we use the fluid-gravity duality to relate the parameters of this black hole to those of the cosmological model. We will also analyze the thermodynamics of such a solution, and discuss the stability of this model. Finally, we use cosmological data to constrain the parameter space of this dark energy model. Thus, we will use observational data to perform cosmography for this holographic model based on fluid-gravity duality.
Liquid Splash Modeling with Neural Networks ; This paper proposes a new data-driven approach to model detailed splashes for liquid simulations with neural networks. Our model learns to generate small-scale splash detail for the fluid-implicit-particle method using training data acquired from physically parametrized, high resolution simulations. We use neural networks to model the regression of splash formation using a classifier together with a velocity modifier. For the velocity modification, we employ a heteroscedastic model. We evaluate our method for different spatial scales, simulation setups, and solvers. Our simulation results demonstrate that our model significantly improves visual fidelity with a large amount of realistic droplet formation and yields splash detail much more efficiently than finer discretizations.
Automaton model of protein dynamics of conformational and functional states ; In this conceptual paper we propose to explore the analogy between the ontic-epistemic description of quantum phenomena and the interrelation between the dynamics of conformational and functional states of proteins. Another new idea is to apply the theory of automata to model the latter dynamics. In our model a protein's behavior is modeled with the aid of two dynamical systems, ontic and epistemic, which describe the evolution of conformational and functional states of proteins, respectively. The epistemic automaton is constructed from the ontic automaton on the basis of a functional (observational) equivalence relation on the space of ontic states. This is reminiscent of a few approaches to emergent quantum mechanics in which a quantum epistemic state is treated as representing a class of prequantum ontic states. This approach does not match the standard protein structure-function paradigm. However, it is well suited for modeling the behavior of intrinsically disordered proteins. Mathematically, the space of a protein's ontic states (conformational states) is modeled with the aid of p-adic numbers or more general ultrametric spaces encoding the internal hierarchical structure of proteins. The connection with the theory of p-adic dynamical systems is briefly discussed.
A note on MCMC for nested multilevel regression models via belief propagation ; In the quest for scalable Bayesian computational algorithms we need to exploit the full potential of existing methodologies. In this note we point out that message passing algorithms, which are very well developed for inference in graphical models, appear to be largely unexplored for scalable inference in Bayesian multilevel regression models. We show that nested multilevel regression models with Gaussian errors lend themselves very naturally to the combined use of belief propagation and MCMC. Specifically, the posterior distribution of the regression parameters conditionally on covariance hyperparameters is a high-dimensional Gaussian that can be sampled exactly, as well as marginalized, using belief propagation at a cost that scales linearly in the number of parameters and data. We derive an algorithm that works efficiently even for conditionally singular Gaussian distributions, e.g., when there are linear constraints between the parameters at different levels. We show that allowing for such noninvertible Gaussians is critical for belief propagation to be applicable to a large class of nested multilevel models. From a different perspective, the methodology proposed can be seen as a generalization of forward-backward algorithms for sampling to multilevel regressions with tree-structured graphical models, as opposed to the single-branch trees used in classical Kalman filter contexts.
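As a concrete illustration of the marginalize-then-sample idea on a tree, here is a minimal sketch. It is not the paper's algorithm: the model (a one-way Gaussian hierarchy with a flat prior on the global mean) and all names are assumptions for the example. An upward pass integrates out the group means analytically to get the posterior of the root, and a downward pass samples each group mean conditionally, in one linear-cost sweep each way.

```python
import random

def sample_two_level(groups, tau2, sigma2, seed=0):
    """Exact posterior draw of (mu, theta_1..theta_J) in the model

        theta_j ~ N(mu, tau2),   y_ij ~ N(theta_j, sigma2),

    with a flat prior on mu, via one upward (marginalization) and one
    downward (sampling) sweep over the tree -- the belief-propagation
    analogue of forward-filter backward-sample.
    """
    rng = random.Random(seed)
    # Upward pass: each group sends a Gaussian message about mu,
    # obtained by integrating out theta_j analytically:
    # ybar_j | mu ~ N(mu, tau2 + sigma2 / n_j).
    precisions, weighted = [], []
    for y in groups:
        n = len(y)
        prec = 1.0 / (tau2 + sigma2 / n)
        precisions.append(prec)
        weighted.append(prec * sum(y) / n)
    post_prec = sum(precisions)
    mu = rng.gauss(sum(weighted) / post_prec, post_prec ** -0.5)
    # Downward pass: sample each theta_j from its Gaussian conditional
    # given mu (conjugate precision-weighted mean).
    thetas = []
    for y in groups:
        n = len(y)
        prec = 1.0 / tau2 + n / sigma2
        mean = (mu / tau2 + sum(y) / sigma2) / prec
        thetas.append(rng.gauss(mean, prec ** -0.5))
    return mu, thetas
```

The paper's contribution is the general version of this sweep for nested multilevel regressions, including the conditionally singular (noninvertible) Gaussian cases that this toy model avoids.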
Information vs. Uncertainty as the Foundation for a Science of Environmental Modeling ; Information accounting provides a better foundation for hypothesis testing than does uncertainty quantification. A quantitative account of science is derived under this perspective that alleviates the need for epistemic bridge principles, solves the problem of ad hoc falsification criteria, and deals with verisimilitude by facilitating a general approach to processlevel diagnostics. Our argument is that the wellknown inconsistencies of both Bayesian and classical statistical hypothesis tests are due to the fact that probability theory is an insufficient logic of science. Information theory, as an extension of probability theory, is required to provide a complete logic on which to base quantitative theories of empirical learning. The organizing question in this case becomes not whether our theories or models are more or less true, or about how much uncertainty is associated with a particular model, but instead whether there is any information available from experimental data that might allow us to improve the model. This becomes a formal hypothesis test, provides a theory of model diagnostics, and suggests a new approach to building dynamical systems models.
Variational Convergence of Discrete Geometrically-Incompatible Elastic Models ; We derive a continuum model for incompatible elasticity as a variational limit of a family of discrete nearest-neighbor elastic models. The discrete models are based on discretizations of a smooth Riemannian manifold $(M, \mathfrak{g})$, endowed with a flat, symmetric connection $\nabla$. The metric $\mathfrak{g}$ determines local equilibrium distances between neighboring points; the connection $\nabla$ induces a lattice structure shared by all the discrete models. The limit model satisfies a fundamental rigidity property: there are no stress-free configurations, unless $\mathfrak{g}$ is flat, i.e., has zero Riemann curvature. Our analysis focuses on two-dimensional systems; however, all our results readily generalize to higher dimensions.
Introduction to finite mixtures ; Mixture models have been around for over 150 years, as an intuitively simple and practical tool for enriching the collection of probability distributions available for modelling data. In this chapter we describe the basic ideas of the subject, present several alternative representations and perspectives on these models, and discuss some of the elements of inference about the unknowns in the models. Our focus is on the simplest setup, of finite mixture models, but we discuss also how various simplifying assumptions can be relaxed to generate the rich landscape of modelling and inference ideas traversed in the rest of this book.
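The inference workhorse for finite mixtures alluded to above is the EM algorithm. Below is a minimal sketch for a two-component univariate Gaussian mixture (the function name, initialization scheme, and parameter choices are illustrative assumptions, not from the chapter): the E-step computes posterior component responsibilities, and the M-step re-estimates weights, means and variances from the responsibility-weighted data.

```python
import math

def em_gaussian_mixture(data, n_iter=100):
    """Fit a two-component univariate Gaussian mixture by EM.

    Means are initialized at the extremes of the data so the sketch is
    deterministic; a variance floor guards against degenerate components.
    """
    mu = [min(data), max(data)]
    m = sum(data) / len(data)
    var = [sum((x - m) ** 2 for x in data) / len(data)] * 2
    w = [0.5, 0.5]

    def pdf(x, mean, v):
        return math.exp(-(x - mean) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: responsibilities r[k] proportional to w_k N(x; mu_k, var_k).
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted maximum-likelihood updates.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var
```

On well-separated data the responsibilities become nearly hard assignments and the fitted means approach the cluster means; the chapter's discussion of label switching and identifiability explains why such point estimates need care in the Bayesian treatment.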
Learning Distributed Representations of Texts and Entities from Knowledge Base ; We describe a neural network model that jointly learns distributed representations of texts and knowledge base (KB) entities. Given a text in the KB, we train our proposed model to predict entities that are relevant to the text. Our model is designed to be generic, with the ability to address various NLP tasks with ease. We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia. We evaluated the model on three important NLP tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. As a result, we achieved state-of-the-art results on all three of these tasks. Our code and trained models are publicly available for further academic research.
CHAM: action recognition using convolutional hierarchical attention model ; Recently, the soft attention mechanism, which was originally proposed in language processing, has been applied in computer vision tasks like image captioning. This paper presents improvements to the soft attention model by combining a convolutional LSTM with a hierarchical system architecture to recognize action categories in videos. We call this model the Convolutional Hierarchical Attention Model (CHAM). The model applies a convolutional operation inside the LSTM cell and an attention map generation process to recognize actions. The hierarchical architecture of this model is able to reason explicitly on multiple granularities of action categories. The proposed architecture achieved improved results on three publicly available datasets: the UCF sports dataset, the Olympic sports dataset and the HMDB51 dataset.
Competition between Chaotic and Non-Chaotic Phases in a Quadratically Coupled Sachdev-Ye-Kitaev Model ; The Sachdev-Ye-Kitaev (SYK) model is a concrete solvable model for studying non-Fermi liquid properties, holographic duality and maximally chaotic behavior. In this work, we consider a generalization of the SYK model that contains two SYK models with different numbers of Majorana modes coupled by quadratic terms. This model is also solvable, and the solution shows a zero-temperature quantum phase transition between two non-Fermi liquid chaotic phases. This phase transition is driven by tuning the ratio of the two mode numbers, and a Fermi liquid non-chaotic phase sits at the critical point with equal mode numbers. At finite temperature, the Fermi liquid phase expands to a finite regime. More intriguingly, a different non-Fermi liquid phase emerges at finite temperature. We characterize the phase diagram in terms of the spectral function, the Lyapunov exponent and the entropy. Our results illustrate a concrete example of a quantum phase transition and critical regime between two non-Fermi liquid phases.
Neural Models for Documents with Metadata ; Most realworld document collections involve various types of metadata, such as author, source, and date, and yet the most commonlyused approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customization typically requires derivation of a custom inference algorithm. In this paper, we build on recent advances in variational inference methods and propose a general neural framework, based on topic models, to enable flexible incorporation of metadata and allow for rapid exploration of alternative models. Our approach achieves strong performance, with a manageable tradeoff between perplexity, coherence, and sparsity. Finally, we demonstrate the potential of our framework through an exploration of a corpus of articles about US immigration.
Variations of Checking Stack Automata: Obtaining Unexpected Decidability Properties ; We introduce a model of one-way language acceptors (a variant of a checking stack automaton) and show the following decidability properties: (1) the deterministic version has a decidable membership problem but an undecidable emptiness problem; (2) the nondeterministic version has an undecidable membership problem and emptiness problem. For many models of accepting devices, the decidability of the membership problem is the same for the deterministic and nondeterministic versions, and the same holds for the emptiness problem. As far as we know, the model introduced above is the first one-way model to exhibit properties (1) and (2). We define another family of one-way acceptors where the nondeterministic version has an undecidable emptiness problem, but the deterministic version has a decidable emptiness problem. We also know of no other model with this property in the literature. We also investigate decidability properties of other variations of checking stack automata (e.g., allowing multiple stacks, two-way input, etc.). Surprisingly, two-way deterministic machines with multiple checking stacks and multiple reversal-bounded counters are shown to have a decidable membership problem, a very general model with this property.
Renewal-Theoretic Packet Collision Modeling under Long-Tailed Heterogeneous Traffic ; The Internet-of-things (IoT), with its vision of billions of connected devices, is bringing a massively heterogeneous character to wireless connectivity in unlicensed bands. The heterogeneity in medium access parameters, transmit power and activity levels among the coexisting networks leads to detrimental cross-technology interference. The stochastic traffic distributions of an interfering network, shaped under CSMA/CA rules, together with channel fading make it challenging to model and analyze the performance of an interfered network. In this paper, to study the temporal interaction between the traffic distributions of two coexisting networks, we develop a renewal-theoretic packet collision model and derive a generic collision-time distribution (CTD) function of an interfered system. The CTD function holds for any busy- and idle-time distributions of the coexisting traffic. As earlier studies suggest long-tailed idle-time statistics in real environments, the developed model only requires the Laplace transform of long-tailed distributions to find the CTD. Furthermore, we present a packet error rate (PER) model under the proposed CTD and multipath fading of the interfering signals. Using this model, a computationally efficient PER approximation for the interference-limited case is developed to analyze the performance of an interfered link.
Quantum matter bounce with a dark energy expanding phase ; Analyzing quantum cosmological scenarios containing one scalar field with exponential potential, we have obtained a universe model which realizes a classical dust contraction from very large scales, the initial repeller of the model, and moves to a stiff matter contraction near the singularity, which is avoided due to a quantum bounce. The universe is then launched in a stiff matter expanding phase, which then moves to a dark energy era, finally returning to the dust expanding phase, the final attractor of the model. Hence one has obtained a nonsingular cosmological model where a single scalar field can describe both the matter contracting phase of a bouncing model, necessary to give an almost scale invariant spectrum of scalar cosmological perturbations, and a transient expanding dark energy phase. As the universe is necessarily dust dominated in the far past, usual adiabatic vacuum initial conditions can be easily imposed in this era, avoiding the usual issues appearing when dark energy is considered in bouncing models.
Stability Theory of Stochastic Models in Opinion Dynamics ; We consider a certain class of nonlinear maps that preserve the probability simplex, i.e., stochastic maps, that are inspired by the DeGroot-Friedkin model of belief/opinion propagation over influence networks. The corresponding dynamical models describe the evolution of the probability distribution of interacting species. Such models, where the probability transition mechanism depends nonlinearly on the current state, are often referred to as nonlinear Markov chains. In this paper we develop stability results and study the behavior of representative opinion models. The stability certificates are based on the contractivity of the nonlinear evolution in the $\ell_1$ metric. We apply the theory to two types of opinion models where the adaptation of the transition probabilities to the current state is exponential and linear, respectively; both of these can display a wide range of behaviors. We discuss continuous-time and other generalizations.
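A toy sketch of a nonlinear Markov chain with exponential adaptation follows. The specific form $P_{ij}(x) \propto A_{ij} e^{\beta x_j}$, the matrix `A`, and the parameter `beta` are assumptions for the example, not the paper's exact model; the point is only that the transition matrix depends on the current distribution and that contractivity in $\ell_1$ can be checked numerically by iterating from two initial conditions.

```python
import math

def step(x, A, beta):
    """One step of a nonlinear Markov chain on the probability simplex.

    The row-stochastic transition matrix adapts exponentially to the
    current distribution x:  P_ij(x) proportional to A_ij * exp(beta * x_j).
    The update x' = x P(x) is a stochastic (simplex-preserving) map.
    """
    n = len(x)
    P = []
    for i in range(n):
        row = [A[i][j] * math.exp(beta * x[j]) for j in range(n)]
        s = sum(row)
        P.append([v / s for v in row])
    return [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]

def l1(x, y):
    """The l1 distance used for the contraction certificate."""
    return sum(abs(a - b) for a, b in zip(x, y))
```

Iterating `step` from two different initial distributions and watching `l1` shrink gives a numerical check of contractivity in the small-`beta` regime; for larger `beta` the map can leave that regime, matching the "wide range of behaviors" noted in the abstract.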
Synaptic mechanisms of interference in working memory ; Information from preceding trials of cognitive tasks can bias performance in the current trial, a phenomenon referred to as interference. Subjects performing visual working memory tasks exhibit interference in their trial-to-trial response correlations: the recalled target location in the current trial is biased in the direction of the target presented on the previous trial. We present modeling work that (a) develops a probabilistic inference model of this history-dependent bias, and (b) links our probabilistic model to computations of a recurrent network wherein short-term facilitation accounts for the dynamics of the observed bias. Network connectivity is reshaped dynamically during each trial, providing a mechanism for generating predictions from prior trial observations. Applying timescale separation methods, we obtain a low-dimensional description of the trial-to-trial bias based on the history of target locations. The model has response statistics whose mean is centered at the true target location across many trials, typical of such visual working memory tasks. Furthermore, we demonstrate task protocols for which the plastic model performs better than a model with static connectivity: repetitively presented targets are better retained in working memory than targets drawn from uncorrelated sequences.
Cointegration analysis with an application to the debt market in the United States, Canada and Mexico ; Certain theoretical aspects of vector autoregressions (VAR) as tools to model economic time series are reviewed, in particular their capacity to include both short-term and long-term information. The VAR model, in its error correction form, is derived, and the permanent-transitory decomposition of factors proposed by Gonzalo and Granger (1995) is studied. An introductory exposition of estimation theory for reduced rank models, necessary to estimate the error correction model, is given. Cointegration analysis using the VAR model is carried out for government bond interest rates (short, medium and long term) of the United States, Mexico and Canada, with the objective of finding the long-term common factors that drive the system. The error correction model of this system is estimated using Johansen's method. Using this estimation, the permanent-transitory decomposition of the system is calculated. Hypothesis tests are carried out on the permanent factors to determine which of the nine rates studied drive the system.
Anisotropic cosmological models with two fluids ; In this paper, anisotropic dark energy cosmological models have been constructed in a Bianchi-V spacetime with the energy momentum tensor consisting of two noninteracting fluids, namely a bulk viscous fluid and a dark energy fluid. Two different models are constructed based on power law cosmology and the de Sitter universe. The constructed models are also embedded with different pressure gradients along different spatial directions. The variable equation of state (EoS) parameter and the skewness parameters for both models are obtained and analyzed. The physical properties of the models obtained with the use of the scale factors of the power law and de Sitter law are also presented.
Anapole dark matter annihilation into photons ; In models of anapole dark matter (DM), the DM candidate is a Majorana fermion whose primary interaction with standard model (SM) particles is through an anapole coupling to off-shell photons. As such, at tree level, anapole DM undergoes p-wave annihilation into SM charged fermions via a virtual photon. But, generally, Majorana fermions are polarizable, coupling to two real photons. This fact admits the possibility that anapole DM can annihilate into two photons in an s-wave process. Using an explicit model, we compute both the tree-level and diphoton contributions to the anapole DM annihilation cross section. Depending on model parameters, the s-wave process can either rival or be dwarfed by the p-wave contribution to the total annihilation cross section. Subjecting the model to astrophysical upper bounds on the s-wave annihilation mode, we rule out the model with large s-wave annihilation.
A minimalistic pure spinor sigma-model in AdS ; The b-ghost of the pure spinor formalism in a general curved background is not holomorphic. For such theories, the construction of the string measure requires the knowledge of the action of diffeomorphisms on the BV phase space. We construct such an action for the pure spinor sigma-model in $AdS_5\times S^5$. From the point of view of the BV formalism, this sigma-model belongs to the class of theories where the expansion of the Master Action in antifields terminates at the quadratic order. We show that it can be reduced to a simpler degenerate sigma-model, preserving the AdS symmetries. We construct the action of the algebra of worldsheet vector fields on the BV phase space of this minimalistic sigma-model, and explain how to lift it to the original model.
LRS Bianchi type-I cosmological model with constant deceleration parameter in f(R,T) gravity ; A spatially homogeneous anisotropic LRS Bianchi type-I cosmological model is studied in f(R,T) gravity with a special form of Hubble's parameter, which leads to a constant deceleration parameter. The parameters involved in the considered form of the Hubble parameter can be tuned to match our models with the $\Lambda$CDM model. With the present observed value of the deceleration parameter, we have discussed physical and kinematical properties of a specific model. Moreover, we have discussed the cosmological distances for our model.
Backward bifurcation in an SIRS malaria model ; We present a deterministic mathematical model for malaria transmission with waning immunity. The model consists of a system of five nonlinear differential equations. We use the next generation matrix to derive the basic reproduction number $R_0$. The disease free equilibrium is computed, and its local stability is shown by virtue of the Jacobian matrix. Moreover, using Lyapunov function theory and LaSalle's Invariance Principle, we prove that the disease free equilibrium is globally asymptotically stable. Conditions for the existence of an endemic equilibrium point are established. A qualitative study based on bifurcation theory reveals that backward bifurcation occurs in the model: the stable disease free equilibrium of the model coexists with a stable endemic equilibrium when $R_0 < 1$. Furthermore, we show that bringing the malaria-induced death rate below some threshold is sufficient to eliminate backward bifurcation in the model.
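The next-generation-matrix computation of $R_0$ can be illustrated on a reduced two-compartment host-vector caricature (the paper's model has five equations; the function and parameter names here are illustrative assumptions): with new-infection matrix $F$ and removal matrix $V$ restricted to the infected compartments, $R_0 = \rho(FV^{-1})$.

```python
import math

def r0_next_generation(beta_hv, beta_vh, gamma_h, mu_v):
    """Basic reproduction number for a toy host-vector model via the
    next-generation matrix, R0 = rho(F V^{-1}).

    Infected compartments (I_h, I_v), with
        F = [[0, beta_hv], [beta_vh, 0]]   (new infections)
        V = diag(gamma_h, mu_v)            (removals),
    so F V^{-1} has eigenvalues +/- sqrt(beta_hv*beta_vh/(gamma_h*mu_v)).
    """
    K = [[0.0, beta_hv / mu_v],
         [beta_vh / gamma_h, 0.0]]
    # Spectral radius of the 2x2 next-generation matrix K = F V^{-1},
    # via the closed-form eigenvalues of a 2x2 matrix.
    tr = K[0][0] + K[1][1]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))
```

In the caricature, $R_0 = \sqrt{\beta_{hv}\beta_{vh}/(\gamma_h \mu_v)}$; the backward-bifurcation phenomenon in the abstract is precisely the failure of the naive reading of the $R_0 < 1$ threshold as sufficient for elimination.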
MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network ; The inability to interpret model predictions in semantically and visually meaningful ways is a well-known shortcoming of most existing computer-aided diagnosis methods. In this paper, we propose MDNet to establish a direct multimodal mapping between medical images and diagnostic reports that can read images, generate diagnostic reports, retrieve images by symptom descriptions, and visualize attention, to provide justifications of the network's diagnosis process. MDNet includes an image model and a language model. The image model is proposed to enhance multi-scale feature ensembles and utilization efficiency. The language model, integrated with our improved attention mechanism, aims to read and explore discriminative image feature descriptions from reports to learn a direct mapping from sentence words to image pixels. The overall network is trained end-to-end using our developed optimization strategy. Based on a pathology bladder cancer image and diagnostic report (BCIDR) dataset, we conduct sufficient experiments to demonstrate that MDNet outperforms comparative baselines. The proposed image model also obtains state-of-the-art performance on two CIFAR datasets.
Learning the MMSE Channel Estimator ; We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the MMSE channel estimator can be reduced to $O(M \log M)$ floating point operations, where $M$ is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model. Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.
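For reference, the linear MMSE estimator that such architectures are built around has the closed form $\hat{x} = C(C + \sigma^2 I)^{-1} y$ for the observation model $y = x + n$, $x \sim \mathcal{N}(0, C)$, $n \sim \mathcal{N}(0, \sigma^2 I)$. The two-dimensional toy version below is an illustration with a hand-rolled matrix inverse, not the paper's structured $O(M \log M)$ implementation.

```python
def lmmse_estimate(C, sigma2, y):
    """Linear MMSE estimate of a zero-mean Gaussian vector x from
    y = x + n, n ~ N(0, sigma2*I):  x_hat = C (C + sigma2*I)^{-1} y.

    Minimal 2-dimensional sketch; large structured problems would
    exploit Toeplitz/shift-invariance instead of explicit inversion.
    """
    # Explicit 2x2 inverse of (C + sigma2*I).
    a, b = C[0][0] + sigma2, C[0][1]
    c, d = C[1][0], C[1][1] + sigma2
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    # Wiener filter W = C (C + sigma2*I)^{-1}.
    W = [[sum(C[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return [W[0][0] * y[0] + W[0][1] * y[1],
            W[1][0] * y[0] + W[1][1] * y[1]]
```

Two sanity checks follow from the formula: with $C = I$ and $\sigma^2 = 1$ the estimate is $y/2$ (equal signal and noise power), and as $\sigma^2 \to 0$ the estimate approaches $y$ itself.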
A variational method for integrability-breaking Richardson-Gaudin models ; We present a variational method for approximating the ground state of spin models close to Richardson-Gaudin integrability. This is done by variationally optimizing eigenstates of integrable Richardson-Gaudin models, where the toolbox of integrability allows for an efficient evaluation and minimization of the energy functional. The method is shown to return exact results for integrable models and to improve substantially on perturbation theory for models close to integrability. For large integrability-breaking interactions, it is shown how avoided level crossings necessitate the use of excited states of integrable Hamiltonians in order to accurately describe the ground states of general nonintegrable models.
A Sequential Model for Classifying Temporal Relations between Intra-Sentence Events ; We present a sequential model for temporal relation classification between intra-sentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events, which aligns well with the dependency path between the two event mentions. The context word sequence, together with a part-of-speech tag sequence and a dependency relation sequence generated corresponding to the word sequence, is then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on the TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, outperforming a neural net model that uses various discrete features as input in imitation of previous feature-based models.
On the Poisson Trick and its Extensions for Fitting Multinomial Regression Models ; This article is concerned with the fitting of multinomial regression models using the so-called Poisson Trick. The work is motivated by Chen and Kuo (2001) and Malchow-Moller and Svarer (2003), which have been criticized for being computationally inefficient and sometimes producing nonsensical results. We first discuss the case of independent data and offer a parsimonious fitting strategy when all covariates are categorical. We then propose a new approach for modelling correlated responses based on an extension of the Gamma-Poisson model, where the likelihood can be expressed in closed form. The parameters are estimated via an Expectation-Conditional Maximization (ECM) algorithm, which can be implemented using functions for fitting generalized linear models readily available in standard statistical software packages. Compared to existing methods, our approach avoids the need to approximate the intractable integrals, and thus the inference is exact with respect to the approximating Gamma-Poisson model. The proposed method is illustrated via a reanalysis of the yogurt data discussed by Chen and Kuo (2001).
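The identity underlying the Poisson trick, that independent Poisson counts conditioned on their total are multinomial, can be checked numerically. In log form, with $N = \sum_k y_k$, $T = \sum_k \mu_k$ and $p_k = \mu_k / T$,
$\sum_k \log \mathrm{Pois}(y_k; \mu_k) = \log \mathrm{Mult}(y; N, p) + \log \mathrm{Pois}(N; T)$.
The sketch below (function names are illustrative, not from the article) evaluates both sides.

```python
import math

def log_poisson(y, mu):
    """Log Poisson pmf: y*log(mu) - mu - log(y!)."""
    return y * math.log(mu) - mu - math.lgamma(y + 1)

def log_multinomial(y, p):
    """Log multinomial pmf for counts y with cell probabilities p."""
    n = sum(y)
    return (math.lgamma(n + 1) - sum(math.lgamma(yk + 1) for yk in y)
            + sum(yk * math.log(pk) for yk, pk in zip(y, p)))

def check_poisson_trick(y, mu):
    """Return both sides of the Poisson-multinomial factorization:

        sum_k log Pois(y_k; mu_k)
          = log Mult(y; N, p) + log Pois(N; sum_k mu_k),

    with N = sum_k y_k and p_k = mu_k / sum_k mu_k.
    """
    lhs = sum(log_poisson(yk, mk) for yk, mk in zip(y, mu))
    total = sum(mu)
    p = [mk / total for mk in mu]
    rhs = log_multinomial(y, p) + log_poisson(sum(y), total)
    return lhs, rhs
```

Because the total-count factor does not involve the cell probabilities, maximizing the Poisson log-likelihood over the category parameters is equivalent to maximizing the multinomial one, which is what makes standard Poisson GLM software usable for multinomial fits.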
Standard Model vacuum decay in a de Sitter background ; We present a calculation of thick-wall Coleman-de Luccia (CdL) bounces in the Standard Model effective potential in a de Sitter background. The calculation is performed including the effect of the bounce backreaction on the metric, which we compare with the case of a fixed de Sitter background and with a similar full-backreaction calculation in a model polynomial potential. The results show that the Standard Model potential exhibits nontrivial behavior: rather than a single CdL solution, there are multiple non-oscillating bounce solutions which may contribute to the decay rate. All the extra solutions found have higher actions than the largest-amplitude solution, and thus would not contribute significantly to the decay rate, but their existence demonstrates that CdL solutions in the Standard Model potential are not unique, and the existence of additional, lower-action solutions cannot be ruled out. This suggests that a better understanding of the appearance and disappearance of CdL solutions in de Sitter space is needed to fully understand the vacuum instability issue in the Standard Model.
Convergence of the Levenberg-Marquardt Method for the Inverse Problem with an Interior Measurement ; The convergence of the Levenberg-Marquardt method is discussed for the inverse problem of reconstructing the storage modulus and loss modulus of the so-called scalar model from a single interior measurement. The scalar model is the simplest model for data analysis, used as the modeling partial differential equation in the diagnostic modality called magnetic resonance elastography, which is used to diagnose, for instance, liver cancer. The convergence of the method is proved by showing that the measurement map, which maps the unknown moduli to the measured data, satisfies the so-called tangential cone condition. The argument of the proof is quite general and can in principle be applied to any similar inverse problem of reconstructing the unknown coefficients of a model equation given as a partial differential equation of divergence form from a single interior measurement. The performance of the method is numerically tested for the two-layered piecewise homogeneous scalar model in a rectangular domain.
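As a concrete reminder of the iteration whose convergence is being analyzed, here is a bare-bones Levenberg-Marquardt loop on a toy exponential-fitting problem; the model, data, and damping schedule are illustrative stand-ins, not the elastography problem from the paper:

```python
import math

def lm_fit(xs, ys, a=1.0, b=0.0, mu=1e-2, iters=100):
    """Levenberg-Marquardt for the toy model y = a*exp(b*x)."""
    for _ in range(iters):
        # residuals and Jacobian of r_i = a*exp(b*x_i) - y_i
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        # damped normal equations: (J^T J + mu*I) delta = -J^T r
        A11 = sum(j1 * j1 for j1, _ in J) + mu
        A12 = sum(j1 * j2 for j1, j2 in J)
        A22 = sum(j2 * j2 for _, j2 in J) + mu
        g1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        g2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = A11 * A22 - A12 * A12
        da = (-g1 * A22 + g2 * A12) / det
        db = (g1 * A12 - g2 * A11) / det
        a_new, b_new = a + da, b + db
        # accept the step only if the residual norm shrinks; adapt damping
        r_new = [a_new * math.exp(b_new * x) - y for x, y in zip(xs, ys)]
        if sum(ri * ri for ri in r_new) < sum(ri * ri for ri in r):
            a, b, mu = a_new, b_new, mu / 3
        else:
            mu *= 3
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # noise-free data from a=2, b=0.7
a_hat, b_hat = lm_fit(xs, ys)
assert abs(a_hat - 2.0) < 1e-3 and abs(b_hat - 0.7) < 1e-3
```

The tangential cone condition discussed in the abstract is, roughly, what guarantees that the linearization used in the normal equations above is a faithful enough approximation for such an iteration to converge for the PDE-constrained measurement map.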
A Latent Variable Model for Two-Dimensional Canonical Correlation Analysis and its Variational Inference ; Describing dimension reduction (DR) techniques by means of probabilistic models has recently received special attention. Probabilistic models, in addition to offering better interpretability of DR methods, provide a framework for further extensions of such algorithms. One of the newer aims of probabilistic DR methods is to preserve the internal structure of the data: the data need not first be converted from matrix or tensor format to vector format in the process of dimensionality reduction. In this paper, a latent variable model for matrix-variate data for canonical correlation analysis (CCA) is proposed. Since, in general, there is no analytical maximum likelihood solution for this model, we present two approaches for learning the parameters. The proposed methods are evaluated using synthetic data in terms of convergence and quality of mappings. A real data set is also employed to assess the proposed methods against several probabilistic and non-probabilistic CCA-based approaches. The results confirm the superiority of the proposed methods with respect to the competing algorithms. Moreover, this model can be considered as a framework for further extensions.
A Two-Factor Forward Curve Model with Stochastic Volatility for Commodity Prices ; We describe a model for evolving commodity forward prices that incorporates three important dynamics which appear in many commodity markets: mean reversion in spot prices and the resulting Samuelson effect on the volatility term structure, decorrelation of moves at different points on the forward curve, and implied volatility skew and smile. This is a forward curve model, describing the stochastic evolution of forward prices, rather than a spot model that models the evolution of the spot commodity price. Two Brownian motions drive moves across the forward curve, with a third Heston-like stochastic volatility process scaling the instantaneous volatilities of all forward prices. In addition to an efficient numerical scheme for calculating European vanilla and early-exercise option prices, we describe an algorithm for Monte Carlo-based pricing of more generic derivative payoffs, which involves an efficient approximation for the risk-neutral drift that avoids having to simulate drifts for every forward settlement date required for pricing.
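A stripped-down Monte Carlo sketch of dynamics in the spirit described above: one exponentially damped factor (producing the Samuelson effect), one flat factor (decorrelating the curve), and an independent CIR variance process scaling both. All parameter values are invented for illustration, and the paper's drift approximation and skew calibration are omitted; the log-Euler step keeps each forward a martingale, which the assertion checks.

```python
import math
import random

def simulate_terminal_forwards(F0=100.0, T=1.0, sigma1=0.30, sigma2=0.15,
                               beta=1.5, kappa=2.0, theta=1.0, xi=0.5,
                               n_paths=10000, steps=50, seed=0):
    """Monte Carlo of F_T(T) under a toy two-factor forward model:
       dF/F = sqrt(v) * (sigma1*exp(-beta*(T-t)) dW1 + sigma2 dW2)
       dv   = kappa*(theta - v) dt + xi*sqrt(v) dW3   (full truncation)."""
    rng = random.Random(seed)
    dt = T / steps
    out = []
    for _ in range(n_paths):
        F, v = F0, theta
        for k in range(steps):
            t = k * dt
            vp = max(v, 0.0)
            # instantaneous vol of this forward: damped factor + flat factor
            sig = math.sqrt(vp) * math.hypot(sigma1 * math.exp(-beta * (T - t)),
                                             sigma2)
            z = rng.gauss(0.0, 1.0)
            # log-Euler step: E[F_{k+1} | F_k, v_k] = F_k exactly
            F *= math.exp(-0.5 * sig * sig * dt + sig * math.sqrt(dt) * z)
            # CIR variance, independent of the price shocks in this sketch
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * rng.gauss(0.0, 1.0)
        out.append(F)
    return out

paths = simulate_terminal_forwards()
mean_F = sum(paths) / len(paths)
assert abs(mean_F - 100.0) < 1.5   # the forward is (approximately) a martingale
```

Note that because each forward settlement date T has its own damping term exp(-beta*(T-t)), naively simulating drifts for every settlement date is expensive; this is the cost the paper's drift approximation is designed to avoid.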
Training Deep AutoEncoders for Collaborative Filtering ; This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that a) deep autoencoder models generalize much better than shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent overfitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering data. The new algorithm significantly speeds up training and improves model performance. Our code is available at https://github.com/NVIDIA/DeepRecommender
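The masked loss and output re-feeding can be illustrated without any deep learning framework. Below, a single scalar weight stands in for the autoencoder, purely to show the two-pass training loop: pass 1 backpropagates only through observed ratings, and pass 2 treats the model's own dense output as a new, fully observed training example. This is a pedagogical sketch, not the paper's 6-layer architecture.

```python
# Sparse ratings for one user: None marks unobserved entries.
ratings = [4.0, None, 3.5, None, 5.0]

w = 0.2          # scalar stand-in for the autoencoder's weights
lr = 0.01
for epoch in range(500):
    x = [r if r is not None else 0.0 for r in ratings]
    mask = [r is not None for r in ratings]
    # pass 1: masked MSE -- gradients flow only through observed entries
    pred = [w * xi for xi in x]
    grad = sum(2 * (p - xi) * xi for p, xi, m in zip(pred, x, mask) if m)
    w -= lr * grad
    # pass 2 (re-feeding): the dense output becomes a fully observed example
    y = [w * xi for xi in x]
    grad = sum(2 * (w * yi - yi) * yi for yi in y)
    w -= lr * grad

# the reconstruction converges to the identity on observed ratings
assert abs(w - 1.0) < 1e-3
```

The re-feeding pass gives the model a dense training signal even though the original rating vectors are mostly empty, which is the sparsity workaround the abstract refers to.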
Minimum message length inference of the Poisson and geometric models using heavy-tailed prior distributions ; Minimum message length (MML) is a general Bayesian principle for model selection and parameter estimation that is based on information theory. This paper applies the minimum message length principle to a small-sample model selection problem involving Poisson and geometric data models. Since MML is a Bayesian principle, it requires prior distributions for all model parameters. We introduce three candidate prior distributions for the unknown model parameters, with both light and heavy tails. The performance of the MML methods is compared with objective Bayesian inference and minimum description length techniques based on the normalized maximum likelihood code. Simulations show that our MML approach with a heavy-tailed prior distribution provides excellent performance in all tests.
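A drastically simplified sketch of codelength-based selection between the two models, using a crude two-part approximation (negative log-likelihood plus a 0.5 log n penalty per parameter) in place of the paper's MML87 codelengths and heavy-tailed priors; the data are invented under-dispersed counts, for which the Poisson model should win:

```python
import math

def poisson_nll(data):
    lam = sum(data) / len(data)               # MLE of the Poisson rate
    return sum(lam - y * math.log(lam) + math.lgamma(y + 1) for y in data)

def geometric_nll(data):
    p = 1.0 / (1.0 + sum(data) / len(data))   # MLE, support {0, 1, 2, ...}
    return -sum(y * math.log(1 - p) + math.log(p) for y in data)

def message_length(nll, n, k=1):
    # crude two-part code: data part + ~0.5*log(n) nats per free parameter
    return nll + 0.5 * k * math.log(n)

data = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]         # under-dispersed counts
n = len(data)
lengths = {"poisson": message_length(poisson_nll(data), n),
           "geometric": message_length(geometric_nll(data), n)}
best = min(lengths, key=lengths.get)          # shortest message wins
assert best == "poisson"
```

Genuine MML codelengths additionally encode the parameter to an optimal precision under an explicit prior, which is where the choice of light- versus heavy-tailed priors studied in the paper enters.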
Bayesian Network-Regularized Regression for Modeling Urban Crime Occurrences ; This paper considers the problem of statistical inference and prediction for processes defined on networks. We assume that the network is known and measures similarity, and our goal is to learn about an attribute associated with its vertices. Classical regression methods are not immediately applicable to this setting, as we would like our model to incorporate information from both the network structure and pertinent covariates. Our proposed model consists of a generalized linear model with vertex-indexed predictors and a basis expansion of their coefficients, allowing the coefficients to vary over the network. We employ a regularization procedure, cast as a prior distribution on the regression coefficients under a Bayesian setup, so that the predicted responses vary smoothly according to the topology of the network. We motivate the need for this model by examining occurrences of residential burglary in Boston, Massachusetts. Noting that crime rates are not spatially homogeneous and that the rates appear to vary sharply across regions in the city, we construct a hierarchical model that addresses these issues and gives insight into spatial patterns of crime occurrences. Furthermore, we examine efficient expectation-maximization fitting algorithms and provide computationally friendly methods for eliciting hyperprior parameters.
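In the simplest Gaussian case, this kind of network smoothness prior amounts to ridge-type shrinkage through the graph Laplacian: the posterior mode solves (I + lambda*L) beta = beta_OLS. A minimal sketch on a 3-vertex path graph with invented numbers (one observation per vertex, unit covariate), not the Boston burglary model:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Path graph 1-2-3: Laplacian L = D - W
L = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
y = [1.0, 5.0, 3.0]   # per-vertex OLS estimates (one obs per vertex, x_v = 1)
lam = 1.0

# ridge-type solution beta = (I + lam*L)^{-1} y, i.e. a Gaussian MRF prior
A = [[(1.0 if i == j else 0.0) + lam * L[i][j] for j in range(3)] for i in range(3)]
beta = solve(A, y)

# smoothing pulls neighbouring coefficients toward each other ...
assert max(beta) - min(beta) < max(y) - min(y)
# ... while preserving the overall level (L annihilates constant vectors)
assert abs(sum(beta) - sum(y)) < 1e-9
```

The hierarchical model in the paper replaces the fixed lambda with hyperpriors and fits by expectation-maximization, but the shrinkage mechanism is the same: the prior penalizes beta' L beta, the sum of squared differences across edges.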
Classification of Radiology Reports Using Neural Attention Models ; The electronic health record (EHR) contains a large amount of multi-dimensional and unstructured clinical data of significant operational and research value. Distinguished from previous studies, our approach embraces a double-annotated dataset and strays away from obscure black-box models toward comprehensible deep learning models. In this paper, we present a novel neural attention mechanism that not only classifies clinically important findings but also reveals the report terms that drive each classification. Specifically, convolutional neural networks (CNN) with attention analysis are used to classify radiology head computed tomography reports based on five categories that radiologists would account for in assessing acute and communicable findings in daily practice. The experiments show that our CNN attention models outperform non-neural models, especially when trained on a larger dataset. Our attention analysis demonstrates the intuition behind the classifier's decision by generating a heatmap that highlights attended terms used by the CNN model; this is valuable when potential downstream medical decisions are to be performed by human experts or the classifier information is to be used in cohort construction, such as for epidemiological studies.